1 00:00:04,920 --> 00:00:07,600 Speaker 1: On this episode of Newt's World, we're living in a 2 00:00:07,640 --> 00:00:11,600 Speaker 1: time of transformation that may rival any other time in 3 00:00:11,680 --> 00:00:15,760 Speaker 1: human history. We have a big problem. The narrative that 4 00:00:15,880 --> 00:00:20,720 Speaker 1: surrounds the biggest and most groundbreaking technologies of today is 5 00:00:20,760 --> 00:00:25,320 Speaker 1: one of pessimism and fear. While technology holds the power 6 00:00:25,400 --> 00:00:29,320 Speaker 1: to transform our economy and our lives, we often throttle 7 00:00:29,400 --> 00:00:34,160 Speaker 1: technological breakthroughs before they can fulfill their life-changing potential. 8 00:00:34,960 --> 00:00:38,839 Speaker 1: Fueled by a mix of cultural anxieties and policy changes, 9 00:00:39,320 --> 00:00:43,640 Speaker 1: this fear-based approach risks denying humans an abundant future. 10 00:00:44,440 --> 00:00:48,479 Speaker 1: My guest today says, quote, our leaders are unprepared to 11 00:00:48,520 --> 00:00:52,560 Speaker 1: address the rapid pace of technological change in a positive way. 12 00:00:53,640 --> 00:00:58,800 Speaker 1: The Abundance Institute is a new, mission-driven nonprofit organization 13 00:00:59,360 --> 00:01:05,280 Speaker 1: that is focused on creating space for emerging technologies to grow, thrive, 14 00:01:05,800 --> 00:01:09,000 Speaker 1: and have a chance to reach their full potential. Here 15 00:01:09,200 --> 00:01:13,000 Speaker 1: to discuss his new organization, I'm really pleased to welcome 16 00:01:13,040 --> 00:01:16,560 Speaker 1: my guest, Neil Chilson. He is the new leader of 17 00:01:16,680 --> 00:01:33,399 Speaker 1: Artificial Intelligence policy at the Abundance Institute. Neil, welcome and 18 00:01:33,560 --> 00:01:35,240 Speaker 1: thank you for joining me on Newt's World. 19 00:01:35,920 --> 00:01:38,040 Speaker 2: Thanks so much for having me. It's great to be here. 20 00:01:38,440 --> 00:01:42,080 Speaker 1: Can you talk about your background in artificial intelligence? 21 00:01:43,200 --> 00:01:46,839 Speaker 2: Sure. So I have a master's degree in computer science, an 22 00:01:46,920 --> 00:01:50,280 Speaker 2: undergrad and master's degree in computer science. And when I 23 00:01:50,320 --> 00:01:52,920 Speaker 2: was in grad school, I focused on a couple different things, 24 00:01:52,960 --> 00:01:54,560 Speaker 2: but did some work on what are known as 25 00:01:54,640 --> 00:01:57,680 Speaker 2: agent-based systems. Now, when I was in grad school 26 00:01:57,880 --> 00:02:01,760 Speaker 2: in the early two thousands, none of the huge breakthroughs 27 00:02:01,760 --> 00:02:05,080 Speaker 2: that have happened in machine learning had happened yet. Some of them 28 00:02:05,080 --> 00:02:08,120 Speaker 2: were maybe on the threshold, but nothing had broken the 29 00:02:08,160 --> 00:02:10,320 Speaker 2: way that it has, and certainly hadn't broken into the 30 00:02:10,320 --> 00:02:13,520 Speaker 2: public consciousness like it did with the release of 31 00:02:13,600 --> 00:02:17,720 Speaker 2: ChatGPT two years ago. And so my focus was on 32 00:02:17,919 --> 00:02:21,440 Speaker 2: another type of artificial intelligence.
And maybe we'll get into 33 00:02:21,480 --> 00:02:23,160 Speaker 2: this a little bit, but there have been many, many 34 00:02:23,200 --> 00:02:26,520 Speaker 2: waves and many different types of what computer scientists have 35 00:02:26,600 --> 00:02:30,360 Speaker 2: called artificial intelligence over time, and so I've since 36 00:02:30,440 --> 00:02:35,760 Speaker 2: translated that expertise into the policy space. After grad school, I 37 00:02:35,840 --> 00:02:39,840 Speaker 2: went to law school, spent some time doing telecommunications law, 38 00:02:40,320 --> 00:02:42,840 Speaker 2: spent a lot of time at the Federal Trade Commission, 39 00:02:42,880 --> 00:02:46,679 Speaker 2: where I was the chief technologist at one point, and 40 00:02:47,000 --> 00:02:50,120 Speaker 2: my job there was to engage with new technologies and 41 00:02:50,240 --> 00:02:54,240 Speaker 2: figure out how do they affect consumers? How can government 42 00:02:54,480 --> 00:02:57,600 Speaker 2: set policies so that consumers are getting the best benefits 43 00:02:57,600 --> 00:03:00,200 Speaker 2: out of them? And how can we drive innovation through 44 00:03:00,200 --> 00:03:02,400 Speaker 2: good policy making? And so that's what I bring to 45 00:03:02,440 --> 00:03:03,080 Speaker 2: this conversation. 46 00:03:03,520 --> 00:03:08,360 Speaker 1: You wrote a paper, Getting Out of Control: Emergent Leadership 47 00:03:08,360 --> 00:03:12,160 Speaker 1: in a Complex World, which looks at different leadership paradigms. 48 00:03:12,560 --> 00:03:14,280 Speaker 1: Is that published? Is it available? 49 00:03:14,720 --> 00:03:17,600 Speaker 2: Yeah, it's actually a book that's available on Amazon, and 50 00:03:17,680 --> 00:03:20,160 Speaker 2: I also have a Substack, Out of Control dot substack 51 00:03:20,240 --> 00:03:22,160 Speaker 2: dot com, where you can learn more about the book. 52 00:03:22,520 --> 00:03:23,600 Speaker 1: What led you to write it? 53 00:03:24,520 --> 00:03:27,440 Speaker 2: Well, you know, there's two paradigms, well, there's at least 54 00:03:27,480 --> 00:03:30,480 Speaker 2: two paradigms, but there's one that's very dominant in DC 55 00:03:30,639 --> 00:03:33,720 Speaker 2: in particular, and that's the idea that in a world 56 00:03:33,720 --> 00:03:37,800 Speaker 2: of increasing complexity and what might even look like chaos, 57 00:03:37,880 --> 00:03:40,640 Speaker 2: what we really need is more control. We need people 58 00:03:40,680 --> 00:03:44,280 Speaker 2: with more authority who take strong positions to jump into 59 00:03:44,320 --> 00:03:46,600 Speaker 2: the fray and get things under control. And I think 60 00:03:46,960 --> 00:03:48,880 Speaker 2: a lot of people often when they look at how 61 00:03:48,880 --> 00:03:52,400 Speaker 2: complex the world is, they're sort of wishing for that, 62 00:03:52,400 --> 00:03:54,520 Speaker 2: that somebody will just take control and tell the world 63 00:03:54,520 --> 00:03:58,000 Speaker 2: how to run. But the problem is that complexity brings 64 00:03:58,080 --> 00:04:02,160 Speaker 2: so many benefits when we think about the ability of 65 00:04:02,480 --> 00:04:06,600 Speaker 2: our ecosystem and our economy to adapt and to create 66 00:04:06,640 --> 00:04:12,240 Speaker 2: new products and benefits for individuals, and those benefits rely 67 00:04:12,520 --> 00:04:16,719 Speaker 2: on a complex network where nobody is really in control, 68 00:04:17,160 --> 00:04:19,640 Speaker 2: but where many people have influence.
And so I wrote 69 00:04:19,640 --> 00:04:22,560 Speaker 2: that book to say that both in your personal life 70 00:04:22,560 --> 00:04:25,760 Speaker 2: and in the policy space, we should be really focused 71 00:04:25,800 --> 00:04:29,240 Speaker 2: not on trying to seize control, which can have some 72 00:04:29,279 --> 00:04:31,760 Speaker 2: really negative side effects and get rid of a lot 73 00:04:31,760 --> 00:04:35,120 Speaker 2: of the benefits, but on trying to increase our influence 74 00:04:35,240 --> 00:04:38,760 Speaker 2: and also to understand the complex systems that we're in 75 00:04:38,839 --> 00:04:42,919 Speaker 2: better so that we can resist this impulse to grasp 76 00:04:43,080 --> 00:04:43,760 Speaker 2: for control. 77 00:04:44,040 --> 00:04:46,599 Speaker 1: It's interesting. It's a little bit like Adam Smith's description 78 00:04:47,320 --> 00:04:51,760 Speaker 1: of the hidden hand, which enables us, through the market mechanism, 79 00:04:52,279 --> 00:04:55,960 Speaker 1: to move resources and increase innovation in a way that 80 00:04:56,080 --> 00:04:59,040 Speaker 1: nobody can control, and if you try to control it, 81 00:04:59,120 --> 00:04:59,880 Speaker 1: you actually kill it. 82 00:05:00,520 --> 00:05:03,760 Speaker 2: The invisible hand is an emergent phenomenon. It's something that 83 00:05:03,839 --> 00:05:06,919 Speaker 2: happens when all of us, trying to apply the information 84 00:05:06,960 --> 00:05:09,080 Speaker 2: that we have in front of us, are trying to 85 00:05:09,080 --> 00:05:12,080 Speaker 2: solve the problems that we face, and then over time 86 00:05:12,800 --> 00:05:15,880 Speaker 2: you get something that looks quite orderly from the outside, 87 00:05:15,880 --> 00:05:19,080 Speaker 2: but which no one person designed, and that's very powerful 88 00:05:19,160 --> 00:05:23,120 Speaker 2: and it says a lot about our capabilities as humans 89 00:05:23,200 --> 00:05:27,200 Speaker 2: to solve really complex problems as a group, even when 90 00:05:27,279 --> 00:05:29,880 Speaker 2: there's nobody setting out a design ahead of time. 91 00:05:30,760 --> 00:05:33,440 Speaker 1: The number of different things that have to come together 92 00:05:34,560 --> 00:05:38,440 Speaker 1: for major breakthroughs is beyond the ability of bureaucratic planning. 93 00:05:39,680 --> 00:05:41,360 Speaker 1: They almost occur serendipitously. 94 00:05:42,800 --> 00:05:45,719 Speaker 2: Yeah, and you can actually see often they occur in parallel, 95 00:05:45,760 --> 00:05:48,760 Speaker 2: so you might have multiple people who are doing similar 96 00:05:48,800 --> 00:05:53,320 Speaker 2: work on similar tracks, and there's a sort of threshold 97 00:05:53,440 --> 00:05:56,560 Speaker 2: where all of the enabling technologies enable a lot of 98 00:05:56,600 --> 00:05:58,840 Speaker 2: new people who are trying to solve lots of different 99 00:05:58,880 --> 00:06:01,800 Speaker 2: problems to solve a very similar problem all at once.
100 00:06:01,839 --> 00:06:05,000 Speaker 2: This happened with the light bulb, you know, and calculus was kind 101 00:06:05,000 --> 00:06:09,000 Speaker 2: of invented in parallel by two very, very smart people, and 102 00:06:09,040 --> 00:06:10,640 Speaker 2: so I think this happens a lot, and I think 103 00:06:10,640 --> 00:06:13,200 Speaker 2: it's happening in the artificial intelligence space right now, where 104 00:06:13,240 --> 00:06:16,080 Speaker 2: you have a lot of people seeing that these 105 00:06:16,120 --> 00:06:20,160 Speaker 2: new capabilities are possible and trying to solve new problems, 106 00:06:20,320 --> 00:06:22,320 Speaker 2: and there's just so much going on right now. 107 00:06:22,640 --> 00:06:24,520 Speaker 1: You saw that with the Wright brothers, where there were 108 00:06:24,600 --> 00:06:29,239 Speaker 1: really many people in Europe and America trying to figure 109 00:06:29,240 --> 00:06:32,159 Speaker 1: out how to fly, and they happened to be first, 110 00:06:32,520 --> 00:06:35,320 Speaker 1: but there were a lot of other people working the problem, 111 00:06:35,600 --> 00:06:38,080 Speaker 1: and in fact they often would write letters back and forth. 112 00:06:38,320 --> 00:06:39,719 Speaker 1: They were certainly not working in isolation. 113 00:06:41,000 --> 00:06:43,120 Speaker 2: One of the great things about that example is that 114 00:06:43,160 --> 00:06:46,480 Speaker 2: the Wright brothers were not scientists, right. They were bicycle 115 00:06:46,520 --> 00:06:50,159 Speaker 2: mechanics who were trying to solve a problem that lots 116 00:06:50,160 --> 00:06:52,640 Speaker 2: of scientists had tried to solve, and many scientists at 117 00:06:52,640 --> 00:06:57,000 Speaker 2: the time, if you read the contemporary history, were quite 118 00:06:57,000 --> 00:06:59,760 Speaker 2: skeptical that it was possible from the equations that they 119 00:06:59,760 --> 00:07:02,560 Speaker 2: had worked out, and the Wright brothers, through trial and 120 00:07:02,680 --> 00:07:06,640 Speaker 2: error and practical experience, were able to show that no, 121 00:07:06,760 --> 00:07:09,440 Speaker 2: it actually was possible. And once they made that breakthrough, 122 00:07:09,440 --> 00:07:11,680 Speaker 2: all of a sudden there was an even 123 00:07:11,720 --> 00:07:14,080 Speaker 2: bigger flood of people trying to solve that problem once 124 00:07:14,160 --> 00:07:15,960 Speaker 2: they could see that it was possible to do. 125 00:07:16,600 --> 00:07:20,160 Speaker 1: The Smithsonian got fifty thousand dollars to try to build 126 00:07:20,200 --> 00:07:24,080 Speaker 1: a heavier-than-air vehicle and failed. They did it 127 00:07:24,160 --> 00:07:27,760 Speaker 1: with a very elegant design using a very powerful German 128 00:07:28,040 --> 00:07:31,920 Speaker 1: motor which required a very large structure to be able 129 00:07:31,920 --> 00:07:34,480 Speaker 1: to hold the motor, and it was just way too big, 130 00:07:34,520 --> 00:07:36,280 Speaker 1: and they didn't know what they were doing. And they 131 00:07:36,360 --> 00:07:39,240 Speaker 1: actually cleverly decided to launch it off of a ship 132 00:07:39,640 --> 00:07:42,160 Speaker 1: in the Potomac, so when it crashed, it went straight 133 00:07:42,200 --> 00:07:44,920 Speaker 1: into the water and they couldn't figure out what went wrong.
Meanwhile, 134 00:07:44,960 --> 00:07:48,280 Speaker 1: the Wright Brothers, for about a dollar per flight, are 135 00:07:48,640 --> 00:07:51,960 Speaker 1: operating out of Kitty Hawk, doing ten, twelve, fifteen flights 136 00:07:51,960 --> 00:07:54,000 Speaker 1: a day, and they weren't breaking their plane, so they 137 00:07:54,000 --> 00:07:56,440 Speaker 1: could say, gee, that didn't quite work, let's do this 138 00:07:56,480 --> 00:08:00,000 Speaker 1: and that. But when the Wright Brothers broke through shortly 139 00:08:00,160 --> 00:08:04,840 Speaker 1: after the Smithsonian failed, the Smithsonian was so angry that 140 00:08:05,000 --> 00:08:08,160 Speaker 1: for years they wouldn't deal with the Wright brothers. That's 141 00:08:08,200 --> 00:08:11,640 Speaker 1: one of those great examples where human nature transcended the 142 00:08:12,040 --> 00:08:14,920 Speaker 1: scientific impulse. I am interested, by the way. You have 143 00:08:14,960 --> 00:08:19,200 Speaker 1: a bachelor's in computer science from Harding and a master's 144 00:08:19,640 --> 00:08:22,920 Speaker 1: in computer science from the University of Illinois, and you 145 00:08:23,080 --> 00:08:25,480 Speaker 1: have a law degree from George Washington Law School, so 146 00:08:25,800 --> 00:08:31,080 Speaker 1: you really have a combination of the law and public 147 00:08:31,120 --> 00:08:34,600 Speaker 1: policy with the science and technology. Do you think that 148 00:08:34,640 --> 00:08:37,360 Speaker 1: gives you a different outlook than most of the people 149 00:08:37,400 --> 00:08:37,880 Speaker 1: in the field? 150 00:08:38,920 --> 00:08:41,760 Speaker 2: I think it does in two ways. It's very interesting 151 00:08:41,800 --> 00:08:44,640 Speaker 2: because when I was in grad school, there wasn't a 152 00:08:44,679 --> 00:08:48,400 Speaker 2: path really for computer scientists to work on public policy 153 00:08:48,480 --> 00:08:50,480 Speaker 2: in the way that there is now; that's much more common now, 154 00:08:51,360 --> 00:08:53,400 Speaker 2: and so I went to law school. What law school 155 00:08:53,440 --> 00:08:56,760 Speaker 2: taught me is that the engineering paradigm that I think 156 00:08:56,800 --> 00:09:02,000 Speaker 2: we typically think about, the one engineers try to solve problems with, 157 00:09:02,320 --> 00:09:07,280 Speaker 2: is not the same as the paradigm for lawmaking. Unfortunately, 158 00:09:07,360 --> 00:09:09,560 Speaker 2: we do see a lot of people who have tech 159 00:09:09,679 --> 00:09:13,480 Speaker 2: expertise come to DC with a sort of engineering mindset 160 00:09:13,520 --> 00:09:16,760 Speaker 2: that says, well, law is like code, and once we 161 00:09:16,840 --> 00:09:18,880 Speaker 2: write it, of course it will work the way that 162 00:09:19,080 --> 00:09:22,360 Speaker 2: it's intended, just like when I write code in a computer, 163 00:09:22,600 --> 00:09:25,959 Speaker 2: it runs that way. But humans and the law, as 164 00:09:26,000 --> 00:09:29,400 Speaker 2: you well know, are a complex system with lots of 165 00:09:29,400 --> 00:09:32,160 Speaker 2: feedback loops.
The things that you write don't run the 166 00:09:32,200 --> 00:09:36,200 Speaker 2: way that you think that they should, and so I 167 00:09:36,280 --> 00:09:40,520 Speaker 2: often have to temper the expectations of some of my enthusiastic engineering-background 168 00:09:40,559 --> 00:09:47,240 Speaker 2: friends about what is possible in law and when 169 00:09:47,400 --> 00:09:50,320 Speaker 2: it's an appropriate approach to solve problems and when it isn't. 170 00:09:50,360 --> 00:09:52,839 Speaker 2: And so I think having a window into both of those 171 00:09:52,920 --> 00:09:55,560 Speaker 2: lets me speak across that chasm that I think exists 172 00:09:55,559 --> 00:10:12,040 Speaker 2: between engineering and law. 173 00:10:13,040 --> 00:10:16,679 Speaker 1: I mean, in your study of artificial intelligence, you indicate that 174 00:10:17,240 --> 00:10:20,080 Speaker 1: it's already evolved in less than a decade to about 175 00:10:20,080 --> 00:10:23,440 Speaker 1: a one hundred billion dollar industry, but you think in the 176 00:10:23,480 --> 00:10:26,880 Speaker 1: next decade it could easily grow to something like a trillion, 177 00:10:27,360 --> 00:10:31,240 Speaker 1: three hundred billion dollars as an industry. The analysts at 178 00:10:31,280 --> 00:10:35,640 Speaker 1: PricewaterhouseCoopers estimate that AI will add fifteen point seven 179 00:10:35,760 --> 00:10:39,840 Speaker 1: trillion dollars to the global economy by twenty thirty. First 180 00:10:39,880 --> 00:10:43,400 Speaker 1: of all, why is it accelerating that rapidly and what 181 00:10:43,520 --> 00:10:45,040 Speaker 1: is the nature of its impact? 182 00:10:47,000 --> 00:10:49,520 Speaker 2: So this is a difficult question to answer, and I 183 00:10:49,559 --> 00:10:52,960 Speaker 2: think everybody trying to predict the future of AI comes 184 00:10:53,040 --> 00:10:56,720 Speaker 2: up against a single difficult problem, which is that defining 185 00:10:56,800 --> 00:11:01,160 Speaker 2: what exactly is artificial intelligence is quite difficult. This current 186 00:11:01,200 --> 00:11:05,320 Speaker 2: wave of deep learning, large language models, I think is 187 00:11:05,320 --> 00:11:08,240 Speaker 2: what most people sort of settle on as what they're 188 00:11:08,280 --> 00:11:10,079 Speaker 2: trying to project off of right now. And the reason 189 00:11:10,080 --> 00:11:12,120 Speaker 2: it's moving so fast right now is that there is 190 00:11:12,120 --> 00:11:17,800 Speaker 2: this real confluence of capability in these new chips, these 191 00:11:17,840 --> 00:11:22,560 Speaker 2: new techniques in the transformer model that was developed at Google, 192 00:11:22,679 --> 00:11:26,559 Speaker 2: but quickly spread beyond that company to lots of researchers, 193 00:11:27,120 --> 00:11:31,400 Speaker 2: and this demonstrated, almost in the Wright Brothers manner, the 194 00:11:31,520 --> 00:11:35,480 Speaker 2: potential in something like ChatGPT, where people are like, 195 00:11:36,000 --> 00:11:38,760 Speaker 2: I can't believe this works as well as it does.
196 00:11:38,880 --> 00:11:42,679 Speaker 2: I don't think that, pre-ChatGPT being released, 197 00:11:42,720 --> 00:11:46,560 Speaker 2: people were really aware, even people in the computer science field, 198 00:11:47,000 --> 00:11:49,719 Speaker 2: that you could have something that was as useful as 199 00:11:49,760 --> 00:11:53,079 Speaker 2: these large language models are turning out 200 00:11:53,120 --> 00:11:56,200 Speaker 2: to be. And so I think once that was demonstrated, 201 00:11:56,240 --> 00:11:59,040 Speaker 2: there's been such a flood of energy into this space. 202 00:11:59,720 --> 00:12:05,280 Speaker 2: The impacts I think are still pretty unknown. But when 203 00:12:05,280 --> 00:12:07,679 Speaker 2: you think of what these large language models and other 204 00:12:07,720 --> 00:12:09,800 Speaker 2: deep learning models can do, what they can do is 205 00:12:10,360 --> 00:12:13,680 Speaker 2: they can take a big bunch of unstructured data and 206 00:12:13,720 --> 00:12:17,960 Speaker 2: they can pull essential patterns out of it in a 207 00:12:18,000 --> 00:12:23,040 Speaker 2: way that reveals new things that weren't easy to identify 208 00:12:23,120 --> 00:12:26,319 Speaker 2: in that data previously. And so I think it has 209 00:12:26,360 --> 00:12:29,680 Speaker 2: the biggest potential in spaces where we have a lot 210 00:12:29,679 --> 00:12:32,960 Speaker 2: of data but we don't know what to do with 211 00:12:32,960 --> 00:12:35,280 Speaker 2: that data. And so healthcare to me is one of 212 00:12:35,320 --> 00:12:39,320 Speaker 2: the biggest potential impacts here. We can collect a lot 213 00:12:39,360 --> 00:12:44,800 Speaker 2: of data about an individual's basic bodily functions, their heart rate, 214 00:12:44,840 --> 00:12:48,000 Speaker 2: their breathing, their brain waves. We can see that data, 215 00:12:48,040 --> 00:12:51,480 Speaker 2: but we don't quite understand what it means. And so 216 00:12:51,559 --> 00:12:54,640 Speaker 2: I think using these types of techniques of deep learning 217 00:12:55,520 --> 00:13:00,600 Speaker 2: to pull out meaning from those large data sets is going 218 00:13:00,640 --> 00:13:03,120 Speaker 2: to help us do things like personalized medicine where we 219 00:13:03,200 --> 00:13:07,560 Speaker 2: no longer treat people as just essentially an average human, 220 00:13:07,600 --> 00:13:10,840 Speaker 2: but we can look at what are the specific conditions 221 00:13:11,440 --> 00:13:14,520 Speaker 2: and functions of their body, and we can design treatments, 222 00:13:14,600 --> 00:13:20,880 Speaker 2: including potentially medicines that focus specifically on their particular body. 223 00:13:20,920 --> 00:13:22,720 Speaker 2: And so I think there's a lot of potential there.
224 00:13:23,120 --> 00:13:26,640 Speaker 2: There's obviously lots of potential in the creative fields because 225 00:13:26,640 --> 00:13:31,040 Speaker 2: you can use these techniques to generate well-written prose, 226 00:13:31,480 --> 00:13:35,160 Speaker 2: to translate between lots of different languages, to create even 227 00:13:35,440 --> 00:13:38,359 Speaker 2: pictures and videos now in a way that's quite impressive, 228 00:13:38,400 --> 00:13:42,239 Speaker 2: and so I think it brings a lot of powerful 229 00:13:42,640 --> 00:13:45,400 Speaker 2: content creation tools down to the average person in a 230 00:13:45,400 --> 00:13:50,080 Speaker 2: way that is going to mean that it's much easier 231 00:13:50,120 --> 00:13:54,560 Speaker 2: to create high quality content even if you're an individual 232 00:13:54,679 --> 00:13:57,000 Speaker 2: or a small team. And so I think we'll see 233 00:13:57,000 --> 00:13:59,600 Speaker 2: an explosion of creativity using these tools in the very 234 00:13:59,600 --> 00:14:00,600 Speaker 2: near future as well. 235 00:14:01,400 --> 00:14:06,520 Speaker 1: The process could be very, very positive, almost certainly will be. 236 00:14:07,360 --> 00:14:11,600 Speaker 1: But at the same time, there's a cultural aura that 237 00:14:11,920 --> 00:14:15,160 Speaker 1: views all of it with fear. Why do you think 238 00:14:15,200 --> 00:14:20,720 Speaker 1: we have drifted into this fear-based response to technological opportunity? 239 00:14:22,880 --> 00:14:26,640 Speaker 2: Humankind always has had a sort of technopanic curve where 240 00:14:26,760 --> 00:14:29,680 Speaker 2: you have the early adopters who are really excited, then 241 00:14:29,720 --> 00:14:32,760 Speaker 2: you have a sort of peak moment where people are 242 00:14:32,800 --> 00:14:36,080 Speaker 2: talking about the potential downsides, and then it gets accepted 243 00:14:36,120 --> 00:14:39,360 Speaker 2: into the community and people forget that they ever debated, 244 00:14:39,440 --> 00:14:43,760 Speaker 2: like, oh, whether novels were good or whether bicycles were good. 245 00:14:43,800 --> 00:14:46,160 Speaker 2: And so I think it's part that. But I think 246 00:14:46,240 --> 00:14:50,560 Speaker 2: AI in particular raises concerns because it's such a vague technology. 247 00:14:50,640 --> 00:14:54,200 Speaker 2: So much of what we call AI, artificial intelligence: there 248 00:14:54,240 --> 00:14:59,920 Speaker 2: are actually dozens and dozens of artificial intelligence algorithms on everybody's phone, right? 249 00:15:00,240 --> 00:15:02,640 Speaker 2: They do things like helping you search 250 00:15:02,920 --> 00:15:08,119 Speaker 2: your photo collection for a particular individual. Those are artificial 251 00:15:08,120 --> 00:15:13,119 Speaker 2: intelligence algorithms. But people when they hear the term artificial intelligence, 252 00:15:13,160 --> 00:15:15,480 Speaker 2: they think of sci fi movies, right. They think of 253 00:15:15,600 --> 00:15:18,880 Speaker 2: the Terminator, or they think of two thousand and one 254 00:15:19,080 --> 00:15:23,200 Speaker 2: and HAL, and in those movies, almost universally, AI is 255 00:15:23,640 --> 00:15:27,600 Speaker 2: projected as an evil entity or an entity that's gone 256 00:15:27,680 --> 00:15:31,280 Speaker 2: wrong somehow that is threatening human safety. And so I 257 00:15:31,320 --> 00:15:35,280 Speaker 2: think there's that sort of cultural piece.
But more generally, 258 00:15:35,360 --> 00:15:41,160 Speaker 2: I think the US has shifted from a frontier mindset, 259 00:15:41,160 --> 00:15:44,280 Speaker 2: one where we're trying to push the edges, we're trying 260 00:15:44,320 --> 00:15:47,200 Speaker 2: to explore. We want grand adventures, we want to be 261 00:15:47,280 --> 00:15:50,360 Speaker 2: the next Wright Brothers. We hold up those people as 262 00:15:51,160 --> 00:15:55,240 Speaker 2: icons for what it means to contribute to society, and 263 00:15:55,240 --> 00:15:57,960 Speaker 2: we're afraid of that now. I worry that we're now 264 00:15:57,960 --> 00:16:01,840 Speaker 2: in a time where maybe our kids' greatest adventures will 265 00:16:01,840 --> 00:16:07,040 Speaker 2: be figuring out what particular trauma they are trying to 266 00:16:07,040 --> 00:16:09,360 Speaker 2: deal with in their life, when their great adventures should 267 00:16:09,360 --> 00:16:11,720 Speaker 2: be like trying to come up with the next big invention, 268 00:16:12,000 --> 00:16:16,160 Speaker 2: or exploring a new space of science, or maybe outer space. 269 00:16:16,200 --> 00:16:19,320 Speaker 2: And so I think that cultural shift, I'm not one hundred 270 00:16:19,320 --> 00:16:21,880 Speaker 2: percent sure why it's happened. I think in part it 271 00:16:21,960 --> 00:16:25,560 Speaker 2: might be we've gotten comfortable as a country, maybe, and 272 00:16:25,680 --> 00:16:28,480 Speaker 2: so we're focusing on problems that are smaller when we 273 00:16:28,480 --> 00:16:31,440 Speaker 2: should be looking to opportunities and problems that are bigger. 274 00:16:32,000 --> 00:16:35,720 Speaker 2: But it really is a sort of new phenomenon. When 275 00:16:35,720 --> 00:16:38,080 Speaker 2: I think of the late nineties, we were a country 276 00:16:38,120 --> 00:16:41,080 Speaker 2: that was excited to be on the cutting edge of technology, 277 00:16:41,520 --> 00:16:45,320 Speaker 2: and now I think, at least at the 278 00:16:45,520 --> 00:16:49,600 Speaker 2: elite levels, people often talk about technology as if it's primarily 279 00:16:49,640 --> 00:16:55,200 Speaker 2: a threat rather than an enormous opportunity for the United States. 280 00:16:55,880 --> 00:17:02,119 Speaker 1: Somebody said that the Europeans had decided they preferred regulation over innovation, 281 00:17:03,040 --> 00:17:06,320 Speaker 1: and the result was that in almost every new innovative area, 282 00:17:06,920 --> 00:17:09,919 Speaker 1: the US was just rapidly pulling away from Europe. And 283 00:17:10,080 --> 00:17:12,879 Speaker 1: isn't there a real danger that some of our politicians 284 00:17:12,920 --> 00:17:17,840 Speaker 1: would like to introduce a European-style bureaucratic overcontrol? 285 00:17:18,480 --> 00:17:22,400 Speaker 2: Absolutely, and in fact almost expressly. When you look at 286 00:17:22,400 --> 00:17:25,800 Speaker 2: what California has done around some of its approaches to 287 00:17:27,119 --> 00:17:31,879 Speaker 2: software development and privacy, they're borrowed directly from the European Union. 288 00:17:32,680 --> 00:17:36,320 Speaker 2: And the European Union has a different mindset around technology, 289 00:17:36,600 --> 00:17:40,879 Speaker 2: whereas in the US we generally think people have the 290 00:17:40,960 --> 00:17:43,359 Speaker 2: right to build new things and then we'll see what 291 00:17:43,400 --> 00:17:46,480 Speaker 2: the effects are and they might have to temper their solutions.
292 00:17:46,520 --> 00:17:49,840 Speaker 2: Both the market might discipline them, but then also there 293 00:17:49,880 --> 00:17:53,520 Speaker 2: might be real consumer harms that are possible. Whereas in 294 00:17:53,560 --> 00:17:56,560 Speaker 2: Europe they have a mindset that basically, until the government 295 00:17:56,640 --> 00:18:00,520 Speaker 2: sort of authorizes an innovation in a particular space, 296 00:18:00,600 --> 00:18:03,040 Speaker 2: nobody is really allowed to do it. And so that 297 00:18:03,119 --> 00:18:09,840 Speaker 2: mindset is very chilling because inevitably it's not the regulators 298 00:18:09,880 --> 00:18:12,760 Speaker 2: who are the best at trying to figure out what 299 00:18:13,400 --> 00:18:16,200 Speaker 2: future technologies might come about. It's the people who are 300 00:18:16,200 --> 00:18:19,840 Speaker 2: practically trying to make those things happen. And if they 301 00:18:19,920 --> 00:18:21,760 Speaker 2: live in a culture that says until you get the 302 00:18:21,840 --> 00:18:25,159 Speaker 2: okay you can't try something new, you know, it's just 303 00:18:25,200 --> 00:18:27,720 Speaker 2: a very difficult place to innovate in. So the US 304 00:18:27,800 --> 00:18:31,520 Speaker 2: does maintain quite a good edge in that space, 305 00:18:31,760 --> 00:18:34,080 Speaker 2: but we are at risk of, at least at the 306 00:18:34,080 --> 00:18:38,920 Speaker 2: policy level, giving that up by trying to adopt these 307 00:18:38,960 --> 00:18:41,040 Speaker 2: precautionary approaches to innovation. 308 00:18:41,920 --> 00:18:45,560 Speaker 1: Doesn't there seem to be a pretty big partisan split 309 00:18:45,720 --> 00:18:50,520 Speaker 1: over regulation versus innovation, with people like Senate Majority 310 00:18:50,600 --> 00:18:54,520 Speaker 1: Leader Chuck Schumer really pushing for all-out federal regulation? 311 00:18:55,760 --> 00:19:01,080 Speaker 2: I don't feel like artificial intelligence has been particularly politicized yet, 312 00:19:01,480 --> 00:19:04,520 Speaker 2: but we have seen that many other cutting-edge technologies, 313 00:19:04,640 --> 00:19:08,000 Speaker 2: especially in the software space, really do have a political 314 00:19:08,119 --> 00:19:11,960 Speaker 2: valence to them. The default in the partisan space, I think, would be 315 00:19:12,000 --> 00:19:14,359 Speaker 2: that Democrats tend to be 316 00:19:14,480 --> 00:19:19,200 Speaker 2: much more precautionary in approaches to technology and Republicans tend 317 00:19:19,240 --> 00:19:23,000 Speaker 2: to be more permissionless, let people build things. That's not 318 00:19:23,040 --> 00:19:27,160 Speaker 2: one hundred percent across the board, and there are interesting opportunities, 319 00:19:27,200 --> 00:19:29,080 Speaker 2: I think, in the AI space to think about that. 320 00:19:29,400 --> 00:19:33,640 Speaker 2: On AI in particular, the Biden administration has very much 321 00:19:33,680 --> 00:19:37,280 Speaker 2: taken a whole-of-government approach, one that says, hey, 322 00:19:37,320 --> 00:19:39,560 Speaker 2: we as a government have to figure this thing out, 323 00:19:40,000 --> 00:19:41,960 Speaker 2: and we need to get all our ducks. 324 00:19:41,720 --> 00:19:42,280 Speaker 1: In a row. 325 00:19:42,840 --> 00:19:45,040 Speaker 2: And not all of that is bad. Government uses of 326 00:19:45,080 --> 00:19:48,680 Speaker 2: AI certainly should be thoughtful.
But when we're trying 327 00:19:48,680 --> 00:19:51,639 Speaker 2: to set up an environment in which we are making 328 00:19:51,680 --> 00:19:55,520 Speaker 2: sure that innovators have both the incentive and the freedom 329 00:19:55,560 --> 00:19:59,120 Speaker 2: to develop new things, a lot of government action can 330 00:19:59,160 --> 00:20:02,040 Speaker 2: be pretty chilling to that. And so I do think 331 00:20:02,080 --> 00:20:05,840 Speaker 2: the Biden administration has not really set a very positive vision. 332 00:20:06,359 --> 00:20:10,080 Speaker 2: In contrast, in the late nineties, there was a very 333 00:20:10,119 --> 00:20:14,440 Speaker 2: clear vision set for the Internet, for the commercial development 334 00:20:14,480 --> 00:20:16,960 Speaker 2: of the Internet, that it was going to be market 335 00:20:17,040 --> 00:20:20,000 Speaker 2: driven rather than government driven. And we have a very 336 00:20:20,040 --> 00:20:23,600 Speaker 2: different mindset about artificial intelligence right now coming out of 337 00:20:23,840 --> 00:20:24,639 Speaker 2: this administration. 338 00:20:40,520 --> 00:20:43,160 Speaker 1: You've sort of triggered a couple of thoughts on my part. 339 00:20:43,160 --> 00:20:47,679 Speaker 1: One is you could do a very interesting survey of 340 00:20:47,720 --> 00:20:51,800 Speaker 1: the artificial intelligence that's already around us. I mean your 341 00:20:51,800 --> 00:20:54,400 Speaker 1: whole point, for example, about your cell phone and how 342 00:20:54,400 --> 00:20:58,600 Speaker 1: many different versions of AI you're relating to. And I 343 00:20:58,640 --> 00:21:02,600 Speaker 1: think people would be shocked to realize that artificial intelligence isn't 344 00:21:02,600 --> 00:21:06,719 Speaker 1: the future. Artificial intelligence is the present and is going 345 00:21:06,760 --> 00:21:10,159 Speaker 1: to expand into the future. There are an amazing number of 346 00:21:10,160 --> 00:21:14,280 Speaker 1: places where we've actually been using a variation of artificial 347 00:21:14,280 --> 00:21:17,280 Speaker 1: intelligence. Many, many years ago, I went out to San 348 00:21:17,280 --> 00:21:21,400 Speaker 1: Diego to the Navy's labs and looked at how they 349 00:21:21,400 --> 00:21:25,159 Speaker 1: had designed a carrier battle group defense system, and it was 350 00:21:25,200 --> 00:21:28,239 Speaker 1: clearly what we would now call artificial intelligence. But it 351 00:21:28,280 --> 00:21:31,640 Speaker 1: was thirty five years ago. So in that sense, we're 352 00:21:31,680 --> 00:21:34,840 Speaker 1: already surrounded by a large amount of artificial intelligence. 353 00:21:36,000 --> 00:21:40,080 Speaker 2: Absolutely. There's a great quote by AI pioneer John McCarthy, 354 00:21:40,640 --> 00:21:42,840 Speaker 2: which is that as soon as it works, nobody calls 355 00:21:42,880 --> 00:21:46,880 Speaker 2: it AI anymore. There was a time at which chess 356 00:21:46,920 --> 00:21:52,080 Speaker 2: playing was cutting edge artificial intelligence or speech recognition, or 357 00:21:52,720 --> 00:21:56,080 Speaker 2: recommendation algorithms like what you should watch next on Netflix. 358 00:21:56,400 --> 00:21:58,240 Speaker 2: There was a point at which these were cutting edge 359 00:21:58,280 --> 00:22:00,960 Speaker 2: AI research. But now, because they're working, we just 360 00:22:00,960 --> 00:22:04,280 Speaker 2: call that computers and we don't call it artificial intelligence anymore.
361 00:22:04,320 --> 00:22:07,199 Speaker 2: And so I do think that people don't realize that, 362 00:22:07,960 --> 00:22:11,000 Speaker 2: and I think in part it's because these new technologies 363 00:22:11,040 --> 00:22:14,199 Speaker 2: came out in a sort of chatbot form, right, and 364 00:22:14,240 --> 00:22:16,159 Speaker 2: so like you're talking and it sort of seems like 365 00:22:16,160 --> 00:22:20,200 Speaker 2: you're talking to something. I think that feels a little 366 00:22:20,200 --> 00:22:23,480 Speaker 2: different to people. But ultimately, I do think it is 367 00:22:23,520 --> 00:22:26,800 Speaker 2: helpful to point out that AI is around us. It's 368 00:22:26,840 --> 00:22:27,800 Speaker 2: pretty ubiquitous. 369 00:22:27,840 --> 00:22:30,119 Speaker 1: In fact, the other side of that is the notion 370 00:22:30,280 --> 00:22:34,600 Speaker 1: of getting across how many different ways AI helps us. 371 00:22:34,640 --> 00:22:36,400 Speaker 1: I mean, I was thinking about this the other night because 372 00:22:36,440 --> 00:22:39,080 Speaker 1: I was trying to go somewhere, but I realized I 373 00:22:39,080 --> 00:22:41,960 Speaker 1: don't care anymore because I just plug in the address 374 00:22:42,920 --> 00:22:47,000 Speaker 1: and the GPS system, which is artificial intelligence, knows where 375 00:22:47,040 --> 00:22:50,280 Speaker 1: I am, knows where I am going, and has a 376 00:22:50,320 --> 00:22:53,199 Speaker 1: sense of which route will be best. Now, if you 377 00:22:53,240 --> 00:22:56,640 Speaker 1: think about it, that's an astonishing level of data in 378 00:22:56,680 --> 00:23:00,960 Speaker 1: real time at no cost. If we thought about the number 379 00:23:00,960 --> 00:23:05,040 Speaker 1: of ways AI is already helping us, then 380 00:23:05,080 --> 00:23:08,439 Speaker 1: you could actually build out the potential over the next decade. 381 00:23:08,480 --> 00:23:12,399 Speaker 1: I think particularly, for example, of helping Alzheimer's patients and 382 00:23:12,520 --> 00:23:16,840 Speaker 1: Parkinson's patients have much better, richer lives, and 383 00:23:16,880 --> 00:23:21,680 Speaker 1: their families have much better, richer lives, as new artificial 384 00:23:21,760 --> 00:23:23,280 Speaker 1: intelligence systems are developed. 385 00:23:23,960 --> 00:23:27,800 Speaker 2: Yeah, absolutely. And you know, I was thinking, I had 386 00:23:27,800 --> 00:23:29,880 Speaker 2: a bit of a health scare with my twenty-month-old 387 00:23:29,920 --> 00:23:32,200 Speaker 2: last week where she had a seizure, and one 388 00:23:32,200 --> 00:23:34,360 Speaker 2: of the things that doctors kept asking me was how 389 00:23:34,359 --> 00:23:38,040 Speaker 2: long did it last? And I was literally able to 390 00:23:38,080 --> 00:23:41,600 Speaker 2: look at my Fitbit heart rate monitor and see 391 00:23:41,640 --> 00:23:44,400 Speaker 2: when my heart rate shot through the roof to see 392 00:23:44,440 --> 00:23:47,280 Speaker 2: when it started, and I could figure out from that. Now, 393 00:23:47,280 --> 00:23:48,919 Speaker 2: it would have been even better if I could just 394 00:23:48,960 --> 00:23:51,040 Speaker 2: ask the Alexa that was in my room like hey, 395 00:23:51,119 --> 00:23:53,920 Speaker 2: you heard us yelling, like, how long did that last? 396 00:23:54,000 --> 00:23:54,160 Speaker 1: Right?
397 00:23:54,400 --> 00:23:56,720 Speaker 2: But I couldn't do that, in part because I think 398 00:23:56,800 --> 00:24:00,359 Speaker 2: that people worry about enabling those types of devices to 399 00:24:00,440 --> 00:24:03,520 Speaker 2: monitor all the time, and sure that can have some downsides. 400 00:24:03,560 --> 00:24:05,480 Speaker 2: But all I could think was, man, how great it 401 00:24:05,480 --> 00:24:07,320 Speaker 2: would have been if I could have pulled that data 402 00:24:07,800 --> 00:24:10,919 Speaker 2: that I know was available even if we weren't capturing it, 403 00:24:10,960 --> 00:24:14,440 Speaker 2: and I think AI opens the potential to do that 404 00:24:14,560 --> 00:24:19,960 Speaker 2: sort of really powerful empowerment of people to make the 405 00:24:20,000 --> 00:24:23,280 Speaker 2: most of the information that's around them, and so I'm 406 00:24:23,320 --> 00:24:24,000 Speaker 2: excited about it. 407 00:24:24,560 --> 00:24:25,960 Speaker 1: I really want to ask you a little more about 408 00:24:26,000 --> 00:24:28,520 Speaker 1: the Abundance Institute because it's absolutely what I believe 409 00:24:28,520 --> 00:24:30,760 Speaker 1: in and I'm fascinated that you're doing it. But I 410 00:24:30,840 --> 00:24:35,159 Speaker 1: noticed that you have focused on artificial intelligence and energy 411 00:24:35,200 --> 00:24:37,920 Speaker 1: as your two major areas of emphasis. Why is that? 412 00:24:38,880 --> 00:24:40,600 Speaker 2: Thank you so much for your interest in what we're 413 00:24:40,600 --> 00:24:42,600 Speaker 2: doing at the Abundance Institute. As you said, we just 414 00:24:42,680 --> 00:24:45,320 Speaker 2: recently launched, this week actually, although we've been in 415 00:24:45,359 --> 00:24:48,040 Speaker 2: soft launch for a little while. Right now we're very 416 00:24:48,040 --> 00:24:53,399 Speaker 2: focused on AI and energy because artificial intelligence is not 417 00:24:53,600 --> 00:24:57,359 Speaker 2: only a hot topic, but this issue is not going away. 418 00:24:57,400 --> 00:25:00,520 Speaker 2: The way people are talking about AI right now, the 419 00:25:00,560 --> 00:25:03,520 Speaker 2: cultural and the policy decisions that we make right now, 420 00:25:03,800 --> 00:25:07,119 Speaker 2: will have ramifications for the next fifty years, if not longer, 421 00:25:07,160 --> 00:25:09,960 Speaker 2: and in a space that's moving this fast, those are 422 00:25:10,000 --> 00:25:12,639 Speaker 2: really huge impacts. You already mentioned some of the numbers 423 00:25:12,680 --> 00:25:17,320 Speaker 2: about why AI could drive such tremendous growth across the 424 00:25:17,480 --> 00:25:20,560 Speaker 2: US economy and across the world. But if we don't 425 00:25:20,600 --> 00:25:22,560 Speaker 2: get this right, we're going to see the lead that 426 00:25:22,600 --> 00:25:26,320 Speaker 2: the US has slip away to other nations such as China, and 427 00:25:26,400 --> 00:25:28,280 Speaker 2: I think it is really important that we get this 428 00:25:28,400 --> 00:25:30,360 Speaker 2: right right now, which is part of why we're focusing 429 00:25:30,400 --> 00:25:35,200 Speaker 2: on it. Energy, similarly, there's enormous opportunity. There's new technologies 430 00:25:35,240 --> 00:25:38,760 Speaker 2: in this space that could really drive us into a 431 00:25:38,840 --> 00:25:42,280 Speaker 2: time of energy abundance.
We've been operating in the US 432 00:25:42,359 --> 00:25:46,000 Speaker 2: since the seventies under a sort of scarcity mindset around energy, 433 00:25:46,480 --> 00:25:49,240 Speaker 2: the idea being that there's a limited amount of it 434 00:25:49,520 --> 00:25:51,760 Speaker 2: and that we should conserve it. Our goal should be 435 00:25:51,800 --> 00:25:56,520 Speaker 2: to conserve energy. Whereas we know that the wealthiest countries 436 00:25:56,600 --> 00:25:59,440 Speaker 2: are those that produce the most energy, and so here 437 00:25:59,480 --> 00:26:01,919 Speaker 2: in the US we need to get better at producing energy. 438 00:26:01,960 --> 00:26:03,800 Speaker 2: We need to do it in a way that's sustainable 439 00:26:03,840 --> 00:26:06,800 Speaker 2: and creates a good environment for our people. But there's 440 00:26:06,840 --> 00:26:10,560 Speaker 2: no reason that we should be stagnating when it comes to 441 00:26:10,600 --> 00:26:14,000 Speaker 2: the amount of energy that we're creating. We have the technology, 442 00:26:14,080 --> 00:26:16,560 Speaker 2: we have the capabilities. The main things that are holding 443 00:26:16,640 --> 00:26:19,280 Speaker 2: us back are regulatory barriers, and we need to get 444 00:26:19,320 --> 00:26:20,639 Speaker 2: those out of the way so that we can do 445 00:26:20,720 --> 00:26:24,080 Speaker 2: things like bring nuclear back to the US in a 446 00:26:24,080 --> 00:26:27,520 Speaker 2: way that produces a huge abundance of energy that we're 447 00:26:27,520 --> 00:26:30,359 Speaker 2: going to need for many different things, including artificial intelligence. 448 00:26:30,400 --> 00:26:33,000 Speaker 2: And so that's why we're very focused in those two areas. 449 00:26:33,040 --> 00:26:34,800 Speaker 2: I will say that in the longer term, we are 450 00:26:34,840 --> 00:26:38,719 Speaker 2: also very interested in the biology space, the biotech space. 451 00:26:39,200 --> 00:26:42,560 Speaker 2: There is a huge amount of potential there, especially as 452 00:26:42,560 --> 00:26:47,040 Speaker 2: some of these artificial intelligence therapies come online, to really 453 00:26:48,240 --> 00:26:50,640 Speaker 2: drive just a total step change in the way that we 454 00:26:50,680 --> 00:26:53,480 Speaker 2: deal with health and healthcare in this country and around 455 00:26:53,520 --> 00:26:55,560 Speaker 2: the world. And so we're very excited about the potential 456 00:26:55,600 --> 00:26:56,600 Speaker 2: in that space as well. 457 00:26:57,240 --> 00:27:01,040 Speaker 1: So if somebody wanted to get involved with the Abundance Institute, 458 00:27:01,080 --> 00:27:02,280 Speaker 1: what would they do? What could they do? 459 00:27:03,040 --> 00:27:05,520 Speaker 2: You can reach out to us. Our website is abundance 460 00:27:05,560 --> 00:27:08,760 Speaker 2: dot Institute and we have an email at Hello at 461 00:27:08,800 --> 00:27:13,360 Speaker 2: abundance dot Institute. What we're trying to do is invest 462 00:27:13,400 --> 00:27:15,960 Speaker 2: in talent and talent assembly. We're also trying to build 463 00:27:15,960 --> 00:27:19,919 Speaker 2: a community of optimists, founders, and inventors to combat this 464 00:27:20,200 --> 00:27:23,760 Speaker 2: very pessimistic mindset about the future of technology that I 465 00:27:23,760 --> 00:27:27,320 Speaker 2: think resonates.
That's very prevalent in DC, but I think 466 00:27:27,480 --> 00:27:30,280 Speaker 2: there's lots of people talking that way in other places as well, 467 00:27:30,320 --> 00:27:34,000 Speaker 2: and so we welcome all people of similar mindset who 468 00:27:34,040 --> 00:27:36,880 Speaker 2: are excited about the future, who think that humans have 469 00:27:37,040 --> 00:27:41,800 Speaker 2: great potential to create solutions to big problems, and 470 00:27:41,840 --> 00:27:43,560 Speaker 2: who want to get involved. We would love to hear 471 00:27:43,560 --> 00:27:43,840 Speaker 2: from you. 472 00:27:44,480 --> 00:27:48,280 Speaker 1: We will certainly put the connections on our show page, 473 00:27:48,640 --> 00:27:51,720 Speaker 1: and we will encourage people to look at the Abundance 474 00:27:51,760 --> 00:27:55,239 Speaker 1: Institute and to get involved in helping think about a 475 00:27:55,240 --> 00:27:58,119 Speaker 1: positive future. Now, I want to thank you for joining me. 476 00:27:58,320 --> 00:28:02,040 Speaker 1: I'm looking forward to seeing what the Abundance Institute can accomplish. 477 00:28:02,400 --> 00:28:05,360 Speaker 1: I think it's exactly the right direction. I know you're 478 00:28:05,400 --> 00:28:07,919 Speaker 1: just getting started, so I hope in a few months 479 00:28:08,240 --> 00:28:10,720 Speaker 1: you'll come back and join us again and report on 480 00:28:10,760 --> 00:28:11,600 Speaker 1: what you're working on. 481 00:28:12,160 --> 00:28:14,000 Speaker 2: I would love to, and in a few months the 482 00:28:14,080 --> 00:28:17,720 Speaker 2: AI technology will probably be even more different than it is 483 00:28:17,760 --> 00:28:19,879 Speaker 2: now and we'll have plenty to talk about. So I very 484 00:28:19,960 --> 00:28:21,159 Speaker 2: much welcome the opportunity. 485 00:28:24,520 --> 00:28:26,880 Speaker 1: Thank you to my guest Neil Chilson. You can learn 486 00:28:26,920 --> 00:28:30,080 Speaker 1: more about the Abundance Institute on our show page at 487 00:28:30,200 --> 00:28:33,679 Speaker 1: newtsworld dot com. Newt's World is produced by Gingrich three sixty 488 00:28:33,680 --> 00:28:38,280 Speaker 1: and iHeartMedia. Our executive producer is Guernsey Sloan. Our researcher 489 00:28:38,320 --> 00:28:41,800 Speaker 1: is Rachel Peterson. The artwork for the show was created 490 00:28:41,800 --> 00:28:45,000 Speaker 1: by Steve Penley. Special thanks to the team at Gingrich 491 00:28:45,040 --> 00:28:48,200 Speaker 1: three sixty. If you've been enjoying Newt's World, I hope you'll 492 00:28:48,240 --> 00:28:50,440 Speaker 1: go to Apple Podcasts and both rate us with five 493 00:28:50,480 --> 00:28:53,640 Speaker 1: stars and give us a review so others can learn 494 00:28:53,680 --> 00:28:57,440 Speaker 1: what it's all about. Right now, listeners of Newt's World can 495 00:28:57,480 --> 00:29:01,320 Speaker 1: sign up for my three free weekly columns at Gingrich 496 00:29:01,320 --> 00:29:05,880 Speaker 1: three sixty dot com slash newsletter. I'm Newt Gingrich. This 497 00:29:06,120 --> 00:29:06,800 Speaker 1: is Newt's World.