Speaker 1: On this episode of Newt's World: Vice President J. D. Vance attended an AI summit meeting in France on Tuesday, hosted by France and India. In his opening address, he described his vision of a coming era of American technological domination. He said, quote, the Trump administration will ensure that the most powerful AI systems are built in the US with American-designed and manufactured chips. Here to discuss the AI summit, I'm really pleased to welcome back my guest, Neil Chilson. He's a lawyer, computer scientist, and author of the book Getting Out of Control: Emergent Leadership in a Complex World. As head of AI policy at the Abundance Institute, Chilson works to create a policy and cultural environment where emerging technologies, including artificial intelligence, can develop and thrive. Neil, welcome, and thank you for joining me again on Newt's World.

Speaker 2: It's great to be back.

Speaker 1: Before we get into AI, tell us just for a minute about the Abundance Institute.

Speaker 2: Yeah. So the Abundance Institute is a mission-driven nonprofit. We're focused on the policy and the culture around emerging technologies, and on making sure that the policy and cultural environments let emerging technologies get full-throated market tests to try to solve problems in the way we know that, historically, technology can, to create widespread human prosperity. So we do that in lots of different ways, including policy at the federal and the state level in the US.

Speaker 1: When you think about emerging technologies beyond artificial intelligence, what else do you think about?

Speaker 2: There's so much. We have quantum computing, which is another new model for computing that could be super fast and super exciting; it's still in the very early days. There are things like biotechnology. There is so much in that space about how to design new drugs custom to individual bodies, and even what a drug looks like in the future could be quite different. And so those are some of the new ones that are coming up.
There are all these new areas in energy that are really exciting. The Abundance Institute right now is very active in the policy space around small modular reactors, which are small, safe nuclear reactors that can power essentially a data center, to get back to the AI topic, or other small uses. So we're very excited about all of those.

Speaker 1: Do you also look at robotics?

Speaker 2: Robotics is a key part of moving from the world of bits to the world of atoms, and robotics has played a big role already in manufacturing. And with these new generative AI models and deep machine learning models, the things that robotics can do are becoming much more expansive. Very excited about robotics as well.

Speaker 1: Well, specifically on the AI topic: on February tenth and eleventh, France hosted the Artificial Intelligence Action Summit, where notable world leaders, including US Vice President JD Vance, showed up. Can you give us an overview of the France AI conference, which I noted was also co-hosted by India? What was its main purpose, and what were some of the key takeaways?

Speaker 2: So this is the third in a series of international summits that have happened in the wake of the popularization of ChatGPT and all the innovation that that has kicked off, and this one shifted a little bit. The two previous ones were called AI safety summits, and they really focused on how do we make sure this technology is safe? And they were driven largely by this fear of catastrophic risk from AI systems; this is sort of the Terminator scenarios that people think about. And there were some agreements that came out of those, in the UK and in Seoul, Korea, that were voluntary pledges from some of the companies, the leading companies, but then signed onto by a bunch of the different countries as well.
It seems like that storyline, that fear, that concern, has shifted as people have seen the rise of US companies, the dominance that the US has had in this space, and just the excitement and the potential for this technology. I think a lot of the other countries of the world are less worried about safety, and they're a lot more worried about getting left behind. How do they build a space that has this type of innovation? How do they make sure that their citizens benefit from it? And so the emphasis, I think, at this summit was ideas around inclusion or widespread availability of the technology. And JD Vance certainly brought a very different perspective to this than the Biden administration had brought to the previous two. It was really a barnburner of a speech, very pro-American, that really emphasized the fact that the path to AI innovation is not through heavy regulation but through deregulation, and he urged the European Union to take that path if they wanted to be part of the future.

Speaker 1: He picks up on this in his speech: where we have really, in the United States, emphasized innovation, the Europeans, on a whole range of issues, have emphasized regulation. And part of the result is, I think, with the exception maybe of two Chinese companies, virtually every trillion-dollar company in the world is American. There are no competitors in Europe for whole sections of this, and efforts by the Europeans to punish us by creating fines and other kinds of things are just going to backfire, particularly with the Trump administration, which will respond, I think, very aggressively to that kind of narrow-minded policymaking.

Speaker 2: And they certainly did. JD Vance specifically called out fines and regulations that had been unfairly applied, almost unfairly created, specifically to target large American companies, and he said the US wasn't going to tolerate that anymore. And he also made a strong case that it's not good for the European Union either, or for their economy.
So it was a very strident speech. The Biden administration had in many cases worked with the European Union to enforce European laws against American companies, and that was certainly a different tack than the one the Trump administration is taking.

Speaker 1: The long tide of history is sort of leaving Europe behind. I'm a European historian by training. I've lived in France, Germany, Belgium, and Italy, and I love the continent, but I'm watching it sort of commit technological suicide.

Speaker 2: The moment that gelled this for me the most in the AI space was the cheery signing of the European Union AI Act, which was signed about a year ago now and is starting to go into effect. That was their big moment. They really celebrated the fact that they had passed this very comprehensive regulation, which is now looking quite unworkable because the technology has already left a bunch of it behind. For example, they had been working on this pre-ChatGPT, and they had to sort of retrofit the regulation of generative AI technologies like ChatGPT into this big regulatory structure they'd already designed. And the world has already moved on even since that final text was finalized. I think they're figuring out, and there was a lot of discussion at this Paris summit about this, how they might dial back some of this regulatory pressure in order to get their own national champions.

Speaker 1: The so-called Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet was apparently signed by about sixty countries, but both the United States and the United Kingdom refused to sign it. Now, what was that all about?

Speaker 2: There was a series of statements. Like I said, at the previous summits there were statements that were similar; those ones were much more focused on safety issues. And I think the main reason, first of all, like all of these international things, is they don't say a ton, right?
A lot of it is sort of very positive words about, you know, where we want to go, not really operational in how we're going to get there. But there was a definite shift in emphasis for this one, and I think it left nobody happy.

Speaker 1: Really?

Speaker 2: The people who are worried about existential risk hated this statement because they thought it didn't address their ideals. People from the Trump administration, I think, are rightly skeptical of this language of inclusion, to the extent that that might drive some sort of international or national mandates about how the technology is deployed. And I think the people who signed it are largely the people who have not been building these overwhelmingly great products, and so it's not clear what effect it would have on the other countries. And so, and maybe I shouldn't say this without knowing it for sure, but I had heard that China did sign it; I should double-check that. And so for them, it may be a way to help bolster their relations with Europe, which I think they probably see this way: if the Europeans are going to continue to suppress American companies, perhaps it's a new market for China to get in with their cheap AI models.

Speaker 1: A nice thing about China is they can sign anything, but they never intend to enforce it. They get to be at the press conference and get their picture taken, and then they go about doing whatever they wanted to in the first place. There's this entire emerging pattern where folks go from fancy hotel to fancy hotel to have big meetings talking about big ideas with big words, none of which relates to reality, except they keep inventing more and more multinational organizations, which have to hire their friends and relatives, so they can have meetings where they talk about things that have no connection with reality. And it's almost like an industry.

Speaker 2: Oh, it's definitely an industry.
I think there are a lot of academic conferences that are basically exactly what you described, over and over. It certainly is an industry, and in the AI space this is the hot topic. But we've seen international versions of things like this around privacy, et cetera. You know, sometimes it's good to get on the same page with everybody and figure out how people can secure human rights across borders. But I don't think that this particular document really moves the ball very much towards anything that's productive.

Speaker 1: I think what JD did there was a really important marker in terms of his own evolution as a national leader, and I thought it was worth listening to, just for a bit.

Speaker 3: Now, our administration, the Trump administration, believes that AI will have countless revolutionary applications in economic innovation, job creation, national security, healthcare, free expression, and beyond, and to restrict its development now will not only unfairly benefit incumbents in the space, it would mean paralyzing one of the most promising technologies we have seen in generations. Now, with that in mind, I'd like to make four main points today. Number one, this administration will ensure that American AI technology continues to be the gold standard worldwide, and we are the partner of choice for others, foreign countries and certainly businesses, as they expand their own use of AI. Number two, we believe that excessive regulation of the AI sector could kill a transformative industry just as it's taking off, and we'll make every effort to encourage pro-growth AI policies, and I'd like to see that deregulatory flavor making its way into a lot of the conversations at this conference. Number three, we feel very strongly that AI must remain free from ideological bias and that American AI will not be co-opted into a tool for authoritarian censorship. And finally, number four, the Trump administration will maintain a pro-worker growth path for AI so it can be a potent tool for job creation in the United States.
And I appreciate Prime Minister Modi's point: AI, I really believe, will facilitate and make people more productive. It is not going to replace human beings. It will never replace human beings. And I think too many of the leaders in the AI industry, when they talk about this fear of replacing workers, really miss the point. AI, we believe, is going to make us more productive, more prosperous, and more free. The United States of America is the leader in AI, and our administration plans to keep it that way.

Speaker 1: Now, you recently tweeted about Vice President JD Vance's speech at the AI Summit in Paris, calling it, quote, one of the most pro-innovation speeches you have heard from an elected politician. What stood out to you in his remarks?

Speaker 2: What stood out is that it was the first time I can remember that the US has stood up to European regulators and said, hey, you know, this is not the path towards innovation and progress. That really stood out. He also pointed out the connection, for example, between AI and energy and how linked those two topics are, and that Europe has been kneecapping itself in energy policy by essentially de-industrializing in some ways, and that's grown their reliance on providers such as Russia, and he pointed to that as a big problem. The other thing that I loved about his speech was he pointed out that we often think about AI and software as being part of the laptop class, right, that the people who build these things are people who sit behind laptops and type into computers, and that it's in the world of bits. And he pointed out that AI and computation take large data centers, which are enormous construction projects that involve lots of hard-hat workers who are plumbers and electricians and skilled labor. And so I really liked his pointing out that this is good for jobs, not just in the tech space, but this is good for blue-collar jobs all over.
The types of investment that US private companies have been pouring into building data centers, in order to meet the demands of the software space, have been really big construction investments, infrastructure investments. And I really liked him pointing that out.

Speaker 1: When you talk about artificial intelligence, when you're watching, how do you think it's going to manifest itself in terms of the average small company or the average individual?

Speaker 2: So artificial intelligence is a general-purpose technology, and I think sometimes the very name of it kind of conceals more than it reveals. I like to think of it just as advanced computing. But the way I think a lot of these new tools are going to manifest themselves is by making the boring parts of the types of content creation or analysis that people do a lot easier. So say you're tracking all of your accounting for taxes and you're not quite sure how to categorize different things; this is a type of tool that can easily categorize different things just by asking it. And so whereas right now you have to learn the interface of any software program by figuring out which buttons to click and what commands to type in, I think this is going to make a lot of that more conversational, so you can just ask the software, in natural language, how do I do this thing I'm trying to do, and it will help you navigate things like that. So I think those are some of the very direct ways. Less directly, more behind the scenes, I think everybody's going to be impacted by how this affects medicine. These tools are already showing remarkable ability to diagnose, say, CAT scans or mammograms with extremely good accuracy, far in advance of what humans are able to do, and they can spot problems early, and that's going to make people's lives healthier and safer.

Speaker 1: Well, now let's talk about crossing borders.
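[Editor's note: for readers who want to see what Chilson's accounting example might look like in practice, here is a minimal sketch of asking a chat model to categorize expenses in plain English. It assumes the openai Python client; the model name, categories, and expense entries are placeholders for illustration, not details from the episode.]

```python
# A minimal sketch: asking a chat model to categorize expense entries in
# plain English instead of navigating accounting-software menus.
# Assumes the `openai` Python client; model name and data are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

expenses = [
    "Uber to client meeting, $34.50",
    "Adobe subscription, $59.99",
    "Lunch with supplier, $82.10",
]
categories = ["Travel", "Software", "Meals", "Other"]

prompt = (
    "Assign each expense to exactly one of these categories: "
    + ", ".join(categories)
    + ". Answer as 'expense -> category', one per line.\n\n"
    + "\n".join(expenses)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat-capable model would do
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```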
When you look at emerging consumer technologies like the cell phone, they just cross the border because people buy them. They suddenly become worldwide because, in fact, they're a better outcome for normal people. I mean, are we going to see, then, an extraordinary spread of artificial intelligence capabilities just because they're so enhancing to what we can do?

Speaker 2: Yeah, absolutely. I think we're already seeing this. You know, ChatGPT was the fastest-growing app that has ever been deployed, by a lot. It had one hundred million users within two months, and so there's a real hunger for using these types of tools, and I think they are spreading wildly. And I think that's part of why we have these international summits around AI, where, in a similar timeline for, say, the Internet or the automobile or the telephone, I don't think you saw international summits, because people are so excited about what AI can deliver that everybody wants to figure out how to benefit.

Speaker 1: But also there's now sort of an international information ecosystem, so that every place on the planet has some connectivity in a way that would not have been true in eighteen hundred or nineteen hundred or even nineteen eighty. Let me switch gears for just a second, because one of the stories, which is slightly confusing, has been the whole issue about DeepSeek, the Chinese artificial intelligence company, which has gotten a lot of controversy now. They were claiming that they used OpenAI's technology to develop this brand-new, inexpensive, fabulous model that leapfrogs everything that's going on. What's your sense of that whole story?
Speaker 2: So DeepSeek was a venture fund, a financial firm in China, and they used a bunch of compute that they had to develop these new models. And so there is a controversy about how much they did something called distillation, and this is a tool where you ask a bunch of questions to somebody else's model and you use those answers to help train your own model. And OpenAI says that they have some evidence that DeepSeek did this for one of their models, what's called a reasoning model, the R1 model that they have. I think that's a problem. It's a problem that's relatively easy for companies like OpenAI to solve by screening who their customers are and setting their terms of service. But I think it's also a pretty useful tool often, and most importantly, I don't think it is the primary innovation that DeepSeek developed. Some of their other innovations, around streamlining the process, the algorithms that they use to be more efficient, those are real innovations, and the great thing is, honestly, that they released those and told everybody about them. So US companies are going to be able to retrofit those innovations back into their own software as well. And so overall, I think what DeepSeek reminds us is that, you know, it's not inevitable that the US is going to lead this technology, that we need to step up. We need to continue to innovate. The American system is very strong in this space. Our funding mechanisms, private funding, our markets, and our tech talent are really strong. But we can't rest on our laurels and sit back. This is a fast-moving space and we need to move faster.

Speaker 1: From your perspective, did you see DeepSeek as a real leap forward, or simply a very clever development of existing capabilities?

Speaker 2: So its final models are not better than the cutting-edge models in the US; they are on par with the cutting-edge models in the US.
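[Editor's note: for readers curious what the distillation Chilson describes above can look like in code, here is a minimal sketch: query another provider's model and save the question-answer pairs as training data for a "student" model. The endpoint, model name, and OpenAI-style response shape are illustrative assumptions, not details from the episode.]

```python
import json
import urllib.request

# Hypothetical endpoint and model names, for illustration only.
TEACHER_API_URL = "https://api.example.com/v1/chat/completions"
TEACHER_MODEL = "teacher-model"
API_KEY = "YOUR_API_KEY"

def ask_teacher(question: str) -> str:
    """Send one prompt to the (hypothetical) teacher model and return its answer."""
    payload = {
        "model": TEACHER_MODEL,
        "messages": [{"role": "user", "content": question}],
    }
    request = urllib.request.Request(
        TEACHER_API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
    )
    with urllib.request.urlopen(request) as response:
        body = json.load(response)
    # Assumes an OpenAI-style chat-completions response shape.
    return body["choices"][0]["message"]["content"]

# Collect prompt/answer pairs; these later become supervised
# fine-tuning data for training your own smaller model.
questions = [
    "Explain photosynthesis in two sentences.",
    "Summarize the causes of World War I.",
]
with open("distillation_data.jsonl", "w", encoding="utf-8") as f:
    for q in questions:
        record = {"prompt": q, "completion": ask_teacher(q)}
        f.write(json.dumps(record) + "\n")
```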
So the advancements that DeepSeek reached are more in how you implement these things in a way that's less expensive. It's actually a very common model for Chinese innovation: taking something that the US has innovated, has created, and doing it cheaper, faster, or at bigger scale. And so I don't see it as a giant leap forward; I see it as an incremental improvement. And I think some of the techniques that they used are going to help other companies move forward as well. But yeah, it's a spur to US innovators to stay on top of things.

Speaker 1: You posted on X that DeepSeek's success reminds us that the AI race is global. What do we have to do to make sure we win it?

Speaker 2: Well, in many cases, what we need to do is keep doing what we're doing, but make sure we don't get in our own way as well. We talked a lot about the sort of European model and how people are growing increasingly skeptical of it. Unfortunately, in the US we have people who are advancing those types of models, especially at the state regulatory level. So we have over three hundred and fifty AI regulatory bills that have been introduced around the states already this session, and there's just an avalanche of this stuff coming in, including in red states like Texas, for example. I'm flying down to Austin to talk to some people about these bills, and I think the lesson that we need to take is that we need to continue to double down on the American model of permissionless innovation. Copying the European model of regulation or the Chinese model of centralized command and control is just not going to work for the US. And what has been working in the US has been working really well.

Speaker 1: Ultimately, does the federal government need to make this a federal issue for the national economy, rather than have it broken up and micromanaged by fifty states?
Speaker 2: I think AI development, especially at the model level, is a federal issue. The types of concerns that it might raise around national security in particular are obviously federal issues. And so, even outside of the legal question of whether or not this is an interstate or international issue, I think from a policy perspective this is the type of space where the President should step forward and say this is important enough to the US that we need to be thinking about it on the national level, rather than creating a patchwork of fifty different regulatory environments that will slow down innovation and will especially harm the smaller innovators and the people who are trying to deploy this day to day and get the benefits from these technologies.

Speaker 1: Do you think that the way forward for us will be collaborative with other countries, or will it just be, in the sense that Google and Facebook, now Meta, Microsoft, Apple, these were all American companies that then spread worldwide? As you look at how AI will probably evolve, to what extent do you think it will be American-driven and then adopted overseas, and to what extent do you think it will be collaborative?

Speaker 2: So I think it will be American-driven and adopted overseas. We have a lead at this point, and if we continue to develop this technology and make it accessible to the world, I think we will be the leaders across the world, that it will be our standards and our technology that get adopted. There are some threats to that, however. We have some concerning rules that, for example, the Biden administration put in just before he left office, something called the Diffusion Rule that focuses on AI chips. It tries to deal with some real concerns about, say, China misusing these technologies, but it puts a lot of burdens on countries that are strong allies and completely non-threats.
So we're talking everybody from Israel to Portugal having to follow these what are called Tier Two rules to get chips that are US-manufactured, and I just don't understand the strategic model for that. Most of the world, other than a small select group of countries, is going to have to go through a lot more paperwork to get US technology, and I don't think that really helps our security model. And I think it actually means that it's a market opportunity for China if they are able to develop these kinds of chips.

Speaker 1: Do you think we may actually restrict ourselves and our ability to dominate markets?

Speaker 2: I do. And another example of this: DeepSeek's model, for example, is what's called open weights, which means anybody can download these weights and use them on their own computer, in their own software. This type of open-source development is really important, and it's really useful to researchers and startups who don't want to spend the money to train their own models. And if we in the US restrict the ability to do open-weight or open-source development, which some of our policies have some implication of doing, we could lose that market in the world, and I think that would be bad because, somewhat uniquely to technology, these AI models embody values. They're based on language, and so when they're trained in the West, they have more Western values. And when they're trained in China, for example, the DeepSeek models struggle to tell you anything critical about the Chinese government, and they won't even talk about Tiananmen Square and things like that. And so I think it's to the benefit of US security and influence on the world to have our open-source models be the ones that are adopted around the world.

Speaker 1: OpenAI, how much does that fit that model?
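[Editor's note: to make the open-weights point above concrete, a model whose weights are published can be downloaded and run entirely on your own machine rather than accessed only through a vendor's service. Below is a minimal sketch using the Hugging Face transformers library; the model identifier is a placeholder assumption, not a recommendation from the episode.]

```python
# A minimal sketch of running an open-weights model locally with the
# Hugging Face `transformers` library. The model ID below is illustrative;
# substitute any open-weights checkpoint you are licensed to use.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "some-org/some-open-weights-model"  # hypothetical identifier

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

prompt = "Briefly explain why open-weights models matter for researchers."
inputs = tokenizer(prompt, return_tensors="pt")

# Generation runs entirely on the local machine: no API key,
# no subscription, no calls back to the model's developer.
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```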
Speaker 2: So OpenAI, despite the name, has some open-source technologies, but overwhelmingly they are a closed-source business, where you subscribe or sign up for an account and then use the technology that they are providing, rather than downloading it to your own machine and using it. And that's, you know, a totally viable and important business model; it's not that open source is the only way we should be doing this stuff. But OpenAI's business model right now is not an open-source model.

Speaker 1: So what does that tell us, then, about this whole rivalry between Elon Musk and Sam Altman? I mean, it's almost like a soap opera. They co-founded OpenAI in twenty fifteen, supposed to be a nonprofit. Musk leaves. In twenty nineteen, Altman launches a for-profit subsidiary, which has made it remarkable. And, you know, Musk is now clearly offering a hostile takeover, in effect, supposedly leading a ninety-seven-point-four-billion-dollar fund to take it over. What is all this about?

Speaker 2: That's such a complicated story. It's like a soap opera that's been going on for many seasons at this point. There was a lot of acrimony between Musk and the leadership of the OpenAI nonprofit. Musk was very concerned; his main focus with this was, how can we build it fast to prevent sort of existential risk? And so to him, this wasn't so much about building a company, it was about this safety concern that he had, and I think he thought that some of the choices that were being made weren't the right choices to pursue that goal. Obviously, he has his own rival company, xAI. And as Altman is trying to transition OpenAI from a nonprofit model to a for-profit company, the way they're doing that is complicated, and it has to value the assets of the nonprofit properly and compensate the nonprofit properly.
I see Musk's bid here as largely a sort of lawfare over what is already a very complicated process of moving from a nonprofit to a for-profit. That is, I think at the latest valuation, worth potentially three hundred billion dollars, and I think Musk's bid is just raising the costs to do that. He's making it more complicated to do that. I don't know how serious the bid is; I mean, obviously I think he could get the money together if Altman accepted it. But I don't think Musk had any supposition that Altman, or, I should be clear, the board of the OpenAI Foundation, would accept this offer. But it does make it more legally complicated for Sam Altman's continued transition of OpenAI to a for-profit company.

Speaker 1: It's a fascinating story, and Musk at least implies that if he took it over, they would be much more public and much more open. On the other hand, if that's true, how does he earn back the ninety-seven billion, right?

Speaker 2: It's a tough thing, and I think there are somewhat plausible business models, but it does seem like he would have to spin out at some point a for-profit of his own, or find some sort of funding mechanism to earn back that money. In some ways, the initial donations that Musk gave to OpenAI, and I don't think he really saw them as business investments, but those are on a whole different scale than ninety-seven billion dollars. It's hard to get investors in to do charitable work at that scale.

Speaker 1: That's what I was thinking. Neil, I want to thank you for joining me. I'm sure we're going to come back to you again in the future, because the whole concept of the Abundance Institute is so much down the road of what I believe in and what I think is the future of the country.
I want to let our listeners know they can find out more about the work you're doing as head of AI policy at the Abundance Institute by following you on X at Neil underscore Chilson, or by visiting your website at Abundance dot Institute, and we'll list all that on our web page. And I'm really grateful you took the time to talk to us.

Speaker 2: Well, thank you so much for having me on. It's always a pleasure.

Speaker 1: Thank you to my guest, Neil Chilson. You can learn more about the Abundance Institute on our show page at newtsworld dot com. Newt's World is produced by Gingrich three sixty and iHeartMedia. Our executive producer is Guernsey Sloan. Our researcher is Rachel Peterson. The artwork for the show was created by Steve Penley. Special thanks to the team at Gingrich three sixty. If you've been enjoying Newt's World, I hope you'll go to Apple Podcasts and both rate us with five stars and give us a review so others can learn what it's all about. Right now, listeners of Newt's World can sign up for my three free weekly columns at Gingrich three sixty dot com slash newsletter. I'm Newt Gingrich. This is Newt's World.