Speaker 1: On this episode of Newt's World, I'm speaking with James Broughel, senior fellow at the Competitive Enterprise Institute and author of a brand new CEI report on artificial intelligence that came out just yesterday, entitled Rules for Robots: A Framework for Governance of AI. James, thank you, and I really appreciate your joining me on Newt's World.
Speaker 2: Thank you so much for inviting me. It's great to be with you.
Speaker 1: So why did the Competitive Enterprise Institute issue this study, Rules for Robots: A Framework for Governance of AI?
Speaker 2: Well, I would say a couple of reasons. I mean, ever since ChatGPT came out, this new chatbot that OpenAI put out late last year, there's just been a lot of discussion in Washington, DC about regulation of artificial intelligence. It's a cutting-edge new technology, it's getting a lot of attention, and there are a lot of people who want to regulate it. So I think the question of what a sound regulatory environment or system looks like for AI was important. The other thing is just that a lot of the debates about AI, to me, haven't been grounded in evidence, and there are a lot of very hyperbolic claims coming from journalists or from Internet bloggers about how this new technology is going to lead to all sorts of terrible consequences, all the way up to the end of the world or the end of humanity. And there are grandiose proposals being made for international regulatory agencies, or new federal regulatory bodies, or banning the most advanced AI systems, or pausing AI for six months or more. And I just wanted to explain to some of these commentators that we actually have a pretty good system of understanding, I would say, of what evidence-based regulation looks like. It's been developed over the last century, and we know the kind of data and evidence that should be brought to bear to make regulations that achieve good outcomes for the public.
And this discussion just does not seem to be grounded in any of that kind of data or evidence.
Speaker 1: I mean, when I think about artificial intelligence in the more hysterical terms, it strikes me that a lot of that comes out of the movie The Terminator.
Speaker 2: That's certainly part of it. So I mean, I think at its simplest, the concern is just that if we create a technology that is smarter than humans, we won't be able to control it, and that will lead to some kind of set of unintended consequences that could spiral out of control. Exactly what the chain of events is that leads to that is usually not spelled out very clearly. But that's the concern. But I should also point out that that's a longer-term concern, if it's even a problem at all. While we have artificial intelligence that might be smarter than people in particular instances, I mean, your calculator is smarter than a typical person at doing multiplication with large numbers, we don't have this kind of artificial general intelligence that's smarter than people across a wide variety of domains, and it's not clear when that would happen or whether it's even possible. And so while I think it's reasonable to think about those risks, for academics to be studying it, there are actually a whole host of nearer-term risks that are already upon us that we should be thinking about, related to privacy or discrimination or transportation or others. We can get into any of these issues, but those should be more of the focus. Yet a lot of the online discussion, the chatter coming out of Washington, DC, is about these existential threats to humanity.
Speaker 1: So let me start at the most basic level. How would you describe the difference between just mass computation and artificial intelligence?
Speaker 2: Well, just defining artificial intelligence can be challenging.
I mean, a typical definition would just be that it's machine technology that has human-level capabilities at some task, but that obviously would include what we consider some basic technologies at this point, like a pocket calculator or an Excel spreadsheet. And how artificial intelligence gets defined, legally speaking, is going to be very important, because all kinds of technologies could get wrapped up in this. The basic definition that academics talk about is machine technology with human-level capabilities at some task, but that's again very general, so definitions are going to be important.
Speaker 1: In terms of just computing power, there are already all sorts of systems, I think of the air traffic control system for example, or submarine warfare. There are already lots of places where activities occur at levels of speed and levels of dealing with data that no human could possibly match.
Speaker 2: Right. So in some respects this is not new, and we deal with artificial intelligence already in our everyday lives. I mean, if you use any kind of social media platform, Facebook, Twitter, Instagram, there's an artificial intelligence algorithm that's elevating certain posts so you see them, and it's often based on what you've clicked on before. They try to figure out what you like and gear content toward that. There's artificial intelligence used in financial markets. I mean, we saw the flash crash, I think it was in twenty ten, where there was kind of a sudden collapse of stock prices. That's an example of probably some of these algorithms kind of going haywire for a moment. It didn't really lead to any long-term problems, fortunately, but that's an example of a risk that could arise from these technologies. But again, that was more than a decade ago at this point, so we're really already in the midst of an artificial intelligence revolution,
I guess you'd say, or it's part of the mainstream market ecosystem. But it's these Internet chatbots, this generative AI technology, as it's called, that can produce text, video content, and imagery very rapidly and very realistically, that is the most recent development, and I think that really took people by surprise when it started rolling out last year. And that technology also appears to be advancing very rapidly. I think even in the next few months we'll see some pretty significant improvements in it, and so that seems to be what got people's attention.
Speaker 1: Hasn't it, for example, already dramatically lowered the cost of making movies that used to be hand drawn, in terms of the kind of films that are cartoons? And there are now even mainstream films that are entirely done by computer.
Speaker 2: You could probably consider CGI technology to be a branch of artificial intelligence. And you may have followed some of the strikes that were taking place in Hollywood, where the writers went on strike, and they're very concerned about artificial intelligence, because ChatGPT, the chatbot, can produce pretty decent scripts within minutes or even seconds, and so they're worried about their jobs being replaced. And the next stage is video, where you're just going to be able to say, hey, make me a two-hour movie script and an accompanying video about whatever topic, and the artificial intelligence is going to be able to create it. So there's a lot of concern about job loss. I think in the near term it's going to make these workers more productive. It's going to make office workers more productive. You can write that office memo in a minute rather than an hour. That could potentially be a boon for workers, but there are certainly some concerns about job loss as well, and Hollywood is one area where that's the case, but there are others as well.
Speaker 1: Yeah.
I was down at the Air University recently, which is run by the Air Force, and they now allow their students to use ChatGPT for the initial research. I thought that was a very interesting evolution for what would normally be a pretty conservative institution about having to do your own work. But they just said it's ubiquitous now.
Speaker 2: It's changing the education landscape, and I think that's the smart approach. You're not going to be able to ban students from accessing the Internet, and it's going to make students more productive in some sense. We don't want to say students can't do a Google search to assist them with their research project. And if they can write a first draft of their paper in ten minutes, because it just takes some time to put together a prompt to enter into ChatGPT, then they can benefit from that productivity boost, but also do the follow-up work to clean up the writing and ensure that everything the chatbot said is accurate, because sometimes they can hallucinate, as it's called, where they make mistakes. So there's additional follow-up work that has to be done. But this is going to be the future. This is going to be the way workers and writers work in the future, and we should embrace that.
Speaker 1: I do a lot of my own writing and research by just going on Google. It's astonishing, given that when I started as a graduate student, my work was in the Bibliothèque Albert in Brussels. You literally had to go to the librarian and hand her a card, and then she would go into the stacks to get the book that they would then let you read in the library, but you couldn't take it out, and you had to check it back in at the end of every day and go back in the next day to get it. And I think back to how difficult and laborious and time consuming that was. And now I just sit with my iPad and I can pull up so much information so rapidly.
It's already a revolution of extraordinary power compared to the world of twenty or thirty years ago.
Speaker 2: It's a great democratizer in many ways. The Internet in general, Google search, and artificial intelligence chatbots and similar technologies are leading the way in that as well. Obviously I grew up with the Dewey decimal system and all that as well. I think maybe that's still used a little bit. But when you can access all these books online, or when the artificial intelligence chatbot has read all the books already and you can just get a summary of it, I mean, that's something ChatGPT is great for. You don't have time to read War and Peace? Get a chapter-by-chapter breakdown, and you can go into as much or as little detail with the AI as you'd like. I mean, it'll just increase productivity and the number of books you can at least have a basic understanding of what's in them.
Speaker 1: It is amazing. It's a revolution. Let me ask you about the role of Congress and state legislatures. First, what should individuals and their staffs do to educate themselves? And then I want to come back and talk about what they should focus on in terms of concerns. But first of all, if you were a state legislator or a congressman or a staff member listening to this, what would your advice be for how they can get educated on the whole contour of artificial intelligence?
Speaker 2: Sure. So, I mean, the first thing they could do is they could read my paper.
Speaker 1: After they read your paper, what should they do?
Speaker 2: I would say a couple of things. So first, I mean, there's a lot of good work coming out from the think tank community. I think the Cato Institute is doing good work in this area, R Street; my former colleague Adam Thierer, who I used to work with at the Mercatus Center, is just pumping out research papers left and right. He's done a lot of great work on this topic.
If you just want to get a basic foundation, he has, I think, a primer on artificial intelligence online that is absolutely great, a great resource for just getting a lay of the land. The second thing I would say is policymakers shouldn't rush to judgment. I mean, there are so many proposals out there to just create a new regulatory agency, but the reality is that regulators already have considerable authority to regulate artificial intelligence. We have antidiscrimination laws. We have transportation regulators who are regulating automobiles and taking a close look at driverless car technology as that's coming online. We have privacy laws and intellectual property laws. Now, some of these policies are going to have to be updated, but I think the best thing that policymakers can do is deal with problems as they arise on a case-by-case basis, rather than try to take a broad brush and address everything with one super regulatory body. The reality is the types of issues that arise with AI are so varied, from healthcare diagnostics to education policy, as we talked about, to transportation and privacy. They're so wide ranging that the different solutions are going to have to be tailored to the specific problems as they arise. And so I would say, assess these problems on a case-by-case basis as they arise. And one thing they can do is also just fund more research. I mean, there's surprisingly little academic research on artificial intelligence and the risks related to it. That's starting to change. There are now papers coming out, but a year, two, three years ago it was pretty sparse, and a lot of the research is coming from the technology companies rather than from the academics, which is interesting. I mean, it's a little different than what we usually see. So do your homework, do more research, and address these issues on a case-by-case basis.
Speaker 1: There seems to be an emerging number of very specific kinds of challenges, for example the ability to imitate somebody so well that you can get a phone call from what you think is a person, but in fact it's a phone call from an artificial intelligence. What are the sort of real-time, immediate problems today that we should be aware of and that may require some kind of regulatory intervention?
Speaker 2: One that's received a lot of attention in recent months is just this idea of misinformation or deepfakes, where images are produced of people that aren't real, but they look real and can't be distinguished from real imagery, and similarly for videos or audio. I received a call on my cell phone, I don't know, maybe six months ago, where I was listening to the message and it was clearly some kind of scam call. It sounded like a perfectly normal American voice on the message, and about halfway through, the filter of whatever algorithm was being used kind of broke down and I could hear the same person speaking, just in a heavy Chinese accent. And so it was clear that somebody had been speaking into kind of a filter that made their voice sound like it had an American accent. And so I think we can definitely expect this kind of mis- and disinformation to ramp up, and more scams along these lines. It's also going to affect our elections and misinformation in political campaigns. So one of the first areas where I think you might see legislation on AI is surrounding political messaging, and probably what we'll see are requirements that the use of AI be disclosed in some kind of format. Now, is this necessarily going to solve all these problems? Probably not, but the first step might be some kind of mandatory disclosure when AI is being used. But as far as fraud is concerned, ripping people off, running scams, that's already illegal. So there are already laws on the books outlawing a lot of this activity.
But it's just going to be that much harder for regulators to keep up as the technology ramps up. So the challenge may not necessarily be what new regulations are going to have to come as much as how the regulators are going to keep track of all these new scams. How many scam phone calls do you get on a daily basis now, spam calls? I mean, most calls I get on my cell phone are probably spam these days, and it just doesn't seem like the regulators are able to stop it.
Speaker 1: There was a letter that was very widely signed by people who are in the artificial intelligence business really calling for a pause. What do you think motivated that letter?
Speaker 2: I would say a couple of things. So certainly there are some people in the technology industry who are generally concerned about there being some kind of catastrophe as artificial intelligence spirals out of control due to this innovation race that we're in. I take their concerns as genuine. The problem is, at a minimum, even if the risks they're talking about are real, their proposal, which is basically to pause AI development for six months, isn't really going to do anything to stop that. At best, it probably will lead some of America's companies to shut down development for a few months and then just ramp it right back up, and then we'll see basically the same innovation on a six-month delay. At worst, I wouldn't expect China to pause AI development for six months. I wouldn't expect any bad actors who might be working on AI development in their basements, who might have malicious intent, to pause development for six months. I think that their solution is a pretty good example of the kind of evidence-free proposal that I'm trying to address in my paper on this topic.
The other thing I would say is we really need to be wary of the potential for regulatory capture, which is this idea that the industry that is being regulated kind of tailors regulations to its benefit at the expense of competitors. And so we should be skeptical when we see some of these leading tech industry tycoons from OpenAI or from Microsoft and Google coming to Washington, DC, and saying, please regulate us. I mean, they are the incumbents right now. They may not be the leading innovators a year from now. I mean, this is moving so quickly, and they know that if heavy-handed regulation comes down from Washington, DC, the regulators are going to sit down at the table with them. They can get their regulations tailored so that they suit the kind of technology they're currently using, and maybe that their smaller competitors aren't, and it's a good way to shut out competition. And so that is a very real danger, and I think that also explains why some leading innovators and tech industry people signed that letter.
Speaker 1: There's some concern that artificial intelligence could actually shape the election in a variety of ways. Do you know if the Federal Election Commission is actually looking at AI as an issue?
Speaker 2: So I don't know offhand whether that specific agency is looking at this, but I know that across the government many agencies are looking carefully at AI, and so I would be very surprised if they were not looking into this issue. You know, we already saw in some of the recent elections concerns about interference from Russia, or interference from bots on Twitter or on Facebook elevating certain messages that may be true or may not be true. So in essence, we're already in this climate where there are concerns about misinformation. You know, you may have thought about whether those concerns are overblown or whether they're really worth taking very seriously.
But we're probably going to see more of that in the future, and I would expect some action from regulators. A lot of regulatory agencies right now are just coming up with AI policies for their own internal systems, how they're going to rely on AI. But the next step is that a lot of agencies are going to be issuing regulations where they have authority under current law; they're going to try to issue regulations that affect AI development or the use of AI. And I'm sure that's true in the case of the Federal Election Commission, and it's true in other agencies as well. I mentioned the Securities and Exchange Commission, and we're going to see more of that.
Speaker 1: What would you say are the major centers of studying this? Where would you send people to go to find the leading expertise?
Speaker 2: There's a lot of research coming out of Oxford and Stanford; these are probably some of the leaders, but also just Google, Microsoft, OpenAI, and Anthropic. These are some of the leading tech companies, and some of the best computer programmers and experts on artificial intelligence and AI safety issues and risk issues are working in the private sector, and so you can find research from them online as well. I think OpenAI now has a whole AI safety division working for them, and they're devoting a considerable amount of their computing power just toward looking at some of these risk issues. So the private sector is kind of leading the way on this, and academics are a little behind.
Speaker 1: Are there any countries that seem to be particularly intensely focused on developing AI?
Speaker 2: I would say that the United States is clearly the innovator in this area; our big tech companies like Microsoft and Google, Facebook, Meta, are really kind of at the cutting edge of this technology. In the UK, in London, there's DeepMind, which is, I think, part of Google. They've been leading the way.
Now, China is obviously investing a lot in this technology, and they're using a lot of artificial intelligence domestically. There's a significant amount of surveillance activity going on using AI, with facial recognition technology being used widely in some parts of China. Increasingly, military technologies are relying on artificial intelligence as well, which raises all kinds of other questions. So right now there's kind of a race going on, I would say, between the United States and China, with the United States currently in the lead, but it doesn't necessarily need to stay that way. And Europe, as is sometimes the case, is kind of ceding this territory to other countries; they're focused far more on regulating artificial intelligence. It hasn't passed yet, but it looks like there's a good chance that their AI Act is going to pass before the end of the year. It's going to be a very sweeping new AI law. Assuming it passes, it has the potential to really kill innovation on the continent, and we'll see what the ramifications of that are. But they're focused much more on regulating as opposed to innovating.
Speaker 1: And we saw that happen with pharmaceuticals, where they basically drove pharmaceutical research out of Europe and didn't really seem to even understand that's what they were doing at the time it happened.
Speaker 2: They had a sweeping privacy law a couple of years ago as well, which seems to have really hampered innovation. And this is the pattern.
Speaker 1: Where does India fit in that? Clearly there are remarkable Indian leaders in American companies. Do you sense in India itself any significant effort in this area?
Speaker 2: India is definitely somebody to be on the lookout for. A surprising number of tech CEOs, and CEOs in general, in the United States now are of Indian origin. Some of the leading universities in India, whose names escape me at the moment, are very focused on tech.
I would say it's not clear whether domestically they're going to be producing companies that are cutting edge in this area, but they're certainly pumping out talent who may go abroad or may come to American companies or go to some of these leading companies and be at the cutting edge and leading. I think they're definitely going to be at the forefront, but it remains to be seen how many companies they produce that are cutting edge.
Speaker 1: To really do cutting-edge artificial intelligence work, you have to have a pretty big resource base and you have to have a pretty large number of people. This is not like the Wright brothers inventing the airplane.
Speaker 2: Right. So at the moment, the big tech companies that are really leading the way have the tremendous amount of computing power that is required to operate these algorithms like ChatGPT or Claude or some of the other ones, and that is already leading to discussions about energy use. And we saw this debate with cryptocurrencies, for example with bitcoin: there are people who don't like all the energy use that's going into bitcoin mining activity, and they're trying to restrict these technologies by going after their energy use. And I think we're going to see something similar in the AI space. Now, whether it will continue to be the case that you need tons of computing power and these big data centers using all kinds of energy to run these algorithms, it may become easier over time. It may be that this connection between computing power and cutting-edge technology, that link, breaks over time. And another thing to look out for is that some of these AI technologies are going to be open source, where essentially the technology is made freely available for others to make use of as they will.
And so I think that that's going to lead to more people in their basements experimenting with this technology and not necessarily making it dependent on big data centers.
Speaker 1: Most of us have no idea how massive the investment is worldwide in sustaining the modern Internet, the cloud, and all these different capacities that we just take for granted, when in fact it's an enormous investment to maintain it and sustain it and grow it. I pick up my phone or my iPad or whatever, and I just assume it's all going to work, and it does, but behind that are billions and billions of dollars and hundreds of thousands of hours of first building it and then sustaining it. I mean, doesn't that strike you as kind of amazing?
Speaker 2: It's absolutely amazing, and we take it for granted. I mean, the Competitive Enterprise Institute, where I work, does a lot of research on electricity and the energy grid and environmental policy, and there's a real risk that our electricity isn't going to be as reliable as it was ten, twenty, thirty years ago as a result of a lot of the government policies that are coming out. We hope that there will be a transition to cleaner energy sources and it will all go smoothly and there won't be any problems associated with that, but most likely there are going to be hiccups along the way. And if energy becomes less reliable, then the technology that uses all that energy is going to become less reliable. And that's even aside from all the investments you're talking about in the technology itself: if you discourage investment by regulating a technology in too overhanded a manner, then the rate of innovation is going to slow down and we're going to have less growth and less improvement in living standards.
Speaker 1: As a result, if we follow the European approach and over-regulate, to what extent do you think that would simply drive research and development out of the US and to other places?
Speaker 2: So that's certainly a concern with artificial intelligence, because anyone can write computer code from pretty much anywhere. All you need is a laptop and some computing power. And a major concern is that if we overregulate here, then China will basically just pick up the slack, and China would more than love for that to happen. There are not just economic concerns here, there are national security concerns. I mean, if this is going to be how warfare takes place in the future, with military drones running on autonomous systems, then the more rapid, the more sophisticated, and the more strategic a response you want, the more it's going to rely on a sophisticated AI algorithm. And so militarily, I mean, we almost have to be the leaders in this area if we don't want to cede ground to China. They're just waiting for us to mess this up. And fortunately, I think the policymakers really understand this. I think there's bipartisan recognition that we can't lose this race; this is too important. But yet we still see the proposals that are out there. They're pretty heavy-handed at the moment. I'm not sure how realistic their chances of passing are, at least in the near term. But there is a danger that you get one kind of big news story about something going wrong and the policymakers react by passing some big, sweeping piece of legislation. I mean, Dodd-Frank was probably a good example of this in financial markets, and something similar could happen with AI. So that's something to be concerned about.
Speaker 1: I mean, the reform could actually do more damage than the thing it was trying to fix.
Speaker 2: Yeah, it's the law of unintended consequences. It's very hard to predict how policies are actually going to impact the real world.
Speaker 1: This is such a huge topic. It covers so many different things. I'm delighted that you have done this Rules for Robots: A Framework for Governance of AI.
But I have to ask you, James, what are you going to work on next?
Speaker 2: So the purpose of this paper was to try to present a framework for judging policy proposals. In the paper, I basically outline a four-step process for determining whether a regulatory proposal makes sense in the context of AI, and that includes steps like defining a problem that exists in the marketplace, identifying an outcome that's desired, considering different alternatives to solve that problem and achieve the outcome you want, and then ranking the alternatives based on efficiency or cost concerns or risk concerns. And so this paper was basically to say, hey, here's a framework for judging policies. Next, I'd like to actually go out and judge some of the proposals that have been put out. I just recently wrote a short comment to the Securities and Exchange Commission about a regulation they proposed, and I essentially applied this framework. That regulation relates to technologies that broker-dealers and investment advisors are utilizing to make recommendations to investors, or in some cases that are used on investment apps like Robinhood that people are using and that are encouraging more trading activity. And I just said, hey, okay, the SEC says that these new technologies are a problem, that they could cause a financial crisis. What evidence did they provide that this problem is real? Did they define what the objective is? How seriously did they consider alternative solutions? And I found that the proposal, judged by this framework, just kind of fell flat. They didn't provide anything other than really anecdotal evidence. They weren't clear about what they're trying to achieve. They estimated some costs of their proposed solution, but they didn't consider any other proposals very seriously, and they didn't estimate any benefits from the regulation, which to me suggests there are a lot of costs but really no benefits to speak of. The regulation is probably not going to accomplish much good.
I see myself doing more work like that, looking at individual policies and applying this framework.
Speaker 1: If you look back over the long history of these things, being a little slow and a little cautious in regulating very often is dramatically more profitable for the society than over-regulating and causing people to quit trying, or causing people to have cost structures so high that the barrier to entry for anybody with a new idea becomes impossible. James, I want to thank you for joining me and for sharing the CEI study, Rules for Robots: A Framework for Governance of AI. I think we're going to be back on this topic again and again, and I really appreciate you helping us get a general overview of what's coming down the road, how important it is, and the kind of questions we should ask before we rush off and do something. So thank you very, very much.
Speaker 2: Thanks so much for having me. I had a great time.
Speaker 1: Thank you to my guest, James Broughel. You can get a link to his new study for CEI, Rules for Robots: A Framework for Governance of AI, on our show page at newtsworld.com. Newt's World is produced by Gingrich 360 and iHeartMedia. Our executive producer is Guarnsey Sloman. Our researcher is Rachel Peterson. The artwork for the show was created by Steve Penley. Special thanks to the team at Gingrich 360. If you've been enjoying Newt's World, I hope you'll go to Apple Podcasts and both rate us with five stars and give us a review so others can learn what it's all about. Right now, listeners of Newt's World can sign up for my three free weekly columns at gingrich360.com/newsletter. I'm Newt Gingrich. This is Newt's World.