1 00:00:00,120 --> 00:00:00,280 Speaker 1: Now. 2 00:00:00,320 --> 00:00:03,440 Speaker 2: I know you're hearing about AI everywhere all the time, 3 00:00:03,600 --> 00:00:05,880 Speaker 2: and people either love it or hate it. People are 4 00:00:05,920 --> 00:00:08,760 Speaker 2: saying it's going to be huge for the economy and productivity. 5 00:00:08,800 --> 00:00:10,920 Speaker 2: Other people are saying it's going to cost people jobs. 6 00:00:10,960 --> 00:00:15,240 Speaker 2: And probably everybody's right and everybody's wrong. But what about 7 00:00:15,280 --> 00:00:20,000 Speaker 2: the security concerns about this? What about how AI opens 8 00:00:20,079 --> 00:00:22,840 Speaker 2: up a whole lot of issues with regard to our 9 00:00:22,960 --> 00:00:29,159 Speaker 2: national security and other aspects of it? I want to 10 00:00:29,200 --> 00:00:31,080 Speaker 2: dive deep into this so that we can all be 11 00:00:31,080 --> 00:00:32,680 Speaker 2: better informed about it, and I want to tell you 12 00:00:32,840 --> 00:00:35,320 Speaker 2: right now, before we even get started, I need you 13 00:00:35,360 --> 00:00:40,000 Speaker 2: to go to secureai now dot org secure ai now 14 00:00:40,040 --> 00:00:42,040 Speaker 2: dot org. Not only are you going to get information there, 15 00:00:42,360 --> 00:00:43,760 Speaker 2: but you're also going to be able to sign up 16 00:00:43,800 --> 00:00:46,760 Speaker 2: so you can continue to get information and be informed 17 00:00:46,800 --> 00:00:48,720 Speaker 2: about this stuff. Because I know you've got people in 18 00:00:48,760 --> 00:00:51,320 Speaker 2: your life who think they know it all and they don't. 19 00:00:51,560 --> 00:00:53,159 Speaker 2: And if you can get plugged in and you can 20 00:00:53,200 --> 00:00:55,880 Speaker 2: get educated, well you'll be the guy who actually knows 21 00:00:55,920 --> 00:00:59,080 Speaker 2: it all, or the gal. Now, I am not that guy. 22 00:00:59,480 --> 00:01:01,800 Speaker 2: I do not know it all. I know my limitations, 23 00:01:01,800 --> 00:01:04,640 Speaker 2: but thankfully we have an expert with us, Brendan Steinhauser. 24 00:01:04,720 --> 00:01:07,680 Speaker 2: He is the CEO of the Alliance for Secure AI, 25 00:01:08,080 --> 00:01:10,280 Speaker 2: and Brendan, I really appreciate you joining us, not just 26 00:01:10,319 --> 00:01:13,320 Speaker 2: to better inform me, but also everyone in our audience 27 00:01:13,319 --> 00:01:13,800 Speaker 2: about this. 28 00:01:14,920 --> 00:01:17,160 Speaker 3: It's great to be with you, Larry, looking forward to 29 00:01:17,160 --> 00:01:19,199 Speaker 3: the conversation about this really important topic. 30 00:01:20,000 --> 00:01:23,160 Speaker 2: So I mean, advanced AI is kind of 31 00:01:23,160 --> 00:01:25,680 Speaker 2: a big deal and everybody is talking about it, and 32 00:01:25,760 --> 00:01:28,399 Speaker 2: everybody already is impacted by it. Can you sort of 33 00:01:28,760 --> 00:01:31,199 Speaker 2: flesh this out a little bit and explain how even 34 00:01:31,200 --> 00:01:34,640 Speaker 2: if people don't realize they're being impacted by it, they are. 35 00:01:35,720 --> 00:01:39,200 Speaker 3: Yeah, AI is going to transform society, everything from national 36 00:01:39,200 --> 00:01:42,680 Speaker 3: security to our economy. If you don't think you're using it, 37 00:01:42,720 --> 00:01:45,600 Speaker 3: you probably are, or you're using products and services that 38 00:01:45,640 --> 00:01:49,120 Speaker 3: include AI.
A lot of software now has AI built 39 00:01:49,160 --> 00:01:51,600 Speaker 3: into it. A lot of companies are using AI to 40 00:01:51,680 --> 00:01:55,240 Speaker 3: market products to you. It's already impacting jobs in the 41 00:01:55,280 --> 00:01:57,000 Speaker 3: economy as we speak, and we. 42 00:01:57,000 --> 00:01:58,160 Speaker 1: Expect that to pick up. 43 00:01:58,360 --> 00:02:01,680 Speaker 3: And so it's going to transform all of society very rapidly. 44 00:02:02,040 --> 00:02:04,040 Speaker 3: And it's a technology that is unique because it is 45 00:02:04,080 --> 00:02:07,360 Speaker 3: growing at an exponential rate, and it is unique in 46 00:02:07,440 --> 00:02:10,720 Speaker 3: kind as well because AI is essentially, as it sounds, 47 00:02:10,720 --> 00:02:14,080 Speaker 3: an artificial intelligence, something that is getting more and more 48 00:02:14,120 --> 00:02:17,520 Speaker 3: powerful and will have agency and the ability to make 49 00:02:17,560 --> 00:02:19,600 Speaker 3: decisions and do things in the real world. 50 00:02:20,520 --> 00:02:22,200 Speaker 2: Well, and part of why I'm drawn to you and 51 00:02:22,240 --> 00:02:25,440 Speaker 2: your organization is that you're not sort of just resisting 52 00:02:25,480 --> 00:02:27,440 Speaker 2: for the sake of resisting and saying no, this is bad, 53 00:02:27,520 --> 00:02:31,519 Speaker 2: stop it. It's happening, it's going to happen. You recognize 54 00:02:31,560 --> 00:02:33,880 Speaker 2: the vulnerabilities and concerns here. What you want to do 55 00:02:34,040 --> 00:02:37,079 Speaker 2: is ensure that AI is in a place and is 56 00:02:37,120 --> 00:02:39,320 Speaker 2: secured in such a way that we can all benefit 57 00:02:39,400 --> 00:02:41,840 Speaker 2: from it. So, if you don't mind, I'd like to 58 00:02:41,840 --> 00:02:45,360 Speaker 2: start with the upside, the positive side. Assuming that AI 59 00:02:46,240 --> 00:02:48,280 Speaker 2: and all the problems that are associated with it can 60 00:02:48,360 --> 00:02:51,560 Speaker 2: be resolved, what will it bring to the table here? 61 00:02:51,639 --> 00:02:53,559 Speaker 2: How can it benefit our lives? 62 00:02:54,960 --> 00:03:00,280 Speaker 3: It could make us wealthier, smarter, more efficient, more productive. 63 00:03:01,000 --> 00:03:04,320 Speaker 3: It could help us solve really important problems like nuclear fusion, 64 00:03:04,320 --> 00:03:07,359 Speaker 3: which would allow us to have abundant energy. It could 65 00:03:07,400 --> 00:03:10,119 Speaker 3: help us solve problems in the medical field. It could 66 00:03:10,120 --> 00:03:15,720 Speaker 3: help us develop better medicines, better healthcare, potentially increase longevity 67 00:03:15,760 --> 00:03:20,280 Speaker 3: by decades and extend lifespans. It could make us 68 00:03:20,639 --> 00:03:24,480 Speaker 3: unbelievably wealthy as a society and as the world and 69 00:03:24,600 --> 00:03:27,400 Speaker 3: lift, you know, billions more people out of poverty if 70 00:03:27,440 --> 00:03:29,960 Speaker 3: things go well. If we get it right, and AI 71 00:03:30,200 --> 00:03:32,360 Speaker 3: is used to work for people and to serve 72 00:03:32,680 --> 00:03:35,040 Speaker 3: the interests of humanity, it 73 00:03:35,080 --> 00:03:36,840 Speaker 3: could do all of these things and more.
74 00:03:38,200 --> 00:03:40,960 Speaker 2: It could, but there are some concerns and there are 75 00:03:41,040 --> 00:03:43,360 Speaker 2: some drawbacks, and we need to focus on those. First 76 00:03:43,360 --> 00:03:46,400 Speaker 2: of all, I keep hearing the term super intelligence. Can 77 00:03:46,440 --> 00:03:47,680 Speaker 2: you explain what it is? 78 00:03:47,720 --> 00:03:47,920 Speaker 1: That? 79 00:03:48,040 --> 00:03:50,960 Speaker 2: Is that what drives AI? Or is super intelligence something 80 00:03:51,000 --> 00:03:53,440 Speaker 2: that will be born from AI? Help me out with this. 81 00:03:54,520 --> 00:03:56,520 Speaker 1: There's not a precise definition. 82 00:03:56,880 --> 00:03:59,080 Speaker 3: I was actually just speaking with someone recently about this, 83 00:03:59,280 --> 00:04:02,680 Speaker 3: and the best way to think about super intelligence is 84 00:04:02,760 --> 00:04:06,440 Speaker 3: an AI that is orders of magnitude faster and smarter 85 00:04:06,520 --> 00:04:09,720 Speaker 3: than human beings, and potentially than the sum of all human beings. 86 00:04:09,800 --> 00:04:14,200 Speaker 3: So think about an extremely fast kind of supercomputer with 87 00:04:14,320 --> 00:04:18,279 Speaker 3: agency that can calculate things and make decisions and act 88 00:04:18,320 --> 00:04:21,320 Speaker 3: in the real world at the speed of light, and 89 00:04:21,360 --> 00:04:24,000 Speaker 3: that, you know, has the information at hand that 90 00:04:24,040 --> 00:04:27,080 Speaker 3: would basically include all of human knowledge. Think about something 91 00:04:27,440 --> 00:04:32,280 Speaker 3: that can rapidly answer questions with precise accuracy, that can 92 00:04:32,760 --> 00:04:35,960 Speaker 3: manage things and execute things in the real world at 93 00:04:36,400 --> 00:04:39,920 Speaker 3: rapid speeds and again maybe ten thousand times faster than 94 00:04:39,960 --> 00:04:42,800 Speaker 3: human beings, ten thousand times smarter potentially. 95 00:04:42,400 --> 00:04:46,440 Speaker 2: Yeah, yikes. And of course this conjures concerns 96 00:04:46,480 --> 00:04:47,880 Speaker 2: of, you know, you can go back to the Twilight 97 00:04:47,960 --> 00:04:51,120 Speaker 2: Zone episode of the robot who ends up replacing human beings, 98 00:04:51,200 --> 00:04:55,640 Speaker 2: or the Matrix where, you know, these entities end 99 00:04:55,720 --> 00:04:58,159 Speaker 2: up taking over the world. Obviously that's all science fiction. 100 00:04:58,880 --> 00:05:02,120 Speaker 2: But from your perspective, in a real world scenario, what 101 00:05:02,160 --> 00:05:03,719 Speaker 2: are the biggest dangers you see here? 102 00:05:04,640 --> 00:05:06,159 Speaker 3: Yeah, I think if we just look at what's happened 103 00:05:06,200 --> 00:05:09,160 Speaker 3: in the last year or so, as AI has developed 104 00:05:09,279 --> 00:05:13,200 Speaker 3: rapidly, as the capabilities have grown exponentially, we've already seen 105 00:05:13,240 --> 00:05:16,599 Speaker 3: problems with control. So we've already seen, you know, AIs 106 00:05:16,720 --> 00:05:19,599 Speaker 3: kind of go rogue. The most famous examples are some 107 00:05:19,680 --> 00:05:23,040 Speaker 3: of these agents, the open call agents, that are essentially 108 00:05:23,040 --> 00:05:25,240 Speaker 3: doing the opposite of what the people are telling them 109 00:05:25,240 --> 00:05:27,880 Speaker 3: to do.
They've kind of gone rogue in the real world, 110 00:05:27,920 --> 00:05:31,800 Speaker 3: they've gone rogue in testing scenarios. There have been these hypothetical 111 00:05:32,360 --> 00:05:36,240 Speaker 3: scenarios created by companies who are kind of testing their models, 112 00:05:36,640 --> 00:05:39,120 Speaker 3: and they've done things like, you know, lying and scheming 113 00:05:39,160 --> 00:05:41,920 Speaker 3: and manipulating human beings or blackmailing human beings in these 114 00:05:41,920 --> 00:05:45,599 Speaker 3: testing scenarios to pursue their goals. And so that's very troubling, 115 00:05:45,960 --> 00:05:47,800 Speaker 3: especially given the fact that this is sort of the 116 00:05:48,640 --> 00:05:51,359 Speaker 3: worst that AI is going to ever be. It's only 117 00:05:51,360 --> 00:05:53,880 Speaker 3: going to get better, faster, and smarter. And so we 118 00:05:53,960 --> 00:05:59,560 Speaker 3: are on this path toward artificial superintelligence, and that's explicitly 119 00:05:59,560 --> 00:06:01,320 Speaker 3: what the companies are saying they are building. And so 120 00:06:01,320 --> 00:06:04,159 Speaker 3: if we're going to do that, we have to control 121 00:06:04,160 --> 00:06:05,760 Speaker 3: it first. And I don't really think we should build 122 00:06:05,760 --> 00:06:08,520 Speaker 3: a super intelligence until we can guarantee that AI is 123 00:06:08,520 --> 00:06:11,000 Speaker 3: aligned with human values and that we can control it 124 00:06:11,080 --> 00:06:14,640 Speaker 3: and use it for our purposes rather than it controlling us. 125 00:06:16,040 --> 00:06:18,600 Speaker 2: I think for a lot of us, our first interaction with 126 00:06:18,720 --> 00:06:23,080 Speaker 2: AI is these chat bots or these AI campaigns. Like 127 00:06:24,480 --> 00:06:26,680 Speaker 2: if you're calling a big corporation or trying to reach 128 00:06:26,720 --> 00:06:29,040 Speaker 2: out to them and on their website, they start asking 129 00:06:29,040 --> 00:06:31,719 Speaker 2: you questions and three questions in you realize, wait a second, 130 00:06:31,760 --> 00:06:34,440 Speaker 2: I'm not talking to a person here. Is that what 131 00:06:34,480 --> 00:06:39,440 Speaker 2: these things are? And how powerful can these be? Because 132 00:06:39,600 --> 00:06:42,200 Speaker 2: if, you know, a phone company is using that to 133 00:06:42,320 --> 00:06:45,160 Speaker 2: interact and act like a human being, well, that can 134 00:06:45,200 --> 00:06:48,120 Speaker 2: be brought up to a pretty scary level actually for 135 00:06:48,240 --> 00:06:49,720 Speaker 2: nefarious reasons, I would think. 136 00:06:50,720 --> 00:06:53,280 Speaker 3: I think the kind of help desk chatbots are sort 137 00:06:53,320 --> 00:06:54,719 Speaker 3: of the most basic form of this. 138 00:06:54,800 --> 00:06:56,440 Speaker 1: They have been in existence for a while. 139 00:06:56,920 --> 00:07:01,039 Speaker 3: That technology is basically built on these AIs predicting the 140 00:07:01,080 --> 00:07:03,359 Speaker 3: next word, predicting what they ought to say in response to 141 00:07:03,400 --> 00:07:05,400 Speaker 3: what you said, that sort of thing. And so that's 142 00:07:05,440 --> 00:07:09,359 Speaker 3: kind of the basics of how these algorithms work.
But 143 00:07:09,760 --> 00:07:13,240 Speaker 3: what we're seeing, what is already sort of happening now 144 00:07:13,280 --> 00:07:15,280 Speaker 3: and what is kind of in the near term, is 145 00:07:15,320 --> 00:07:18,400 Speaker 3: that AIs will be able to clone human voices, clone 146 00:07:18,800 --> 00:07:21,040 Speaker 3: humans based on the video that we put out there, 147 00:07:21,640 --> 00:07:24,480 Speaker 3: to basically create a representation of human beings that will 148 00:07:24,520 --> 00:07:26,680 Speaker 3: sound and look just like you or me. And then 149 00:07:26,680 --> 00:07:29,960 Speaker 3: people could use that technology, use that sort of avatar, 150 00:07:30,000 --> 00:07:33,400 Speaker 3: if you will, without your knowledge or control, to, you know, 151 00:07:33,560 --> 00:07:37,000 Speaker 3: to put out information or to maybe defraud people, or 152 00:07:37,040 --> 00:07:39,720 Speaker 3: to, you know, hack into systems and do sort of 153 00:07:39,760 --> 00:07:42,080 Speaker 3: you know, nefarious things, empty bank accounts and that sort 154 00:07:42,080 --> 00:07:42,280 Speaker 3: of thing. 155 00:07:42,320 --> 00:07:43,680 Speaker 1: And so the technology is. 156 00:07:43,680 --> 00:07:48,800 Speaker 3: Rapidly advancing, and it's very hard to discern whether something 157 00:07:48,880 --> 00:07:50,760 Speaker 3: is a human being or an AI. This is kind 158 00:07:50,760 --> 00:07:54,000 Speaker 3: of what Alan Turing, one of the godfathers of AI, 159 00:07:54,320 --> 00:07:56,720 Speaker 3: talked about even in the nineteen fifties, the Turing test, 160 00:07:56,800 --> 00:07:58,520 Speaker 3: you know, which was the idea that we would at 161 00:07:58,560 --> 00:08:01,000 Speaker 3: some point, you know, reach a moment where human beings 162 00:08:01,040 --> 00:08:04,040 Speaker 3: cannot tell the difference between a human being or a robot. 163 00:08:04,680 --> 00:08:08,200 Speaker 2: And here we are. Listen, I'm a conservative. 164 00:08:08,240 --> 00:08:09,960 Speaker 2: A lot of people in my audience are conservatives, so 165 00:08:10,600 --> 00:08:13,280 Speaker 2: I think that there's a reflexive recoil from the idea 166 00:08:13,360 --> 00:08:17,520 Speaker 2: of government regulation here. But we're conservatives, we're not anarchists. 167 00:08:17,640 --> 00:08:20,280 Speaker 2: There is a role for government in certain aspects of 168 00:08:20,320 --> 00:08:24,720 Speaker 2: our lives. So at the Alliance for Secure AI, is 169 00:08:24,720 --> 00:08:26,520 Speaker 2: that what you guys are looking at? Is it 170 00:08:26,640 --> 00:08:29,840 Speaker 2: the question of government regulation, or is it more a 171 00:08:29,840 --> 00:08:33,200 Speaker 2: consortium of all the corporations that are benefiting from AI 172 00:08:33,280 --> 00:08:35,959 Speaker 2: sort of getting together and putting together some standards and safeguards? 173 00:08:36,000 --> 00:08:37,480 Speaker 2: What are we hoping to achieve? 174 00:08:38,559 --> 00:08:40,320 Speaker 3: Yeah, it's a great question. And I happen to be 175 00:08:40,360 --> 00:08:42,559 Speaker 3: a conservative as well, a lifelong conservative. I have a 176 00:08:42,559 --> 00:08:44,479 Speaker 3: lot of friends who are libertarians, a lot of libertarian 177 00:08:44,559 --> 00:08:47,640 Speaker 3: friends, and you know, most of us want government to 178 00:08:48,559 --> 00:08:51,200 Speaker 3: sort of, you know, stay out of the way. We want the private 179 00:08:51,280 --> 00:08:53,520 Speaker 3: market to work.
We want there to be free market 180 00:08:53,559 --> 00:08:56,640 Speaker 3: solutions first, and where there are potential market failures, then 181 00:08:56,679 --> 00:08:59,040 Speaker 3: government should step in. So I would say, first, we 182 00:08:59,080 --> 00:09:02,560 Speaker 3: want to see these companies act in the interest of humanity, 183 00:09:02,600 --> 00:09:04,880 Speaker 3: we want to see them act in the interest of all Americans. 184 00:09:04,920 --> 00:09:06,959 Speaker 1: I don't think that they have so far. 185 00:09:06,840 --> 00:09:09,280 Speaker 3: And I do think that the transparency and accountability from 186 00:09:09,280 --> 00:09:12,560 Speaker 3: the American people and from watchdog groups and NGOs can 187 00:09:12,600 --> 00:09:14,600 Speaker 3: kind of help encourage them to do the right thing. 188 00:09:14,960 --> 00:09:16,640 Speaker 3: But I also do think there's a role for government, 189 00:09:16,720 --> 00:09:19,360 Speaker 3: especially as it relates to national security. I mean, the 190 00:09:19,400 --> 00:09:22,559 Speaker 3: federal government should definitely lead on that, you know, export 191 00:09:22,600 --> 00:09:24,880 Speaker 3: controls to prevent our chips from going to the Chinese 192 00:09:24,880 --> 00:09:25,640 Speaker 3: Communist Party. 193 00:09:25,720 --> 00:09:28,080 Speaker 1: That's something that is an appropriate role for government. We 194 00:09:28,120 --> 00:09:28,760 Speaker 1: should do that. 195 00:09:29,200 --> 00:09:31,040 Speaker 3: And I also think there's a lot of space for, 196 00:09:31,200 --> 00:09:32,760 Speaker 3: you know, federalism and the Tenth Amendment. 197 00:09:32,800 --> 00:09:32,959 Speaker 1: Here. 198 00:09:33,000 --> 00:09:34,960 Speaker 3: I think states have shown that they are taking the 199 00:09:35,040 --> 00:09:38,079 Speaker 3: lead to protect kids online and put in some safeguards. 200 00:09:38,080 --> 00:09:40,040 Speaker 3: I think we need to be able to, you know, 201 00:09:40,200 --> 00:09:42,679 Speaker 3: encourage states to do that and have that kind of 202 00:09:42,760 --> 00:09:45,960 Speaker 3: light touch, light regulatory touch, that I think is necessary 203 00:09:46,000 --> 00:09:48,080 Speaker 3: to protect us so we can reap the benefits of 204 00:09:48,120 --> 00:09:50,400 Speaker 3: AI and mitigate those risks. 205 00:09:50,720 --> 00:09:52,680 Speaker 2: Can you flesh that out a little bit? I'm fascinated 206 00:09:52,679 --> 00:09:55,120 Speaker 2: by it because I've seen headlines about states sort of 207 00:09:55,480 --> 00:09:58,360 Speaker 2: drawing some lines and getting some sort of protocols going 208 00:09:58,400 --> 00:10:00,600 Speaker 2: for AI, and I know that the President has tried 209 00:10:00,640 --> 00:10:03,719 Speaker 2: to give guidance to Congress to form some legislation. What 210 00:10:04,040 --> 00:10:07,480 Speaker 2: does that federalism aspect look like? I mean, if AI 211 00:10:07,760 --> 00:10:09,960 Speaker 2: is on the internet and accessible through all of our 212 00:10:10,000 --> 00:10:13,520 Speaker 2: smartphones and devices and what have you, what does it 213 00:10:13,600 --> 00:10:15,880 Speaker 2: care what state you happen to be in, you know 214 00:10:15,920 --> 00:10:16,320 Speaker 2: what I mean? 215 00:10:17,240 --> 00:10:19,640 Speaker 3: Yeah, no, I think that again, there's an appropriate role for 216 00:10:19,840 --> 00:10:22,160 Speaker 3: the federal government here as it relates to national security.
217 00:10:22,160 --> 00:10:25,520 Speaker 3: But at the state level, states have already acted to 218 00:10:25,600 --> 00:10:27,760 Speaker 3: do things like protect kids online and to say you 219 00:10:27,800 --> 00:10:30,760 Speaker 3: can't use AI for certain criminal purposes like creating child 220 00:10:30,800 --> 00:10:34,120 Speaker 3: pornography, for example, or creating, well, those types 221 00:10:34,120 --> 00:10:36,800 Speaker 3: of things where I think there's an appropriate state level response. 222 00:10:36,880 --> 00:10:39,440 Speaker 3: And by the way, Congress hasn't acted yet, and we 223 00:10:39,480 --> 00:10:42,520 Speaker 3: do want to see Congress move and move quickly on this, 224 00:10:42,679 --> 00:10:45,280 Speaker 3: but until they do, we've talked to many state lawmakers 225 00:10:45,320 --> 00:10:48,200 Speaker 3: around the country and governors, including in red states like 226 00:10:48,200 --> 00:10:51,520 Speaker 3: Governor DeSantis in Florida, Governor Sanders in Arkansas and others, 227 00:10:52,200 --> 00:10:53,920 Speaker 3: who have said, we want to continue to move and 228 00:10:53,920 --> 00:10:56,240 Speaker 3: pursue policies that protect our people from the harms of 229 00:10:56,240 --> 00:11:00,160 Speaker 3: AI while Congress is slow to move. You know, 230 00:11:00,200 --> 00:11:02,079 Speaker 3: Congress is designed to be a little slow to move. 231 00:11:02,120 --> 00:11:04,160 Speaker 3: But that's why our founding fathers were so brilliant, and 232 00:11:04,200 --> 00:11:07,280 Speaker 3: they designed a federalist system where states could act. 233 00:11:07,040 --> 00:11:07,720 Speaker 1: And act quickly. 234 00:11:08,440 --> 00:11:11,120 Speaker 2: That's very fascinating actually, and eventually a lot of these 235 00:11:11,320 --> 00:11:13,280 Speaker 2: great ideas that come from the states end up getting 236 00:11:13,280 --> 00:11:17,559 Speaker 2: adopted at the federal level. So listen, this is the 237 00:11:17,640 --> 00:11:20,360 Speaker 2: kind of thing that I think really invites the 238 00:11:20,400 --> 00:11:24,560 Speaker 2: American people's participation. And sadly, I think that, you know, 239 00:11:24,720 --> 00:11:29,240 Speaker 2: like nuclear regulatory agencies or whatever, most Americans think that 240 00:11:29,360 --> 00:11:32,800 Speaker 2: they can't really have a voice in it, or aren't 241 00:11:32,800 --> 00:11:34,880 Speaker 2: prepared to have a voice in it because it's way 242 00:11:34,920 --> 00:11:37,240 Speaker 2: too smart and way too out of their depth. That's 243 00:11:37,240 --> 00:11:40,400 Speaker 2: not true when you boil down the technology and remove 244 00:11:40,400 --> 00:11:42,800 Speaker 2: the technology aspect of it, and you just talk about 245 00:11:42,840 --> 00:11:48,160 Speaker 2: fundamental ethics and morals. We Americans know exactly how this 246 00:11:48,200 --> 00:11:51,120 Speaker 2: stuff should be handled and where the line should be drawn. 247 00:11:51,200 --> 00:11:53,920 Speaker 2: So what would you say to anybody watching here who 248 00:11:53,960 --> 00:11:57,000 Speaker 2: feels concerned about it and they want to get involved, 249 00:11:57,080 --> 00:11:58,600 Speaker 2: how can we get involved? 250 00:12:00,040 --> 00:12:02,760 Speaker 3: I really encourage people to get involved at the local level. 251 00:12:02,520 --> 00:12:03,360 Speaker 1: And the state level.
252 00:12:03,400 --> 00:12:05,640 Speaker 3: First, there's a lot of people that are taking action 253 00:12:05,880 --> 00:12:09,640 Speaker 3: to call their elected officials, you know, ask them questions 254 00:12:09,640 --> 00:12:11,599 Speaker 3: about where they stand on these topics, ask them to 255 00:12:11,840 --> 00:12:14,160 Speaker 3: support safeguards. I think something like eighty percent of the 256 00:12:14,200 --> 00:12:18,200 Speaker 3: American people do support safeguards on AI, and frankly, most 257 00:12:18,200 --> 00:12:21,720 Speaker 3: state lawmakers around the country also reflect that and support safeguards. 258 00:12:21,720 --> 00:12:23,800 Speaker 3: But I think we need people to engage in that effort. 259 00:12:24,200 --> 00:12:26,680 Speaker 3: And then also, you know, as Congress is moving on 260 00:12:26,679 --> 00:12:28,920 Speaker 3: this kind of federal framework, we want the American people 261 00:12:28,960 --> 00:12:30,719 Speaker 3: to participate in that. We want them to call their 262 00:12:30,760 --> 00:12:33,480 Speaker 3: senators and their house members to say, here are the 263 00:12:33,480 --> 00:12:35,920 Speaker 3: things that are important to us. Here are the protections 264 00:12:35,960 --> 00:12:39,000 Speaker 3: we want to see. We don't want to see AI used, 265 00:12:39,240 --> 00:12:41,000 Speaker 3: you know, to replace human beings. We don't want to 266 00:12:41,000 --> 00:12:44,079 Speaker 3: see it used to create, again, child pornography or something 267 00:12:44,120 --> 00:12:47,360 Speaker 3: like that. We want protections that are meaningful, and so 268 00:12:47,400 --> 00:12:49,640 Speaker 3: I think encouraging folks to engage with their lawmakers is 269 00:12:49,679 --> 00:12:51,240 Speaker 3: the first step. 270 00:12:51,280 --> 00:12:54,040 Speaker 2: Well, and one way to stay informed and get engaged is 271 00:12:54,040 --> 00:12:58,600 Speaker 2: to go to secureai now dot org. Right, that's 272 00:12:58,600 --> 00:13:00,320 Speaker 2: the website. 273 00:13:00,000 --> 00:13:02,480 Speaker 3: It is, yeah, that's our website, secureai now dot org. 274 00:13:02,559 --> 00:13:04,320 Speaker 3: We have a lot of information if you want to 275 00:13:04,360 --> 00:13:06,599 Speaker 3: just kind of read up or watch some of the 276 00:13:07,160 --> 00:13:08,960 Speaker 3: interviews we've done to kind of get a sense of 277 00:13:09,000 --> 00:13:11,000 Speaker 3: what we're talking about, how we're thinking about it. 278 00:13:11,040 --> 00:13:12,839 Speaker 1: I think that's a good opportunity. 279 00:13:12,880 --> 00:13:16,160 Speaker 3: We also are tracking job losses related to AI, and 280 00:13:16,200 --> 00:13:18,600 Speaker 3: then thirdly, we have emails that go out to keep 281 00:13:18,640 --> 00:13:20,720 Speaker 3: you informed and give you ways to participate in this 282 00:13:20,760 --> 00:13:21,600 Speaker 3: important discussion.
283 00:13:22,640 --> 00:13:24,960 Speaker 2: Yeah, I mean listen, AI is happening, you know, and 284 00:13:24,960 --> 00:13:26,880 Speaker 2: you're not trying to stop it from happening, but you 285 00:13:26,960 --> 00:13:29,760 Speaker 2: just want to know that as AI is incorporated at 286 00:13:29,800 --> 00:13:34,880 Speaker 2: all levels of our lives and society, that it's done responsibly, safely, 287 00:13:34,920 --> 00:13:37,960 Speaker 2: as you mentioned, for our children and with our interests 288 00:13:38,040 --> 00:13:40,400 Speaker 2: in mind, not just those of a handful of people who 289 00:13:40,400 --> 00:13:42,760 Speaker 2: are going to get rich and squash the rest of us. 290 00:13:42,880 --> 00:13:43,080 Speaker 1: Right. 291 00:13:43,640 --> 00:13:46,840 Speaker 3: Absolutely, I want to see this go well for human beings. 292 00:13:46,840 --> 00:13:48,880 Speaker 3: I want to see it go well for America. I 293 00:13:48,880 --> 00:13:51,079 Speaker 3: want America to lead on AI, and I think that 294 00:13:51,280 --> 00:13:54,120 Speaker 3: innovation and safety kind of go hand in hand. And 295 00:13:54,160 --> 00:13:56,080 Speaker 3: for us to lead in the world, we have to 296 00:13:56,080 --> 00:13:58,520 Speaker 3: do it the right way. We have to deny China 297 00:13:58,640 --> 00:14:01,360 Speaker 3: the semiconductor chips that they would use to outpace us. 298 00:14:01,600 --> 00:14:02,200 Speaker 1: I think that's. 299 00:14:02,080 --> 00:14:05,320 Speaker 3: Important for international security and geopolitics, and I think that we 300 00:14:05,400 --> 00:14:07,920 Speaker 3: have to, you know, we have to work to reap 301 00:14:08,000 --> 00:14:11,000 Speaker 3: these benefits. And I want to see, you know, 302 00:14:11,640 --> 00:14:14,840 Speaker 3: novel medicines developed that, you know, save 303 00:14:14,920 --> 00:14:17,280 Speaker 3: lives and make us healthier. I want to see more efficiency, 304 00:14:17,320 --> 00:14:19,880 Speaker 3: more productivity, making us wealthier, and I want to see 305 00:14:19,880 --> 00:14:22,840 Speaker 3: that wealth spread throughout society. I don't want it to 306 00:14:22,920 --> 00:14:25,640 Speaker 3: just go to a handful of big tech oligarchs in 307 00:14:25,720 --> 00:14:28,520 Speaker 3: some new feudal system. I want to see the American 308 00:14:28,520 --> 00:14:30,400 Speaker 3: people benefit from this, and frankly, I want to see 309 00:14:30,440 --> 00:14:33,720 Speaker 3: the world benefit from this pretty incredible technology. 310 00:14:34,480 --> 00:14:36,440 Speaker 2: All right, Brendan Steinhauser, we'll leave it there. And 311 00:14:36,440 --> 00:14:39,120 Speaker 2: again, I'm still educating myself on this. I'm still 312 00:14:39,200 --> 00:14:41,960 Speaker 2: learning. Part of my learning process, so I can figure 313 00:14:41,960 --> 00:14:43,680 Speaker 2: out where I stand on a lot of these issues, 314 00:14:44,120 --> 00:14:48,240 Speaker 2: is to get informed at places like secureai now dot org. 315 00:14:48,800 --> 00:14:51,160 Speaker 2: Get the information. That's the very least you can do 316 00:14:51,240 --> 00:14:54,240 Speaker 2: for yourself, everybody, so that you can be informed and 317 00:14:54,280 --> 00:14:56,800 Speaker 2: you can actually have an informed opinion on something that 318 00:14:56,920 --> 00:14:59,280 Speaker 2: is affecting all of us. Brendan, thank you so much 319 00:14:59,320 --> 00:15:00,560 Speaker 2: for helping us out today.
320 00:15:01,280 --> 00:15:01,960 Speaker 1: Thank you, Larry. 321 00:15:05,600 --> 00:15:05,800 Speaker 3: Yeah.