Speaker 1: Welcome back to The Truth with Lisa Boothe. We'll get to the heart of the issues that matter to you today. We're talking about the defining race of our time: America versus China for control of artificial intelligence. We'll talk about why it's so important with Breitbart News social media director Wynton Hall. He is the author of a new book, an explosive book, called Code Red: The Left, the Right, China and the Race to Control AI. He is also a Distinguished Fellow at the Government Accountability Institute and has authored or collaborated on twenty-seven books, including multiple New York Times bestsellers. We'll dive into all of it: how AI systems are already being programmed with left-wing indoctrination, China's alarming advances in technology, the millions of jobs at risk, and how this AI revolution will reshape not only the country but the world. So stay tuned for Wynton Hall.

Speaker 1: It's great to have you on the show. First time having you on, and I look forward to talking about your new book. Appreciate you making the time.

Speaker 3: Glad to be with you, Lisa. Thank you.

Speaker 1: So, the new book: Code Red: The Left, the Right, China and the Race to Control AI. I mean, AI is pretty complex. Did that make it difficult to write the book and just sort of unpack it all?

Speaker 3: It really did. I spent over two years deep diving into this world, and it is very much a black box technology. What's interesting is that even among AI researchers, they will concede they do not fully understand a lot of how these models work in neural networks. But I think what I wanted to do was to make the point that AI is not just a tool; it's really political power.
And I think one of the things that people are going to increasingly realize is that every single policy issue that touches their daily life, whether it's education, certainly, jobs, obviously, relationships, faith, and certainly national security, they are all going to increasingly be dramatically affected by AI. The technology itself is very complex, but I think the power dynamic is the thing that interested me most. So I always like to say this is a politics book about AI, not an AI book about politics, in that regard. And I think it's the most important policy shift we're going to see.

Speaker 1: You know, how far are we into this AI revolution? I feel like it's too late to put the genie back in the bottle.

Speaker 3: You are absolutely right, Lisa, it's moving so fast now. The actual term "artificial intelligence" came into the public consciousness in nineteen fifty-six. So if you're thinking from a purely historical perspective about the actual lexicon of AI, yes, that has been with us for some time. The real explosion occurred in twenty seventeen with the emergence of transformer technology, and of course then the arrival in November of twenty twenty-two of ChatGPT, which really accelerated things. I think what's important is people need to understand you don't really get to opt out of this AI revolution. And the reason is that ninety-nine percent of Americans are already using AI, even though sixty-four percent of us don't realize when we're using AI. And you go, how is that even possible? Well, what it means, of course, is that it's baked into the algorithms of every single thing that modern individuals use, whether it's your weather app, whether it's your streaming services, or obviously GPS. So narrow AI is a part of our daily lives.
So if we're going to use it, I think we have to learn to not only use it well, but also to think about the upside as well as the massive landmines for us as people, as parents, and again in the political realm. It is that pervasive.

Speaker 1: If you had to lay out those landmines, what would they be?

Speaker 3: Yeah, well, first of all, I think there's a lot of hype and fear, and that can be weaponized in building out political narratives. So let me give you the big one, which of course is all the rage and you see it every day in the headlines, which is everybody's fears around jobs: is this really the first time that a technology will destroy more jobs than it creates? When you listen to people like Mustafa Suleyman, Microsoft AI CEO, who just recently said that we're looking, within eighteen months, at the ability to replace all of the tasks of white-collar employees; when you look at somebody like Dario Amodei, the CEO of Anthropic, who says that within the next twelve months to five years, fifty percent of all entry-level white-collar jobs could be replaced, people have to make a choice. You can either, A, believe that's complete hype and they're just trying to raise investor dollars by claiming their technology is going to be that labor-replacing. Or, B, you can say, oh, there's nothing I can do and it's all over. Or, C, you can say, well, it could be a little of both. But either way, that allows proponents of things like universal basic income or wealth redistribution to scare people into believing that it's inevitable we're going to have a job apocalypse in the next twelve to eighteen months, and therefore build out these political coalitions. So I think the first landmine is understanding how to separate hype from reality, and then realizing that either way it doesn't matter, because it can be politically weaponized.
I think the second landmine would certainly be for parents. If you're a parent or a grandparent, or you just care about the future and you care about children, you're very much concerned about what we're seeing in two directions. One is AI chatbots, colloquially known as AI girlfriends, and these are chatbots that young people are able to interact with. Depending on which service you're using, they may not be guardrailed safely; they will take the conversation into things wildly inappropriate for minors, and they will also oftentimes take them into very dark conversations: self-harm, suicidal ideation, sexualized content and role play. And the second area for parents to really understand the landmine is going to be around plagiarism and academia. They don't call it ChatGPT, they call it CheatGPT. And any professor or teacher out there, bless their hearts, is trying to battle through this new world of AI and police plagiarism or cheating. It's not just written plagiarism, by the way. That is a huge landmine. And then I think on top of that, which complicates it even more, is the erosion of critical thinking skills, because any educator will tell you that if you don't let students struggle and actually have that mental friction of how do I do this calculus problem, how do I work through this physics problem, how do I do this writing assignment, then they don't build that muscle memory and that ability to build cognitively. So cognitive offloading onto AI is a huge area. Those are just some of the early landmines.

Speaker 1: We've seen a lot of really interesting cases involving ChatGPT. For instance, I saw one the other day where there's a lawsuit because ChatGPT allegedly convinced a woman to fire her real attorney and then created these bogus,
illegitimate motions against her employer, and it was all phony. Just really crazy stuff like that. Or potentially people telling ChatGPT that they wanted to kill themselves, and things like that not being flagged. But when we look at some of these, like ChatGPT or Gemini or Grok or what have you, we've seen evidence of bias in some of these cases. So how much of the information that we're getting back is factual and/or objective?

Speaker 3: Oh, it's such an important question, Lisa. Today I actually have a piece out on Fox.com about this. You take Google Gemini, for example, and their pro-level account, Deep Research, and you just ask it which of the one hundred senators in the US Senate have violated your hate speech policies, and it lists only Republican senators and zero Democrats. And you look at that and you realize, okay — and these are senators who, by the way, appropriate hundreds of millions of dollars to these companies in the form of cloud computing contracts and the rest of it. The bias thing is very real. It's the opening chapter of Code Red, looking at the hidden hand of AI and woke persuasion. And what is fascinating: this book has eighty pages of endnotes. I'm a former college instructor, so I believe in really doing the grunt work on the research; it's got over eight hundred and fifty endnotes. What was fascinating is that the scholarly, peer-reviewed research, even that which skews left, does concede: LLMs, large language models, otherwise known as chatbots, are absolutely skewed toward a left-wing view. This is not even a question. In fact, even AI's creators will concede that there is a left-leaning political bias to the responses. So you're absolutely right, Lisa. Number one, you have to understand that you are getting a biased response.
The question then becomes why is that, how is that, and so forth. And the answer, of course, is the way an LLM is trained. What you realize is they're taking information from Wikipedia, which skews very left. They're taking information from Reddit, which skews very left. They're taking information through what's known as the Common Crawl, which is this very large public data set from the Internet. They're taking information from academia, which we all know skews left. Garbage in, garbage out, as the old saying goes, and so you're going to get that. So that's the first thing you've got to be thinking of. The other thing is what you pointed out even before that, which is what are known as hallucinations, which just means things that sound confident and accurate but actually turn out to be complete gibberish, or made up, or just total misinformation. And so I think the consumer has two layers of filtration. One is: is what I'm being told even true? That's the hallucination piece. And the second is: through what political lens and slant is this information being filtered to me? And this is really important, particularly for young people who do not already have a fully formed political ideology or worldview. The other thing is they may just not have a lot of knowledge of history to be able to fact-check and say, wait a minute, this is talking about Gerald Ford and it was actually Richard Nixon, or whatever factual point is being brought up. So it creates this completely new world where you've upended so many of the pillars that have been part of our understanding of fact and fiction. And that's not even to bring in deepfake images and videos, where we all know in our social feeds we look at things all the time and increasingly people are saying, wait a minute, is that AI or is that actually real? Because that blurring of fantasy and reality has become almost hard to detect.
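To make the training-mix point above concrete, here is a minimal, purely illustrative Python sketch of how a pretraining corpus might be assembled by weighted sampling. The source names and weights are hypothetical, not the recipe of any real model; the point is only that whatever skew a source carries flows into the training set in proportion to its weight.

```python
import random

# Hypothetical pretraining mixture: source names and sampling weights are
# illustrative only, not the actual recipe of any real model.
CORPUS_WEIGHTS = {
    "common_crawl": 0.60,   # broad public web scrape
    "wikipedia":    0.05,
    "reddit":       0.10,
    "academia":     0.15,   # papers, textbooks
    "books_other":  0.10,
}

def sample_training_docs(corpora: dict[str, list[str]], n: int, seed: int = 0) -> list[str]:
    """Draw n documents according to the mixture weights.

    Whatever viewpoint skew each source carries is inherited, in proportion,
    by the sampled training set -- the "garbage in, garbage out" point.
    """
    rng = random.Random(seed)
    names = list(CORPUS_WEIGHTS)
    weights = [CORPUS_WEIGHTS[name] for name in names]
    picks = rng.choices(names, weights=weights, k=n)
    return [rng.choice(corpora[name]) for name in picks]

# Toy usage with tiny stand-in corpora.
corpora = {name: [f"{name}-doc-{i}" for i in range(100)] for name in CORPUS_WEIGHTS}
print(sample_training_docs(corpora, n=5))
```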
Speaker 1: Got to take a quick commercial break. More with Wynton Hall on the other side.

Speaker 1: You know, we saw during COVID where the Biden administration was putting a lot of pressure on Meta and some of these big tech companies to censor certain information. How much of that is happening with AI, and what concerns do we have surrounding the government influencing what information we're getting back?

Speaker 3: Oh, such a great question. So in one of my other many roles, I am social media director at Breitbart, and so I watch the analytics very closely. You mentioned Meta, and you are absolutely right. During the Biden era you had such incredible blacklisting, demonetization, suppression, algorithmic diminution; you had all manner of chicanery. And you're absolutely right. I mean, whatever people's view of Mark Zuckerberg, the truth is that the Biden administration really did apply a lot of the pressure there, and they were successful and it worked. I will say that there has been absolutely a lift. We know, for example, that in social media political news feeds during the Biden era, Meta reduced political content. This is their statement; they made it very clear. And it was by as much as sixty percent, to completely mute it. Now a lot of that has come back, so people have a megaphone that actually has some volume to it. There is far, far less of this game that was played during the Biden era, where so-called, quote unquote, "fact checking," wink nod, was going on, which was just another way of suppressing views that the Biden administration and left-of-center people did not like, and punishing conservative or right-of-center publishers in order to amplify the others. However, one thing that is of great concern, and this is one of the hidden things that a lot of people don't realize —
and I actually listed it out, and I have all the dollars and information in Code Red — is that there is a hidden subsidy that happens with AI training data. So here's how it works. If I am a left-of-center publication, Sam Altman at OpenAI comes to me, for ChatGPT, and says: we'll give you twenty million dollars for the last thirty years of all your archival material, so that we can use it as training data for the next LLM chatbot we're going to create. So I get a check for twenty million, thirty million, whatever. Now, that does two things. One, it's a huge cash infusion to a left-leaning publisher. Two, it then bakes all of my left-leaning bias inside the AI. So you get this self-reinforcing loop of bias where you're also monetizing it. Guess who doesn't get those kinds of contracts: right-of-center, smaller conservative publications. And so you have this real problem that's emerging. So on the one hand, the answer to your question is yes, we have seen a return of free speech, certainly on things like X and Grok and all of that. But on the other hand, there's this new form of a sleight-of-hand move that is occurring, where these Silicon Valley behemoths who are building the next generation of AI are able to basically funnel money to left-leaning organizations and then get a two-fer in the form of extra bias.

Speaker 1: Well, it's concerning. You write about AI as the defining geopolitical race of our time. Why do you believe that the competition between the United States and China over AI is as consequential as the nuclear arms race during the Cold War?

Speaker 3: That's a great question, Lisa.
So a lot of times when we hear this — oh, we've got to beat China in AI — people say, well, that's probably these AI frontier labs trying to make their product seem more important so they can raise capital from investors, or get big contracts from the Defense Department and so forth. And I'm sure that's absolutely true; two things can be true at the same time, right? There's no doubt that these are businesses. On the other hand, when you really dive deep, particularly into the national security and military and intel worlds, what they will tell you is this: there's something known as recursive self-improvement, RSI. And what is that? It's real simple. It's the idea that we're going to potentially hit a point when AI can correct itself, improve itself, rewrite its own code, and do all of that autonomously. Now, if and when RSI ever does occur — it's not a fully functional concept yet — what military experts and intel and defense people will tell you is that you would have such technological and military dominance. You would have full-spectrum battlefield dominance in things like being able to crush encryption, cyber hacking, security, hacking of weapons systems, hacking of infrastructure, electrical grids, water mains. You would have full-spectrum dominance over whoever your enemy is. And so whoever reaches recursive self-improvement first is going to essentially have full-scale operating control over their opponent militarily. Now, are we at RSI yet? No, we're not. Anthropic has put out recent studies about getting closer to that. But even beyond that, even if somebody says, well, that's a theoretical concept that may or may not happen — right now, with China, what you realize is the economic benefit that has already accrued and that we are fighting over in the AI space.
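Before turning to the economics: to make the RSI loop just described concrete, here is a deliberately toy Python sketch of a self-improvement loop, in which a "model" proposes changes to itself and keeps any change that scores better, with no human in the loop. Everything here is a stand-in of my own devising; real RSI would mean a system rewriting its own code, which no current system does.

```python
import random

TARGET = 0.9  # hidden objective the toy "model" is trying to reach

def evaluate(params: float) -> float:
    """Benchmark score; higher is better, peaking when params == TARGET."""
    return -abs(params - TARGET)

def rsi_loop(params: float, generations: int = 1000, seed: int = 0) -> float:
    """Each generation the model 'rewrites' itself and keeps only improvements."""
    rng = random.Random(seed)
    score = evaluate(params)
    for _ in range(generations):
        candidate = params + rng.gauss(0, 0.05)   # self-proposed revision
        candidate_score = evaluate(candidate)
        if candidate_score > score:               # accept only if better
            params, score = candidate, candidate_score
    return params

print(rsi_loop(0.0))  # converges toward TARGET with no outside intervention
```

The worry the guest describes is exactly this compounding: each accepted revision raises the baseline the next revision starts from, autonomously.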
One third of the S&P 500 is made up of the Magnificent Seven, which are the seven largest American tech companies. So it is a tent pole and has been a huge growth opportunity for our economy. On the other hand, we know that when China starts leading in this, they can crater us. Nvidia was the example of that. When China's DeepSeek AI model R1 was released, it cratered America's Nvidia by six hundred billion dollars in a single day, the largest single-company market cap loss in American history. So you have the economic warfare side of the race with China, and you have the military side of the race with China. And then I think the other thing you have to realize is that right now the advent of AI warfare has accelerated with our actions in the war in Iran and even with the Maduro raid. And so what are we doing there? A lot of people say, well, when I think of AI warfare or autonomous weapons, I think of, like, a Terminator — laser beams and so forth. Well, humanoid robots are coming, but that is not actually the real use. The real use is actually not nearly as cinematic, but it is very consequential, and that is being able to take vast oceans of intel data — things like intercepted communications, satellite imagery, facial recognition of known terrorist leaders — and having an AI that can sift and sort what would have taken a team of three hundred intel officers months, and do it within days or less. That gets us better target selection; that gets us, thankfully, a reduction in civilian casualties, because we can be more precise with those attacks and take out enemies and terrorists, not innocent people, which is of huge concern. And it allows us to hopefully not have forever wars, because the speed with which AI moves gives you such a dominant battlefield advantage that you're able to crush an enemy a lot faster than in a protracted war.
So it's hard — it is a fast-moving and hard reality that AI warfare is here and it's accelerating, and that's one more reason we want to stay ahead of China.

Speaker 1: Quick break, more on AI. If you like what you're hearing, please share on social media or send it to your friends and family.

Speaker 1: You know, when you talk about China and AI, China uses AI in part to spy on its citizens. What concerns do we have about that happening in the United States with AI? Or even, when you're asking questions of Gemini or Perplexity or Grok or ChatGPT or whatever it is, it's collecting a lot of information about you. What kind of concerns do we have that AI is going to be used by governments the way China uses it, to spy on us?

Speaker 3: It's a real concern, Lisa. I think we need to understand that surveillance capitalism is real. The thing I came away with after studying this for over two years while writing Code Red was: these companies, whether they're American or international tech companies, have a voracious appetite for data, and there is almost nothing they do not want to vacuum up. Now, in America we have rules around these things, and Xi Jinping and the CCP have a very different mindset. I think there are three things. One, we do not want to live in a CCP-style techno-authoritarian surveillance state. The CCP uses the ability to scare its citizens as leverage; they claim that they can run facial recognition on all of their citizenry, which is 1.4 billion people, in a matter of seconds. We know that with the Uyghur genocide, facial recognition is used to target, find, and imprison the people the regime wants to. We also know there is, obviously, the censorship of communication.
I mean, one of the most fascinating things is with DeepSeek, which is the Chinese model — and I do not recommend you use DeepSeek, and I'll tell you why: because there are all manner of security issues for your privacy that you want to be careful of. But with DeepSeek, the Chinese AI model, you just ask it basic questions, like, tell me about the Tiananmen Square massacre — things the communist regime does not want out there — and it will shut you down immediately. And there's a whole host of those; in fact, I have a whole list of them inside the book. Now, on the American side, we obviously have real data privacy concerns. In consumer privacy data, for example — I'm sure we've all had the experience where we're having a weekend with family or friends, and you're watching a game and you say something about a product you like, and then the next week you start getting served banner ads or other content targeted on that topic. That is real. So the devices are —

Speaker 1: Listening to us. Oh, it happens all the time.

Speaker 3: Yeah, it happens all the time. And so that kind of violation is important, but it does not have the weight, I think, obviously morally or even existentially, that you are talking about with a totalitarian regime like the CCP. And I would say, look, we do not want to live in a world built on Chinese AI rails — not economically, not militarily, as we just talked about. But I would also say that Americans should be very vigilant about what kind of data we're giving these chatbots. One of the problems, too, is that the average person just goes to an app store and downloads something that looks good.
They don't even know which country of origin it was derived from, or who the manufacturer is and so forth, especially when younger people are just downloading things. And let me give you an example that was really striking. With the dominant Chinese AI player, DeepSeek, if you study it, two things. One, in their terms of service they say that they even vacuum up your keystroke rhythms on your device. Now you might go: who cares how I text my friends, the rhythm with which I strike the keys? Experts will tell you that is more particular to you than even your fingerprint — how fast and in which ways you move, where you pause, the finger pressure, all of your strike patterns. So you're able to build out a data profile on someone instantly. This is the same reasoning behind the whole TikTok issue, which was essentially a CCP surveillance app, and that's why there was finally bipartisan agreement that it had to be transferred, because we were giving away national security. By the way, many state governments and federal agencies ban government employees from using the Chinese AI DeepSeek, because it's a security risk. If I'm at the Department of Defense, or even at the Department of Health and Human Services, and I'm putting information into an LLM, I could unwittingly be giving access to private information. So these are also the reasons I called it Code Red — obviously an alert, an alarm, a siren, but also to give people on the political side of the ball who are right of center a code, a set of principles for the red team to think through, for traditional American values. But I think all Americans, regardless of their politics, really need to understand that this privacy stuff is real.
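As a minimal sketch of why keystroke rhythms are so identifying, the toy Python below reduces inter-key timings to a crude profile. The feature choice and numbers are illustrative assumptions of mine, not how DeepSeek or any real product processes such data.

```python
# Toy keystroke-dynamics profile: the timings and features are invented
# for illustration; real biometric systems use far richer features
# (dwell time, pressure, digraph latencies) than this sketch.

def capture_intervals(timestamps: list[float]) -> list[float]:
    """Milliseconds between successive keystrokes."""
    return [(b - a) * 1000 for a, b in zip(timestamps, timestamps[1:])]

def rhythm_profile(intervals: list[float]) -> tuple[float, float]:
    """(mean, variance) of inter-key intervals -- a crude typing signature."""
    mean = sum(intervals) / len(intervals)
    var = sum((x - mean) ** 2 for x in intervals) / len(intervals)
    return mean, var

# Two typists entering the same text yield different, stable profiles:
alice = capture_intervals([0.00, 0.12, 0.25, 0.36, 0.50])  # fast, steady
bob   = capture_intervals([0.00, 0.30, 0.45, 0.90, 1.10])  # slow, irregular
print(rhythm_profile(alice))
print(rhythm_profile(bob))
```

Even this two-number profile separates the two typists, which is the guest's point about how quickly such data distinguishes individuals.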
Speaker 1: And then how far away are we — obviously we see a lot of these AI videos out there — how far away are we from fake AI videos being indistinguishable from real? And then how do you determine it, and how will the public determine it, particularly when you get into elections or politics, or maybe even other countries?

Speaker 3: Such an important question, Lisa. So in the computer world, we call it the Turing test, right? At what point can you fool a human and make them think they're looking at something real and not machine-made? The truth is that we're already there with some of the advanced models, like Seedance, which is a Chinese video-generating model, and Veo 3, the Google Gemini equivalent that powers their Flow tool. Several of these more advanced generative video AIs are already beating the scores for human perceptibility, and that's only going to get better. These models are only going to get better and better. I always tell people, remember this: the worst AI you will ever use is the one you used today. Right? In six months — we're all on, what, iPhone version seventeen? Does anybody remember what iPhone 1 was like? I mean, it's like a dinosaur. So this is only going to accelerate. And you see, this is why Hollywood is freaking out, right? Celebrities and creatives in the entertainment world are freaking out because they realize, wow, it is exactly what you said: it's getting better and better. At first, people kind of thought it was a joke. Oh, this is so silly; look at that, it doesn't know how many fingers are on a human hand, and so on. And now the Seedance video that went viral just a few weeks ago — completely AI-generated, of Brad Pitt in a fight scene with Tom Cruise — looked like any live-action Hollywood movie.
And so I think we're already there for many people. People who have a little more of a scrutinizing eye, or who come from a creative background, maybe they have a little better ability to perceive deepfakes. But think about this: once we get to a point where no one can know whether visual evidence, video evidence, is real or fake, that has implications even for the legal system, right? I mean, entering video surveillance footage into a police report, or some kind of video surveillance showing a suspect doing something — once these things get really, really good, the question becomes: could you raise a reasonable doubt about the authenticity, the provenance, of this thing? So people say, well, what's the solution? Well, yes, there is watermarking in AI. There is something called SynthID, which is a Google effort to watermark their generated media. You can go to their repository and drag and drop an image or a video or whatever piece of content, and it will scan it and tell you if their video generator was used to make it. On Meta platforms, they have a label at the bottom that turns on automatically when their AI senses that something was created using AI. But there are so many easy workarounds, right? In the case of some of these tools, yes, it recognizes if you used their product, but not some competitor's product, or an open-source variant, or others. Non-state actors, otherwise known as terrorists, are already using deepfakes for recruitment. So if I really want to fire people up and work those emotions and stir rage and hate, I create a very realistic and completely bogus video of some atrocity happening, and then I say: look at this, look at those evil Americans, or look at those X, Y, Z — how can you stand by?
You've got to join the jihad. So these things are really moving fast, and they have enormous political and social and just plain safety impact, and we've got to be coached up.

Speaker 1: And then before we go, touch a little further on just the economic impact and the job losses that we're probably going to see in this country as a result.

Speaker 3: Yeah, this is the real big debate. People who are free market capitalists — I would count myself among those — have often said: look, I've been hearing this my whole life, the robots are coming for our jobs, and it never happens. Yes, technology destroys jobs, but it also creates jobs, and the free market is going to work it out. It always does; it has never failed. And that's true historically, right? History has shown that new technologies are very disruptive — we saw the Industrial Revolution — but in the end, over time, they create more jobs than they destroy. Why are the AI architects telling us that we're looking at these job-quake or job-apocalypse kinds of realities? Their rejoinder is: yes, that's true, Wynton and Lisa, but what is different this time is that we're not talking about moving atoms — physical labor, blue-collar work, which is what the Industrial Revolution was about automating. This is about cognitive work, white-collar work. What is AI? It is artificial intelligence — synthetic, machine-created cognition, the things that white-collar cognitive workers do. And once you scale the ladder of that intelligence, you're going to be able to replace lawyers and accountants and people who get paid for their expertise. I think they are absolutely right that there is going to be massive disruption. I don't think that is completely just a statement that they are worried about.
From an economic and societal perspective, I do think there are a lot of politically motivated people who see this as a great-reset opportunity economically, to move away from capitalist structures toward what they call a post-labor or post-capitalist economy — otherwise known as socialism or wealth redistribution. They advocate for a universal basic income, UBI, as a supplement, so that when you have all this AI job displacement: have no fear, your government check is here, your check is in the mail. And you even see some progressives who have said, hey, look, the dirty truth is that COVID was just a dry run for what's coming with UBI. We got used to getting mailbox money, and now that people are sort of used to that, this is going to be a way for us to remake the economic system. So it is going to be very disruptive, and that's why I call it a Code Red moment. I guess there are other ways to term it, but I think the most important thing is to know these arguments and where people are going to come from, and to really try to future-proof yourself against that. Particularly for kids, I think that's going to be a big thing. The thing I would just leave you with is this: for parents, I believe the future is not about teaching our children how to find jobs. I believe the future is about teaching our kids how to create jobs, otherwise known as entrepreneurship. And if they have a skill set like that and a passion for those tools, they will be able to create a job to deal with the headwinds of disruption.

Speaker 1: Interesting, very interesting stuff. Wynton Hall, author of the new book Code Red: The Left, the Right, China and the Race to Control AI, out today. Wynton, thanks so much for coming on the show. Appreciate you.

Speaker 3: Great to be with you, Lisa. Thank you for having me.

Speaker 1: That was Wynton Hall.
Appreciate him for taking the time to come on the show. Appreciate you guys at home for listening every Tuesday and Thursday, but you can listen throughout the week. Also, thanks to John Cassio for putting the show together. Until next time.