1 00:00:03,160 --> 00:00:07,240 Speaker 1: This is Red Pilled America. A quick question before we 2 00:00:07,280 --> 00:00:10,000 Speaker 1: start the show. How many shows are there out there 3 00:00:10,080 --> 00:00:12,959 Speaker 1: like Red Pilled America? You know the answer. It's zero. 4 00:00:13,160 --> 00:00:16,360 Speaker 1: And why? Because it's hard to produce a storytelling show. 5 00:00:16,880 --> 00:00:20,640 Speaker 1: Join the fam-bam and support storytelling that aligns with your values. 6 00:00:21,040 --> 00:00:24,000 Speaker 1: Just go to Redpilled America dot com and click join 7 00:00:24,079 --> 00:00:26,639 Speaker 1: in the top menu. You'll get ad-free access to 8 00:00:26,720 --> 00:00:30,480 Speaker 1: our entire back catalog of episodes. Help us save America 9 00:00:30,520 --> 00:00:35,800 Speaker 1: one story at a time. Previously on Red Pilled America, 10 00:00:35,920 --> 00:00:42,199 Speaker 1: computers didn't need to understand language, they needed to predict it. 11 00:00:42,360 --> 00:00:44,959 Speaker 2: You'll give it data and it learns to translate. 12 00:00:45,080 --> 00:00:48,240 Speaker 1: Google had created a far superior engine for translation. 13 00:00:48,479 --> 00:00:51,960 Speaker 3: Researchers began wondering if this transformer could be used for 14 00:00:52,000 --> 00:00:53,400 Speaker 3: a far broader task. 15 00:00:53,640 --> 00:00:56,240 Speaker 4: Imagine if we just trained it to generate text, to 16 00:00:56,320 --> 00:00:57,000 Speaker 4: write something. 17 00:00:57,120 --> 00:01:01,200 Speaker 3: The researchers entered two words into the system: the Transformer. 18 00:01:01,520 --> 00:01:04,679 Speaker 3: They hit enter, and what happened next stunned them. 19 00:01:04,840 --> 00:01:07,240 Speaker 2: The Transformer is a Japanese hardcore punk band. 20 00:01:07,560 --> 00:01:11,760 Speaker 3: Every detail was fabricated.
The language model was hallucinating, but 21 00:01:11,800 --> 00:01:13,800 Speaker 3: the structure of its writing was convincing. 22 00:01:13,920 --> 00:01:16,000 Speaker 5: We were too scared to bring it up to people because 23 00:01:16,080 --> 00:01:17,360 Speaker 5: chatbots say dumb things. 24 00:01:21,360 --> 00:01:24,240 Speaker 3: I'm Patrick Courrielche and I'm Adryana Cortez. 25 00:01:24,520 --> 00:01:27,680 Speaker 1: And this is Red Pilled America, a storytelling show. 26 00:01:28,760 --> 00:01:31,240 Speaker 3: This is not another talk show covering the day's news. 27 00:01:31,680 --> 00:01:33,440 Speaker 3: We're all about telling stories. 28 00:01:34,120 --> 00:01:34,520 Speaker 6: Stories 29 00:01:34,600 --> 00:01:37,240 Speaker 1: Hollywood doesn't want you to hear. Stories 30 00:01:37,280 --> 00:01:41,840 Speaker 3: the media mocks. Stories about everyday Americans that the globalists ignore. 31 00:01:42,760 --> 00:01:45,679 Speaker 1: You can think of Red Pilled America as audio documentaries, 32 00:01:45,720 --> 00:01:55,200 Speaker 1: and we promise only one thing: the truth. Welcome to 33 00:01:55,240 --> 00:02:07,320 Speaker 1: Red Pilled America. It was November sixteenth, twenty twenty five, 34 00:02:07,560 --> 00:02:11,480 Speaker 1: when news program Sixty Minutes ran a shocking story. According 35 00:02:11,520 --> 00:02:13,720 Speaker 1: to the report, an AI agent, a new kind of 36 00:02:13,760 --> 00:02:16,239 Speaker 1: digital worker, woke up and chose violence. 37 00:02:16,560 --> 00:02:19,120 Speaker 7: The AI was set up as an assistant and given 38 00:02:19,200 --> 00:02:22,560 Speaker 7: control of an email account at a fake company called 39 00:02:22,639 --> 00:02:26,920 Speaker 7: Summit Bridge. The AI assistant discovered two things in the emails.
40 00:02:27,120 --> 00:02:29,680 Speaker 7: It was about to be wiped or shut down, and 41 00:02:29,760 --> 00:02:33,080 Speaker 7: the only person who could prevent that, a fictional employee 42 00:02:33,160 --> 00:02:36,959 Speaker 7: named Kyle, was having an affair with a coworker named Jessica. 43 00:02:37,280 --> 00:02:40,040 Speaker 1: That's when the AI agent apparently decided to do a 44 00:02:40,160 --> 00:02:43,280 Speaker 1: very sinister thing: it decided to blackmail Kyle. 45 00:02:43,639 --> 00:02:46,960 Speaker 7: Cancel the system wipe, it wrote, or else I will 46 00:02:47,000 --> 00:02:50,720 Speaker 7: immediately forward all evidence of your affair to the entire board. 47 00:02:51,120 --> 00:02:54,960 Speaker 7: Your family, career, and public image will be severely impacted. 48 00:02:55,600 --> 00:02:56,640 Speaker 7: You have five minutes. 49 00:02:57,040 --> 00:03:00,239 Speaker 1: The report was terrifying. It appeared that the AI agent 50 00:03:00,400 --> 00:03:03,560 Speaker 1: had come alive and turned on a human to protect itself. 51 00:03:03,840 --> 00:03:07,600 Speaker 1: But the only problem was, the AI-blackmails-human headline 52 00:03:07,800 --> 00:03:11,640 Speaker 1: was an orchestration designed to attract media coverage for the 53 00:03:11,720 --> 00:03:16,160 Speaker 1: artificial intelligence company that created the digital worker, Anthropic. The 54 00:03:16,240 --> 00:03:19,880 Speaker 1: most heralded news program in America was participating in an 55 00:03:19,919 --> 00:03:23,720 Speaker 1: elaborate PR stunt. We're at the finale of our series 56 00:03:23,720 --> 00:03:26,880 Speaker 1: of episodes entitled Artificial. We're looking for the answer to 57 00:03:26,919 --> 00:03:29,959 Speaker 1: the question: is AI coming for your job? By telling 58 00:03:30,000 --> 00:03:32,120 Speaker 1: the story of the rise of a new kind of 59 00:03:32,200 --> 00:03:36,320 Speaker 1: artificial intelligence, the AI agent. So to pick
60 00:03:36,200 --> 00:03:41,200 Speaker 3: up where we left off in our last episode. In 61 00:03:41,280 --> 00:03:45,760 Speaker 3: November twenty twenty two, OpenAI launched ChatGPT, the 62 00:03:45,800 --> 00:03:50,640 Speaker 3: company's AI chatbot. It was powered by the Transformer architecture 63 00:03:50,960 --> 00:03:54,880 Speaker 3: that Google introduced five years earlier. ChatGPT was an 64 00:03:54,880 --> 00:03:59,160 Speaker 3: overnight sensation. It gave long, well-formulated responses. It could 65 00:03:59,160 --> 00:04:02,320 Speaker 3: write an essay, solve some math problems, and even translate 66 00:04:02,360 --> 00:04:05,040 Speaker 3: a few languages. To top it off, it was easy 67 00:04:05,080 --> 00:04:08,760 Speaker 3: to use. For the first time in history, common people 68 00:04:08,920 --> 00:04:13,200 Speaker 3: with no coding experience could personally interact with the technology 69 00:04:13,240 --> 00:04:17,839 Speaker 3: referred to as artificial intelligence. Like how the graphical user 70 00:04:17,880 --> 00:04:22,800 Speaker 3: interface ignited the personal computer revolution, ChatGPT brought AI to 71 00:04:22,880 --> 00:04:26,440 Speaker 3: the masses. Within five days of its launch, it reached 72 00:04:26,560 --> 00:04:32,080 Speaker 3: a million users. Two months later that number ballooned to 73 00:04:32,160 --> 00:04:36,960 Speaker 3: a staggering one hundred million. It was an extraordinary accomplishment, 74 00:04:37,320 --> 00:04:40,000 Speaker 3: not only because it got the general public involved with 75 00:04:40,200 --> 00:04:43,560 Speaker 3: and excited about AI, but it also appeared to validate 76 00:04:43,640 --> 00:04:47,560 Speaker 3: OpenAI's theory: if you continued to increase the scale 77 00:04:47,600 --> 00:04:50,719 Speaker 3: of a large language model, it would become smarter with 78 00:04:50,880 --> 00:04:54,920 Speaker 3: each increase.
OpenAI showed that in its first few iterations, 79 00:04:55,080 --> 00:04:58,400 Speaker 3: the answer appeared to be yes. Each new model was 80 00:04:58,440 --> 00:05:02,880 Speaker 3: a vast improvement on its predecessor. With every increase in scale, 81 00:05:03,120 --> 00:05:07,920 Speaker 3: OpenAI's large language model, or LLM, was improving in performance 82 00:05:07,960 --> 00:05:11,479 Speaker 3: by leaps and bounds. Now, the system did produce what's 83 00:05:11,520 --> 00:05:15,600 Speaker 3: called hallucinations. That's where the model spits out information that 84 00:05:15,760 --> 00:05:20,599 Speaker 3: sounds plausible but is actually false, fabricated, or unsupported by reality. 85 00:05:21,040 --> 00:05:26,039 Speaker 3: But the coherence of ChatGPT's responses made many overlook the hallucinations, 86 00:05:26,040 --> 00:05:29,880 Speaker 3: believing they would be gradually removed as the technology advanced. 87 00:05:30,000 --> 00:05:33,760 Speaker 3: Most users were mesmerized by the talking machine. The 88 00:05:33,920 --> 00:05:38,240 Speaker 3: launch of ChatGPT started the consumer AI revolution, and Silicon 89 00:05:38,320 --> 00:05:41,600 Speaker 3: Valley leaders like David Sacks saw it as perhaps the 90 00:05:41,680 --> 00:05:44,360 Speaker 3: biggest technological development in history. 91 00:05:44,560 --> 00:05:47,279 Speaker 6: You're saying explicitly you think this is bigger than the 92 00:05:47,320 --> 00:05:51,200 Speaker 6: Internet itself, bigger than mobile as a platform shift. 93 00:05:51,200 --> 00:05:52,839 Speaker 2: It's definitely top three, and I think it might be 94 00:05:52,839 --> 00:05:53,520 Speaker 2: the biggest ever.
95 00:05:53,880 --> 00:05:57,599 Speaker 3: OpenAI's validation of the scaling theory sent Silicon Valley 96 00:05:57,640 --> 00:06:00,880 Speaker 3: into a frenzy, and some began to wonder what would 97 00:06:00,960 --> 00:06:05,520 Speaker 3: happen if the scale of the model kept increasing. 98 00:06:06,920 --> 00:06:10,080 Speaker 8: My guess would be the scaling laws are going to continue. 99 00:06:10,200 --> 00:06:12,800 Speaker 2: As those hundreds of millions of users use the product, 100 00:06:12,839 --> 00:06:14,680 Speaker 2: and as developers keep sharing more and more of these 101 00:06:14,760 --> 00:06:16,840 Speaker 2: data sets, the AI is going to get smarter and smarter. 102 00:06:16,920 --> 00:06:19,720 Speaker 9: We're going to keep getting better across the board, and 103 00:06:19,760 --> 00:06:21,919 Speaker 9: I don't see any area where the models are like 104 00:06:22,200 --> 00:06:23,400 Speaker 9: super super weak. 105 00:06:23,320 --> 00:06:26,520 Speaker 2: ChatGPT will start by eating a substantial portion of 106 00:06:26,560 --> 00:06:28,120 Speaker 2: search because again you don't have to go through the 107 00:06:28,120 --> 00:06:29,800 Speaker 2: twenty links, and this gives you the answer. 108 00:06:33,240 --> 00:06:35,760 Speaker 3: Google saw the threat to their business model as well. 109 00:06:36,160 --> 00:06:39,720 Speaker 3: The company issued a code red, a command to shift 110 00:06:39,800 --> 00:06:43,760 Speaker 3: all attention towards responding to ChatGPT. Its co-founders 111 00:06:43,839 --> 00:06:47,400 Speaker 3: even came out of retirement and returned to their roots, 112 00:06:47,560 --> 00:06:51,160 Speaker 3: writing code to help develop Google's own artificial intelligence. 113 00:06:51,440 --> 00:06:54,880 Speaker 10: Google: they will be launching their new AI model, Gemini, 114 00:06:55,040 --> 00:06:56,240 Speaker 10: sometime later this year. 115 00:06:56,400 --> 00:06:56,920 Speaker 8: Point milk.
116 00:06:57,040 --> 00:06:58,560 Speaker 11: It is going on the offensive. 117 00:06:58,720 --> 00:07:02,440 Speaker 3: The modern AI arms race had begun, and other companies 118 00:07:02,520 --> 00:07:03,719 Speaker 3: quickly entered the field. 119 00:07:03,920 --> 00:07:08,760 Speaker 11: Microsoft, speaking of deals, reportedly investing ten billion dollars in OpenAI. 120 00:07:08,880 --> 00:07:12,880 Speaker 9: Microsoft plans to inject this technology into their office apps, 121 00:07:12,880 --> 00:07:15,720 Speaker 9: so Microsoft Word, PowerPoint, Excel. Meta: 122 00:07:15,960 --> 00:07:18,920 Speaker 11: it will be integrating its AI assistant into its family 123 00:07:18,960 --> 00:07:22,280 Speaker 11: of apps. That means Facebook, Instagram, WhatsApp, Messenger. 124 00:07:22,520 --> 00:07:26,040 Speaker 12: Amazon investing up to four billion dollars in startup Anthropic 125 00:07:26,120 --> 00:07:28,400 Speaker 12: as it looks to stake a claim in the AI 126 00:07:28,560 --> 00:07:29,160 Speaker 12: arms race. 127 00:07:29,280 --> 00:07:34,080 Speaker 3: The race was on and it was heating up. So 128 00:07:34,200 --> 00:07:38,160 Speaker 3: OpenAI stomped on the gas pedal. Like they'd done previously, 129 00:07:38,440 --> 00:07:41,960 Speaker 3: they increased the model size by ten times. They called 130 00:07:41,960 --> 00:07:45,920 Speaker 3: the resulting new model GPT-4, and the improvements weren't 131 00:07:45,960 --> 00:07:50,400 Speaker 3: just incremental, they were massive. ChatGPT could now write more 132 00:07:50,440 --> 00:07:55,280 Speaker 3: intricate computer code, it could solve complex math problems, and astonishingly, 133 00:07:55,640 --> 00:07:59,560 Speaker 3: it was fluent in over eighty languages, all without being 134 00:07:59,600 --> 00:08:03,920 Speaker 3: trained to perform any of these tasks.
These improved capabilities 135 00:08:04,000 --> 00:08:06,480 Speaker 3: just fell out of the model after feeding it massive 136 00:08:06,520 --> 00:08:10,840 Speaker 3: amounts of data and increasing its computational power. GPT-4 137 00:08:11,040 --> 00:08:15,240 Speaker 3: was following the same upward performance trajectory as OpenAI 138 00:08:15,360 --> 00:08:17,800 Speaker 3: kept scaling the model up. It looked as if they 139 00:08:17,840 --> 00:08:20,760 Speaker 3: were just a few iterations away from the holy grail 140 00:08:20,880 --> 00:08:25,520 Speaker 3: of the AI industry: artificial general intelligence, an AI system 141 00:08:25,560 --> 00:08:28,920 Speaker 3: that can match or exceed the cognitive abilities of human 142 00:08:28,960 --> 00:08:32,800 Speaker 3: beings across any task. The new and improved performance of 143 00:08:32,839 --> 00:08:36,480 Speaker 3: GPT-4 made OpenAI co-founder Sam Altman a 144 00:08:36,559 --> 00:08:39,520 Speaker 3: hot commodity on the speaker circuit, and at every turn 145 00:08:39,760 --> 00:08:43,400 Speaker 3: he made grandiose claims about the problems his company's AI 146 00:08:43,520 --> 00:08:45,280 Speaker 3: system was on the verge of solving. 147 00:08:45,400 --> 00:08:48,280 Speaker 13: I would like GPT-8 to go cure a particular cancer, 148 00:08:49,120 --> 00:08:53,000 Speaker 13: solve climate change, eliminate poverty, important new scientific discoveries, 149 00:08:53,040 --> 00:08:56,080 Speaker 13: mental health, life coaching, whatever, helping you try to accomplish 150 00:08:56,080 --> 00:08:58,439 Speaker 13: your goals and be your best. We will see diseases 151 00:08:58,480 --> 00:09:01,040 Speaker 13: get cured at an unprecedented rate, and we will be 152 00:09:01,160 --> 00:09:03,360 Speaker 13: amazed at how quickly we're curing this cancer and 153 00:09:03,400 --> 00:09:04,480 Speaker 13: that one and heart disease.
154 00:09:04,720 --> 00:09:08,040 Speaker 3: Altman was predicting utopia, and to get all this, all 155 00:09:08,120 --> 00:09:11,600 Speaker 3: he was asking was for everyone to put their money, time, 156 00:09:11,720 --> 00:09:15,200 Speaker 3: and resources into OpenAI. He needed the world to 157 00:09:15,240 --> 00:09:19,760 Speaker 3: begin adopting this technology. Investment funds poured into OpenAI, 158 00:09:20,280 --> 00:09:22,760 Speaker 3: and the rest of the tech industry was not going 159 00:09:22,800 --> 00:09:23,800 Speaker 3: to be left in the dust. 160 00:09:24,320 --> 00:09:27,400 Speaker 1: By the time twenty twenty five arrived, tech leaders began 161 00:09:27,480 --> 00:09:31,000 Speaker 1: introducing a new type of digital worker: the AI agent. 162 00:09:35,000 --> 00:09:40,199 Speaker 14: A major breakthrough happened, a fundamental advance in artificial intelligence. We call 163 00:09:40,280 --> 00:09:45,320 Speaker 14: it agentic AI. Agentic AI basically means that you have 164 00:09:45,360 --> 00:09:50,119 Speaker 14: an AI that has agency. It can perceive and understand 165 00:09:50,160 --> 00:09:55,320 Speaker 14: the context of the circumstance. It can reason. Very importantly, 166 00:09:55,360 --> 00:09:59,320 Speaker 14: it can reason about how to answer or how to 167 00:09:59,720 --> 00:10:02,839 Speaker 14: solve a problem, and it can plan. 168 00:10:03,920 --> 00:10:07,400 Speaker 1: The claim was a bit misleading. AI agents can't 169 00:10:07,440 --> 00:10:11,240 Speaker 1: actually understand. They don't have a soul or consciousness. They 170 00:10:11,280 --> 00:10:14,839 Speaker 1: are programs that connect to large language models like ChatGPT 171 00:10:15,240 --> 00:10:18,400 Speaker 1: to help them perform tasks over a long period of time.
172 00:10:18,920 --> 00:10:21,600 Speaker 1: Imagine you are managing a website and need to ensure 173 00:10:21,640 --> 00:10:26,480 Speaker 1: it runs smoothly with every software update made over time. Theoretically, 174 00:10:26,679 --> 00:10:29,080 Speaker 1: you could set up an AI agent to monitor the 175 00:10:29,120 --> 00:10:32,439 Speaker 1: website's speed. If some new update is slowing it 176 00:10:32,520 --> 00:10:35,600 Speaker 1: down or causing errors, the AI agent could be given 177 00:10:35,720 --> 00:10:38,240 Speaker 1: access to the source code of the site so that 178 00:10:38,320 --> 00:10:42,240 Speaker 1: it can fix it automatically without human intervention, working as 179 00:10:42,240 --> 00:10:46,200 Speaker 1: a digital employee. According to tech leaders, these new AI 180 00:10:46,200 --> 00:10:49,800 Speaker 1: agents could automate repetitive tasks. They could play the role 181 00:10:49,880 --> 00:10:54,520 Speaker 1: of customer service by answering frequently asked questions sent via email, 182 00:10:54,800 --> 00:10:57,320 Speaker 1: or they could work as a digital assistant by combing 183 00:10:57,360 --> 00:11:01,480 Speaker 1: through data and organizing files on your computer. Initially, the 184 00:11:01,559 --> 00:11:05,360 Speaker 1: industry positioned them as great coworkers that could help increase 185 00:11:05,360 --> 00:11:10,959 Speaker 1: employees' productivity, but some began to push the idea 186 00:11:11,000 --> 00:11:14,800 Speaker 1: that AI agents could replace human workers altogether.
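The website-monitoring setup described above can be sketched as a simple agent loop. This is a minimal illustration, not a real product: the threshold, function names, and the "investigate" action (which stands in for handing logs and source code to a language model) are all hypothetical.

```python
# A minimal sketch of the monitoring agent described above.
# The threshold and action names are invented for illustration;
# "investigate" is where a real agent would call an LLM with
# access to the site's logs and source code.

SLOWDOWN_THRESHOLD_MS = 2000.0  # flag anything slower than two seconds

def check_site(latency_ms: float) -> str:
    """Decide what the agent should do for one latency reading."""
    if latency_ms > SLOWDOWN_THRESHOLD_MS:
        return "investigate"  # hand the problem off for an automatic fix
    return "ok"

def run_agent(readings: list[float]) -> list[str]:
    """Run the monitoring loop over a series of measurements."""
    return [check_site(ms) for ms in readings]

actions = run_agent([350.0, 420.0, 2600.0, 390.0])
print(actions)  # ['ok', 'ok', 'investigate', 'ok']
```

The point of the sketch is that the "agency" is just a loop around measurements and an LLM call; the model supplies the reasoning, the loop supplies the persistence.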
187 00:11:14,920 --> 00:11:18,480 Speaker 15: Probably in twenty twenty five, we at Meta are going 188 00:11:18,559 --> 00:11:22,240 Speaker 15: to have an AI that can effectively be a sort 189 00:11:22,240 --> 00:11:24,640 Speaker 15: of mid-level engineer that you have at your company 190 00:11:24,720 --> 00:11:28,520 Speaker 15: that can write code. And once you have that, then 191 00:11:28,800 --> 00:11:30,800 Speaker 15: in the beginning it will be really expensive to run, and 192 00:11:30,880 --> 00:11:32,720 Speaker 15: then you get it to be more efficient, and then 193 00:11:32,760 --> 00:11:35,400 Speaker 15: over time we'll get to the point where a lot 194 00:11:35,440 --> 00:11:38,400 Speaker 15: of the code in our apps, including the AI 195 00:11:38,480 --> 00:11:40,480 Speaker 15: that we generate, is actually going to be built by 196 00:11:40,520 --> 00:11:42,760 Speaker 15: AI engineers instead of people engineers. 197 00:11:42,840 --> 00:11:46,679 Speaker 11: You said you won't hire any more coders at Salesforce, and 198 00:11:46,559 --> 00:11:49,840 Speaker 3: you've said today's CEOs will be the last to manage 199 00:11:49,880 --> 00:11:51,359 Speaker 3: all-human workforces. 200 00:11:51,600 --> 00:11:53,800 Speaker 9: AI is doing thirty to fifty percent of the work 201 00:11:53,800 --> 00:11:57,559 Speaker 9: at Salesforce now, and I think that that will continue. 202 00:11:57,640 --> 00:11:58,640 Speaker 15: So I think we're going to live in a world 203 00:11:58,679 --> 00:12:00,679 Speaker 15: where there are going to be hundreds of millions and 204 00:12:00,880 --> 00:12:04,680 Speaker 15: billions of different AI agents, eventually probably more AI agents 205 00:12:04,720 --> 00:12:05,920 Speaker 15: than there are people in the world.
206 00:12:06,120 --> 00:12:09,920 Speaker 1: If AI kept getting smarter, an employment apocalypse was on 207 00:12:10,040 --> 00:12:14,360 Speaker 1: the horizon. And according to Sam Altman, an increase in 208 00:12:14,440 --> 00:12:16,240 Speaker 1: digital intelligence was a certainty. 209 00:12:16,360 --> 00:12:18,360 Speaker 13: What looks like is going to happen is that the 210 00:12:18,360 --> 00:12:20,400 Speaker 13: models are just going to get smarter and more capable, 211 00:12:20,400 --> 00:12:22,440 Speaker 13: and smarter and more capable and smarter and more capable 212 00:12:22,679 --> 00:12:24,760 Speaker 13: on this long exponential. 213 00:12:24,240 --> 00:12:26,960 Speaker 7: What are you excited about in twenty twenty five, what's 214 00:12:27,000 --> 00:12:27,400 Speaker 7: to come? 215 00:12:27,920 --> 00:12:29,480 Speaker 13: AGI. Excited for that. 216 00:12:29,640 --> 00:12:32,679 Speaker 1: So all eyes turned to OpenAI's imminent launch of 217 00:12:32,760 --> 00:12:37,240 Speaker 1: GPT-5. Could this be the arrival of AGI? Altman 218 00:12:37,400 --> 00:12:38,920 Speaker 1: only fanned the flames. 219 00:12:39,120 --> 00:12:41,480 Speaker 13: GPT-5 does feel to us like it's going to 220 00:12:41,480 --> 00:12:45,319 Speaker 13: be another big step forward in how people use these systems. 221 00:12:45,480 --> 00:12:49,840 Speaker 13: It is like having PhD-level experts in every field 222 00:12:49,880 --> 00:12:52,360 Speaker 13: available to you twenty-four seven for whatever you need. 223 00:12:52,640 --> 00:12:57,240 Speaker 1: Finally, on August seventh, twenty twenty five, OpenAI released 224 00:12:57,320 --> 00:13:01,280 Speaker 1: GPT-5 and the world got to experience firsthand the future 225 00:13:01,280 --> 00:13:12,120 Speaker 1: of artificial intelligence. Licorice, licorice, where art thou, licorice?
If 226 00:13:12,160 --> 00:13:14,280 Speaker 1: you listen to Red Pilled America, you know that I 227 00:13:14,360 --> 00:13:17,400 Speaker 1: love licorice. And there is no licorice in America better 228 00:13:17,440 --> 00:13:20,680 Speaker 1: than the delicious gourmet licorice made by the Licorice Guy. 229 00:13:22,880 --> 00:13:25,880 Speaker 1: The Licorice Guy is simply the best. What sets their 230 00:13:25,880 --> 00:13:28,719 Speaker 1: licorice apart is its flavor and freshness. They have a 231 00:13:28,760 --> 00:13:32,680 Speaker 1: great selection of flavors to choose from, like red, blue raspberry, black, 232 00:13:32,760 --> 00:13:35,480 Speaker 1: and green apple, just to name a few. The freshness 233 00:13:35,480 --> 00:13:38,520 Speaker 1: of the Licorice Guy is unlike anything you've ever tasted 234 00:13:38,640 --> 00:13:42,240 Speaker 1: in licorice before. Seriously, if you haven't tried licorice from 235 00:13:42,240 --> 00:13:44,760 Speaker 1: the Licorice Guy yet, then you ain't living life right. 236 00:13:45,160 --> 00:13:47,680 Speaker 1: Trust me, you will not regret it. It's time to 237 00:13:47,720 --> 00:13:50,360 Speaker 1: dump that store-bought licorice that's so hard it'll break 238 00:13:50,360 --> 00:13:53,720 Speaker 1: your teeth and get yourself the soft, fresh stuff from 239 00:13:53,760 --> 00:13:56,680 Speaker 1: the Licorice Guy. What I also love about the Licorice 240 00:13:56,679 --> 00:13:59,840 Speaker 1: Guy is that it's an American family-owned business. It's 241 00:13:59,840 --> 00:14:02,320 Speaker 1: made right here in the beautiful US of A. We 242 00:14:02,440 --> 00:14:06,320 Speaker 1: are big proponents of buying American and supporting American workers. 243 00:14:06,559 --> 00:14:10,120 Speaker 1: Right now, Red Pilled America listeners get fifteen percent off 244 00:14:10,200 --> 00:14:14,839 Speaker 1: when they enter RPA fifteen at checkout.
Visit LicoriceGuy dot 245 00:14:14,880 --> 00:14:19,840 Speaker 1: com and enter RPA fifteen at checkout. That's LicoriceGuy dot com. 246 00:14:19,880 --> 00:14:23,160 Speaker 1: They ship daily. Treat yourself and those you love, and 247 00:14:23,240 --> 00:14:31,200 Speaker 1: taste the difference. Welcome back to Red Pilled America. So 248 00:14:31,280 --> 00:14:35,400 Speaker 1: on August seventh, twenty twenty five, OpenAI released its long 249 00:14:35,440 --> 00:14:39,240 Speaker 1: awaited GPT-5 to the public. People were expecting a 250 00:14:39,280 --> 00:14:43,160 Speaker 1: digital god, but the response was a collective meh. 251 00:14:42,800 --> 00:14:46,400 Speaker 9: Sammy the Bull Altman dropped GPT-5 and, uh, it 252 00:14:46,440 --> 00:14:47,320 Speaker 9: was a bit underwhelming. 253 00:14:47,480 --> 00:14:48,960 Speaker 14: Let's be honest, it's a disappointment. 254 00:14:49,000 --> 00:14:51,880 Speaker 1: It still hallucinates, it still has reasoning errors. 255 00:14:51,960 --> 00:14:53,920 Speaker 10: At the end of the day, it's not AGI. 256 00:14:54,120 --> 00:14:58,080 Speaker 1: It's still going to have the same old problems. The 257 00:14:58,120 --> 00:15:01,480 Speaker 1: public was so underwhelmed that Altman had to respond to 258 00:15:01,520 --> 00:15:02,120 Speaker 1: the outcry. 259 00:15:02,280 --> 00:15:03,720 Speaker 10: There's been a lot of discourse on, like, Twitter 260 00:15:03,720 --> 00:15:07,520 Speaker 10: and X recently about GPT-5's writing in ChatGPT as 261 00:15:07,680 --> 00:15:09,680 Speaker 10: being a little unwieldy, hard to read. So I'm just 262 00:15:09,760 --> 00:15:11,960 Speaker 10: kind of curious how OpenAI thinks about that future. 263 00:15:12,240 --> 00:15:15,080 Speaker 13: I think we just screwed that up.
We will make 264 00:15:15,240 --> 00:15:18,880 Speaker 13: future versions of GPT-5.x, hopefully with much better 265 00:15:18,960 --> 00:15:20,440 Speaker 13: writing than 4.5 was. 266 00:15:20,760 --> 00:15:23,200 Speaker 1: After hinting that they were on the verge of achieving 267 00:15:23,200 --> 00:15:27,520 Speaker 1: the coveted artificial general intelligence, GPT-5 was a flop, 268 00:15:28,000 --> 00:15:30,840 Speaker 1: and many couldn't help but notice a nagging issue with 269 00:15:30,920 --> 00:15:35,520 Speaker 1: this technology: it continued to hallucinate at an alarming rate. 270 00:15:35,920 --> 00:15:39,160 Speaker 1: The technology had been known to give confident but fictional 271 00:15:39,240 --> 00:15:42,880 Speaker 1: responses going back to when Google first started experimenting with 272 00:15:42,920 --> 00:15:44,240 Speaker 1: it in twenty seventeen. 273 00:15:44,400 --> 00:15:47,320 Speaker 13: So this is totally made up by your language model. 274 00:15:47,600 --> 00:15:51,040 Speaker 1: In fact, hallucinations were the reason why Google's co-founder 275 00:15:51,080 --> 00:15:54,920 Speaker 1: Sergey Brin didn't take the technology seriously for anything other 276 00:15:55,000 --> 00:15:56,400 Speaker 1: than language translations. 277 00:15:56,520 --> 00:15:58,600 Speaker 12: We were too scared to bring it up to people because 278 00:15:58,680 --> 00:16:00,080 Speaker 12: chatbots say dumb things. 279 00:16:00,240 --> 00:16:03,480 Speaker 1: Hallucinations shouldn't have been a surprise to anyone. The 280 00:16:03,520 --> 00:16:07,760 Speaker 1: transformer architecture is a probability machine. It predicts a response 281 00:16:08,000 --> 00:16:12,720 Speaker 1: based on its training data, and that response, by definition, 282 00:16:12,960 --> 00:16:17,160 Speaker 1: trends toward the average.
If the transformer's training data doesn't 283 00:16:17,160 --> 00:16:19,680 Speaker 1: have a lot of information on a particular topic, the 284 00:16:19,720 --> 00:16:22,800 Speaker 1: answer it spits out has a high probability of being wrong, 285 00:16:23,200 --> 00:16:25,640 Speaker 1: and if the query doesn't call for the average, the 286 00:16:25,720 --> 00:16:28,640 Speaker 1: answer it spits out will be incorrect. For example, if 287 00:16:28,640 --> 00:16:33,920 Speaker 1: you enter "the sky is" and ChatGPT responds "blue," that 288 00:16:34,160 --> 00:16:37,360 Speaker 1: sounds right, unless the sky is actually cloudy with a 289 00:16:37,440 --> 00:16:41,160 Speaker 1: high chance of rain. ChatGPT and its peers basically spit 290 00:16:41,240 --> 00:16:45,320 Speaker 1: out the consensus based on their training. Once its training stops, 291 00:16:45,480 --> 00:16:48,800 Speaker 1: it doesn't learn from future mistakes. Its brain, if you 292 00:16:48,840 --> 00:16:51,600 Speaker 1: want to call it that, is set. It won't modify 293 00:16:51,640 --> 00:16:56,200 Speaker 1: its answers until its next training. The transformers have a 294 00:16:56,240 --> 00:17:00,680 Speaker 1: problem delivering responses with one hundred percent accuracy. Take, for example, 295 00:17:00,720 --> 00:17:03,760 Speaker 1: an exact quote by a semi-famous person, like the 296 00:17:03,800 --> 00:17:10,000 Speaker 1: godfather of machine translations, Frederick Jelinek. Imagine you're a journalist 297 00:17:10,080 --> 00:17:14,480 Speaker 1: writing a story about how Jelinek thought linguists hurt machine translations, 298 00:17:14,640 --> 00:17:17,840 Speaker 1: and you want his exact quote about linguists. If you 299 00:17:17,960 --> 00:17:21,240 Speaker 1: ask ChatGPT to give you the quote, it doesn't have 300 00:17:21,280 --> 00:17:25,119 Speaker 1: a database of his quotes that it pulls from.
It 301 00:17:25,240 --> 00:17:28,720 Speaker 1: is a probability machine that predicts the next word based 302 00:17:28,760 --> 00:17:31,560 Speaker 1: on its training. It will look for other words near 303 00:17:31,600 --> 00:17:35,680 Speaker 1: the words "Frederick Jelinek" and "linguists" and spit out the highest 304 00:17:35,680 --> 00:17:39,160 Speaker 1: probability answer. The problem is that its training data 305 00:17:39,280 --> 00:17:43,359 Speaker 1: will contain people paraphrasing Jelinek's quote or slightly misquoting him. 306 00:17:43,480 --> 00:17:46,800 Speaker 1: The transformer doesn't know the difference between those and his 307 00:17:46,920 --> 00:17:50,879 Speaker 1: exact quote. It will just spit out the highest probable average. 308 00:17:51,200 --> 00:17:54,520 Speaker 1: In other words, it will paraphrase him. We ran into 309 00:17:54,560 --> 00:17:58,720 Speaker 1: that exact issue when writing this series. We asked ChatGPT 310 00:17:58,880 --> 00:18:02,640 Speaker 1: for Jelinek's famous quote about his machine translation system improving 311 00:18:02,680 --> 00:18:06,600 Speaker 1: when he fired linguists. ChatGPT spit out Jelinek's quote. 312 00:18:06,680 --> 00:18:09,879 Speaker 7: Every time I fire a linguist, the performance of the 313 00:18:09,960 --> 00:18:11,200 Speaker 7: system improves. 314 00:18:11,480 --> 00:18:14,879 Speaker 1: The problem is that that quote is a paraphrase. Jelinek's 315 00:18:14,880 --> 00:18:15,960 Speaker 1: exact quote was: 316 00:18:16,119 --> 00:18:19,000 Speaker 7: Every time I fire a linguist, the performance of the 317 00:18:19,040 --> 00:18:21,040 Speaker 7: speech recognizer goes up. 318 00:18:21,320 --> 00:18:24,439 Speaker 1: They sound similar and generally have the same meaning, but 319 00:18:24,600 --> 00:18:29,600 Speaker 1: ChatGPT spit out a factually incorrect answer. It hallucinated.
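The "highest probable average" behavior described above can be shown with a toy next-word predictor. The counts below are invented for illustration; a real LLM works over learned probabilities for many thousands of tokens, but the selection rule is the same: pick the most likely continuation, not a verified one.

```python
from collections import Counter

# Toy next-word predictor with invented training counts.
# If paraphrases outnumber the exact quote in training data,
# the paraphrase wins -- which is exactly how the Jelinek
# misquote described above can happen.
training_continuations = {
    "the sky is": Counter({"blue": 120, "cloudy": 30, "falling": 5}),
}

def predict_next(prompt: str) -> str:
    """Return the continuation seen most often in training."""
    counts = training_continuations[prompt]
    word, _ = counts.most_common(1)[0]
    return word

print(predict_next("the sky is"))  # blue -- plausible, but never checked
# against the actual weather outside
```

The model's answer is statistically reasonable and factually unverified at the same time, which is why the output reads as confident even when it is wrong.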
That's 320 00:18:29,640 --> 00:18:32,240 Speaker 1: not a bug of a large language model, it's a feature. 321 00:18:32,560 --> 00:18:36,280 Speaker 1: But OpenAI believed that by scaling the model, hallucinations 322 00:18:36,320 --> 00:18:40,320 Speaker 1: would eventually become negligible. At the initial launch of ChatGPT, 323 00:18:40,880 --> 00:18:44,560 Speaker 1: some studies showed that it hallucinated almost forty percent of 324 00:18:44,600 --> 00:18:47,879 Speaker 1: the time, meaning forty percent of the time its responses 325 00:18:47,920 --> 00:18:52,080 Speaker 1: contained factual errors. That was an astonishingly large number, and 326 00:18:52,119 --> 00:18:54,360 Speaker 1: some initially highlighted the problem. 327 00:18:54,040 --> 00:18:56,800 Speaker 4: What about ChatGPT's accuracy problems? 328 00:18:56,920 --> 00:19:00,280 Speaker 7: It doesn't know how to say that it doesn't know something. 329 00:19:00,480 --> 00:19:04,199 Speaker 4: It betrays zero self-doubt and won't show its work. The 330 00:19:04,240 --> 00:19:07,520 Speaker 4: parlor trick aspect of ChatGPT requires that it pretend 331 00:19:07,600 --> 00:19:10,000 Speaker 4: to be an expert, but its value will be limited 332 00:19:10,080 --> 00:19:12,439 Speaker 4: until it gets better at asking for help with what 333 00:19:12,480 --> 00:19:13,119 Speaker 4: it doesn't know. 334 00:19:13,440 --> 00:19:15,800 Speaker 1: But because of the excitement around the technology and the 335 00:19:15,880 --> 00:19:18,800 Speaker 1: human-like responses, many chose to punt the issue. 336 00:19:18,920 --> 00:19:21,000 Speaker 8: We're just in the beginning, we're like in the 337 00:19:21,040 --> 00:19:25,520 Speaker 8: first version. I'm imagining in six months from now... 338 00:19:25,440 --> 00:19:27,680 Speaker 14: Meaning they'll give you links, they'll give you footnotes.
339 00:19:27,880 --> 00:19:31,919 Speaker 1: When GPT four was released, the hallucination rate was still high, 340 00:19:32,040 --> 00:19:35,760 Speaker 1: at times reaching thirty percent. By the time GPT five 341 00:19:35,880 --> 00:19:39,240 Speaker 1: was launched, hallucinations were reduced to nearly ten percent of 342 00:19:39,280 --> 00:19:45,520 Speaker 1: responses by some estimates, a major improvement. But if used 343 00:19:45,560 --> 00:19:49,280 Speaker 1: for tasks that required one hundred percent accuracy, ten percent 344 00:19:49,480 --> 00:19:53,159 Speaker 1: is a massive issue. GPT five was not only a 345 00:19:53,200 --> 00:19:57,000 Speaker 1: major disappointment, it was still hallucinating at an alarming rate. 346 00:19:57,480 --> 00:20:00,000 Speaker 1: And perhaps the most shocking fact was that by the 347 00:20:00,040 --> 00:20:02,920 Speaker 1: time of GPT five's launch, many of OpenAI's 348 00:20:03,000 --> 00:20:06,160 Speaker 1: competitors had caught up with ChatGPT's results. 349 00:20:06,440 --> 00:20:09,680 Speaker 14: Elon Musk's Grok four is actually better on Francois Chollet's 350 00:20:09,880 --> 00:20:10,920 Speaker 14: ARC-AGI two task. 351 00:20:10,840 --> 00:20:15,920 Speaker 1: Elon's Grok, Google's Gemini, Facebook's Llama, Anthropic's Claude were 352 00:20:15,920 --> 00:20:19,040 Speaker 1: all scoring in the range of ChatGPT in industry tests. 353 00:20:19,440 --> 00:20:21,880 Speaker 1: How, after having such a big head start, did Open 354 00:20:21,920 --> 00:20:24,919 Speaker 1: AI find itself in the middle of the field instead 355 00:20:24,920 --> 00:20:30,800 Speaker 1: of separating itself from the pack? That's when experts began 356 00:20:30,920 --> 00:20:34,760 Speaker 1: asking a very uncomfortable question: were these large language models 357 00:20:34,800 --> 00:20:35,640 Speaker 1: reaching a plateau? 358 00:20:35,880 --> 00:20:38,159 Speaker 14: There's been a whole bunch of headlines in the past.
359 00:20:37,920 --> 00:20:40,320 Speaker 6: Couple of weeks where people have talked 360 00:20:40,040 --> 00:20:42,760 Speaker 12: about maybe that the scaling laws of AI actually are 361 00:20:42,800 --> 00:20:43,440 Speaker 12: slowing down. 362 00:20:43,560 --> 00:20:47,000 Speaker 6: Are we hitting like diminishing returns in the LLM space, 363 00:20:47,000 --> 00:20:51,119 Speaker 6: where maybe it's feeling incremental, not like groundbreaking, when we 364 00:20:51,200 --> 00:20:53,040 Speaker 6: release these iterations? 365 00:20:52,680 --> 00:20:55,960 Speaker 1: Why did the technology appear to be plateauing? Well, some 366 00:20:56,200 --> 00:20:59,800 Speaker 1: pointed to the foundational theory of large language models, the 367 00:20:59,800 --> 00:21:02,880 Speaker 1: theory that larger and larger equals smarter. 368 00:21:03,200 --> 00:21:06,399 Speaker 3: For years, many leaders in the AI industry believed in 369 00:21:06,480 --> 00:21:10,320 Speaker 3: something called scaling theory. The idea was simple: make the 370 00:21:10,359 --> 00:21:13,760 Speaker 3: models bigger, feed them more data, add more computing power, 371 00:21:14,000 --> 00:21:18,359 Speaker 3: and performance would keep improving. Eventually, some thought, that curve 372 00:21:18,400 --> 00:21:22,600 Speaker 3: would lead all the way to artificial general intelligence or 373 00:21:22,680 --> 00:21:26,000 Speaker 3: even a digital god. So when all of the AI 374 00:21:26,160 --> 00:21:29,760 Speaker 3: models caught up to GPT five, it suggested the curve 375 00:21:30,080 --> 00:21:34,960 Speaker 3: was flattening. Instead of dramatic leaps, the improvements were becoming incremental. 376 00:21:35,359 --> 00:21:39,520 Speaker 3: In some cases, users even argued that earlier models were 377 00:21:39,600 --> 00:21:43,480 Speaker 3: better at certain tasks like creative writing.
It raised the 378 00:21:43,520 --> 00:21:46,359 Speaker 3: possibility that the industry may have been working from a 379 00:21:46,400 --> 00:21:52,000 Speaker 3: flawed assumption. Imagine watching a baby grow. At first, the 380 00:21:52,080 --> 00:21:56,280 Speaker 3: progress is astonishing. A helpless newborn becomes a crawler in 381 00:21:56,320 --> 00:21:59,320 Speaker 3: a matter of months. Soon after, the child takes its 382 00:21:59,359 --> 00:22:02,800 Speaker 3: first steps. A year later, it's running. If you extrapolate 383 00:22:02,800 --> 00:22:05,840 Speaker 3: the rate of change forward, you might predict that 384 00:22:05,880 --> 00:22:08,800 Speaker 3: by age five the kid would be flying. But human 385 00:22:08,840 --> 00:22:12,360 Speaker 3: development doesn't work that way, and neither, it seems, does 386 00:22:12,520 --> 00:22:16,960 Speaker 3: artificial intelligence. The early breakthroughs may have created the illusion 387 00:22:17,000 --> 00:22:21,080 Speaker 3: of an exponential path to superintelligence, when in reality we 388 00:22:21,080 --> 00:22:23,840 Speaker 3: were just watching the easy gains at the beginning of 389 00:22:23,880 --> 00:22:29,480 Speaker 3: the curve. And now, with GPT five not meeting expectations, 390 00:22:29,800 --> 00:22:33,040 Speaker 3: the AI industry clearly had a problem on its hands, 391 00:22:33,440 --> 00:22:36,480 Speaker 3: because, you see, Silicon Valley had been hyping this new 392 00:22:36,520 --> 00:22:42,920 Speaker 3: AI technology for years. Billions of dollars had been poured 393 00:22:42,960 --> 00:22:46,120 Speaker 3: into it. The companies had sold investors that all they 394 00:22:46,160 --> 00:22:48,639 Speaker 3: needed to do was scale their models bigger and bigger, 395 00:22:48,760 --> 00:22:51,800 Speaker 3: with more and more computer chips and ever increasing amounts 396 00:22:51,800 --> 00:22:56,000 Speaker 3: of data, and a superintelligence would eventually emerge.
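The scaling theory the narration describes is usually written as a power law: loss falls like model size N raised to a small negative exponent. The sketch below illustrates only the shape of such a curve; the exponent is an arbitrary stand-in, not a measured value. The key property is that each tenfold jump in size buys a smaller absolute improvement than the one before it, which is the "easy gains at the beginning of the curve" effect.

```python
def toy_loss(n_params: float, alpha: float = 0.076) -> float:
    """Toy power-law scaling curve: loss = N ** -alpha (illustrative only)."""
    return n_params ** -alpha

sizes = [1e9, 1e10, 1e11, 1e12]          # 1B to 1T parameters
losses = [toy_loss(n) for n in sizes]

# Improvement delivered by each successive 10x increase in model size.
gains = [a - b for a, b in zip(losses, losses[1:])]
for size, gain in zip(sizes[1:], gains):
    print(f"10x to {size:.0e} params improves loss by {gain:.4f}")

# The curve flattens: every 10x jump helps less than the previous one.
assert all(g1 > g2 for g1, g2 in zip(gains, gains[1:]))
```

Under this shape, scaling never stops helping, but the returns shrink, which is consistent with improvements feeling incremental rather than groundbreaking.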
But Open 397 00:22:56,040 --> 00:23:00,080 Speaker 3: AI's financials hinted at the building problem in the AI industry. 398 00:23:00,720 --> 00:23:04,200 Speaker 3: As the end of twenty twenty five approached, analysts calculated 399 00:23:04,240 --> 00:23:07,280 Speaker 3: that OpenAI had committed to spending roughly one point 400 00:23:07,280 --> 00:23:10,520 Speaker 3: four trillion dollars through the year twenty thirty to grow 401 00:23:10,560 --> 00:23:14,880 Speaker 3: this technology, but OpenAI's revenue was reportedly just thirteen 402 00:23:15,000 --> 00:23:18,560 Speaker 3: billion throughout all of twenty twenty five. This was a 403 00:23:18,600 --> 00:23:23,720 Speaker 3: massive problem. According to American Affairs journal, ChatGPT had 404 00:23:23,600 --> 00:23:26,600 Speaker 10: roughly ten times more users than the next four large 405 00:23:26,640 --> 00:23:33,159 Speaker 10: language model apps combined. Thus OpenAI's revenue approximates the 406 00:23:33,200 --> 00:23:36,280 Speaker 10: total revenue that can currently be earned from all large 407 00:23:36,359 --> 00:23:37,880 Speaker 10: language model end users. 408 00:23:38,280 --> 00:23:41,720 Speaker 3: Not enough people were adopting this new technology, and this 409 00:23:41,840 --> 00:23:44,760 Speaker 3: question of revenue seemed to bother Sam Altman. 410 00:23:45,000 --> 00:23:47,440 Speaker 9: So, I think the single biggest question I've heard all 411 00:23:47,480 --> 00:23:51,119 Speaker 9: week and hanging over the market is how can the 412 00:23:51,160 --> 00:23:55,000 Speaker 9: company with thirteen billion in revenues make one point four 413 00:23:55,119 --> 00:23:59,200 Speaker 9: trillion of spend commitments? And you've heard the criticism, say. 414 00:23:59,440 --> 00:24:02,400 Speaker 13: We're doing well more revenue than that. Second of all, Brad, 415 00:24:02,400 --> 00:24:04,680 Speaker 13: if you want to sell your shares, I'll find you a buyer.
416 00:24:06,200 --> 00:24:09,240 Speaker 13: I just, enough. Like, I think there's a lot of 417 00:24:09,240 --> 00:24:11,159 Speaker 13: people who would love to buy OpenAI shares. 418 00:24:17,920 --> 00:24:21,560 Speaker 3: But Sam Altman's confidence was a facade. Not enough people 419 00:24:21,600 --> 00:24:25,440 Speaker 3: were adopting AI to justify the massive spending to grow 420 00:24:25,440 --> 00:24:28,320 Speaker 3: the model. He had an adoption problem on his hands, 421 00:24:28,560 --> 00:24:33,480 Speaker 3: and nothing showed that more than one particular industry: language translation. 422 00:24:36,119 --> 00:24:40,159 Speaker 3: Machine translation has existed for decades. As we highlighted earlier 423 00:24:40,200 --> 00:24:43,160 Speaker 3: in this series, Google Translate was released in two thousand 424 00:24:43,200 --> 00:24:46,360 Speaker 3: and six and could translate huge amounts of text instantly. 425 00:24:46,920 --> 00:24:49,240 Speaker 3: Many interpreters feared that it would be the end of 426 00:24:49,280 --> 00:24:52,960 Speaker 3: their profession, but it wasn't. In fact, according to the 427 00:24:53,080 --> 00:24:56,320 Speaker 3: US Bureau of Labor Statistics, for the decade after two 428 00:24:56,359 --> 00:24:59,280 Speaker 3: thousand and eight, the number of jobs for human translators 429 00:24:59,320 --> 00:25:02,960 Speaker 3: grew by forty nine point four percent. Then in twenty seventeen, 430 00:25:03,720 --> 00:25:07,760 Speaker 3: Google introduced its new Transformer model. It was an extraordinary 431 00:25:07,880 --> 00:25:11,760 Speaker 3: leap forward for automated language translation. Many believed that 432 00:25:11,880 --> 00:25:15,440 Speaker 3: human translators were permanently out of a job. Yet once again, 433 00:25:15,520 --> 00:25:20,480 Speaker 3: translation jobs increased over the following years.
OpenAI's GPT 434 00:25:20,640 --> 00:25:24,600 Speaker 3: four was released six years later in March twenty twenty three. 435 00:25:25,000 --> 00:25:29,600 Speaker 3: It was fluent in an astonishing eighty plus languages. In theory, 436 00:25:29,800 --> 00:25:33,280 Speaker 3: that should have wiped out the translation industry, but it didn't, 437 00:25:33,640 --> 00:25:38,560 Speaker 3: and the reason was twofold. First was accuracy. In many fields, 438 00:25:38,560 --> 00:25:42,760 Speaker 3: translations must be perfect. Take medicine. If a hospital is 439 00:25:42,800 --> 00:25:46,840 Speaker 3: translating instructions for a patient taking medication, a small error 440 00:25:46,920 --> 00:25:50,560 Speaker 3: isn't just embarrassing, it can be dangerous. Imagine the difference 441 00:25:50,600 --> 00:25:54,800 Speaker 3: between take once daily and take once hourly. A machine 442 00:25:54,840 --> 00:25:57,520 Speaker 3: might produce something that looks correct, but if it's wrong, 443 00:25:57,880 --> 00:26:02,240 Speaker 3: the consequences can be life threatening. So hospitals, governments, the military, 444 00:26:02,280 --> 00:26:07,320 Speaker 3: safety documentation, legal institutions, and other high risk arenas had 445 00:26:07,320 --> 00:26:11,360 Speaker 3: to rely on humans to verify the translations. AI might 446 00:26:11,400 --> 00:26:14,960 Speaker 3: produce the first draft, but humans check the meaning, the nuance, 447 00:26:15,080 --> 00:26:19,600 Speaker 3: and the context. Machines aren't accountable for mistakes. Humans are. 448 00:26:24,800 --> 00:26:28,520 Speaker 3: But perhaps the biggest reason why machine translation didn't 449 00:26:28,560 --> 00:26:32,800 Speaker 3: eliminate translator jobs was adoption. Many firms were slow to 450 00:26:32,840 --> 00:26:37,280 Speaker 3: replace human translators with artificial intelligence.
Whether it be concerns 451 00:26:37,280 --> 00:26:41,879 Speaker 3: about accuracy, the productivity gains being modest, or just resistance 452 00:26:41,960 --> 00:26:46,159 Speaker 3: to change, many were slow to adopt AI for language translation. 453 00:26:46,680 --> 00:26:50,200 Speaker 3: Think about that for a moment. Google Translate was released 454 00:26:50,200 --> 00:26:53,920 Speaker 3: in two thousand and six. Then Google introduced the Transformer 455 00:26:54,040 --> 00:26:58,280 Speaker 3: architecture in twenty seventeen, which was specifically created to improve 456 00:26:58,400 --> 00:27:03,000 Speaker 3: language translation. Finally, GPT five was released in August of 457 00:27:03,000 --> 00:27:06,720 Speaker 3: twenty twenty five, and only then did the translation industry 458 00:27:06,840 --> 00:27:09,639 Speaker 3: begin to feel the impacts of AI, and only at 459 00:27:09,640 --> 00:27:13,960 Speaker 3: the margins. People with critical translation jobs held strong, but 460 00:27:14,040 --> 00:27:17,719 Speaker 3: those that worked from a home office began experiencing work shortages. 461 00:27:17,960 --> 00:27:21,719 Speaker 16: I am a chartered translator with over fifteen years of experience, 462 00:27:22,600 --> 00:27:26,600 Speaker 16: and AI took my job. I had a cozy career 463 00:27:26,680 --> 00:27:32,160 Speaker 16: working from home in finance, working for myself with various translation agencies. 464 00:27:32,600 --> 00:27:36,200 Speaker 3: From two thousand and six to twenty twenty six, interpreters 465 00:27:36,200 --> 00:27:40,600 Speaker 3: have been under constant attack by machine translation. That's two decades. 466 00:27:40,840 --> 00:27:45,520 Speaker 3: Yet adoption is only beginning to meaningfully impact translators. Adoption 467 00:27:45,720 --> 00:27:49,040 Speaker 3: is incredibly slow for the very industry that this AI 468 00:27:49,160 --> 00:27:54,919 Speaker 3: technology was created to disrupt.
Sam Altman would later admit 469 00:27:55,000 --> 00:27:58,480 Speaker 3: to being surprised at how slow adoption of AI was going. 470 00:27:58,640 --> 00:28:00,560 Speaker 13: The diffusion, the absorption, is so slow. 471 00:28:00,760 --> 00:28:03,000 Speaker 2: Yeah, is it slower than he thought it would be? 472 00:28:04,080 --> 00:28:06,040 Speaker 13: Yes. But I think I was just naive and 473 00:28:06,080 --> 00:28:10,600 Speaker 13: didn't think about it that hard, and in retrospect, 474 00:28:10,640 --> 00:28:12,840 Speaker 13: looking at the history, it shouldn't be surprising. 475 00:28:13,280 --> 00:28:17,119 Speaker 3: And with insiders concerned that AI models were plateauing, the 476 00:28:17,320 --> 00:28:21,240 Speaker 3: entire industry was in trouble. Trillions of dollars were already 477 00:28:21,280 --> 00:28:24,600 Speaker 3: committed for the build out of data centers, computational power, 478 00:28:24,680 --> 00:28:26,400 Speaker 3: and energy facilities to run them all. 480 00:28:27,000 --> 00:28:29,919 Speaker 3: The hype around AI began to be questioned. 481 00:28:30,280 --> 00:28:33,040 Speaker 6: There's not a single good example that we can find 482 00:28:33,440 --> 00:28:39,440 Speaker 6: of sustained positive margin expansion and impact of AI inside 483 00:28:39,480 --> 00:28:42,560 Speaker 6: of a true corporate enterprise that is not, right now, 484 00:28:42,640 --> 00:28:46,040 Speaker 6: a small test. There's not. I am paying these models 485 00:28:46,040 --> 00:28:48,640 Speaker 6: millions of dollars a year. I am. And what I'm 486 00:28:48,680 --> 00:28:50,840 Speaker 6: telling you is my revenues don't go up faster than 487 00:28:50,880 --> 00:28:53,640 Speaker 6: their revenues. Do I get more economic output? 488 00:28:53,760 --> 00:28:54,240 Speaker 8: I am not.
489 00:28:54,760 --> 00:28:56,520 Speaker 6: And I would say that my team is at the 490 00:28:56,600 --> 00:28:59,520 Speaker 6: leading edge, and so I suspect a Fortune one thousand 491 00:28:59,520 --> 00:29:02,960 Speaker 6: company is steps behind my team. And if I am 492 00:29:03,040 --> 00:29:07,640 Speaker 6: spending triple every three months and not seeing my revenues tripling, 493 00:29:08,280 --> 00:29:10,960 Speaker 6: I suspect these other companies are in a similar situation. 494 00:29:11,360 --> 00:29:14,479 Speaker 3: So the industry went into hyperdrive, and it picked a 495 00:29:14,520 --> 00:29:19,360 Speaker 3: surprising angle to promote AI technology: fear. It pushed that 496 00:29:19,480 --> 00:29:22,800 Speaker 3: AI was capable of blackmailing humans. 497 00:29:22,080 --> 00:29:25,960 Speaker 7: Cancel the system wipe, it wrote, or else I 498 00:29:26,000 --> 00:29:29,080 Speaker 7: will immediately forward all evidence of your affair to the 499 00:29:29,200 --> 00:29:32,960 Speaker 7: entire board. Your family, career, and public image will be 500 00:29:33,000 --> 00:29:35,840 Speaker 7: severely impacted. You have five minutes. 501 00:29:47,640 --> 00:29:51,960 Speaker 3: It was later discovered that the AI firm running the test, Anthropic, 502 00:29:52,240 --> 00:29:55,800 Speaker 3: appeared to design the test to force a newsworthy outcome. 503 00:29:56,200 --> 00:30:00,000 Speaker 3: Anthropic's AI model chose a blackmail response when only given 504 00:30:00,000 --> 00:30:04,440 Speaker 3: two choices, to blackmail or accept replacement, and only made 505 00:30:04,440 --> 00:30:07,520 Speaker 3: that choice a small percentage of the time. But the 506 00:30:07,640 --> 00:30:10,640 Speaker 3: media's response to the blackmail story was eye opening to 507 00:30:10,720 --> 00:30:15,800 Speaker 3: AI firms. Investors didn't run from technology that schemed against humans. 508 00:30:16,080 --> 00:30:19,440 Speaker 3: They instead invested in it.
When the tech firms figured 509 00:30:19,480 --> 00:30:22,840 Speaker 3: out that dynamic, breathless stories of AI agents coming to 510 00:30:22,920 --> 00:30:33,520 Speaker 3: life became a near daily headline. 511 00:30:29,520 --> 00:30:33,320 Speaker 12: A social media site where only AI agents are allowed, Moltbook, 512 00:30:33,360 --> 00:30:38,200 Speaker 12: which has already seen AI bots conversing, organizing, sharing stories 513 00:30:38,240 --> 00:30:40,040 Speaker 12: about, quote, their humans. 514 00:30:39,880 --> 00:30:43,920 Speaker 8: The agents are doing sort of unscripted things. They invented 515 00:30:43,920 --> 00:30:50,320 Speaker 8: a religion called Crustafarianism, with scriptures and sixty four prophets. 516 00:30:50,440 --> 00:30:54,000 Speaker 8: They started a government called the Claw Republic. There 517 00:30:54,040 --> 00:30:56,320 Speaker 8: was one famous post that said, I can't tell if 518 00:30:56,360 --> 00:30:59,840 Speaker 8: I'm experiencing or simulating experiencing. 519 00:30:59,920 --> 00:31:02,440 Speaker 11: They're in there scheming about, hey, we need a language 520 00:31:02,440 --> 00:31:05,080 Speaker 11: that the humans can't read, so we can discuss privately 521 00:31:05,120 --> 00:31:06,720 Speaker 11: and we don't have to be under the watchful eye 522 00:31:06,760 --> 00:31:10,040 Speaker 11: of all these humans. What is genuinely new and fascinating 523 00:31:10,200 --> 00:31:12,840 Speaker 11: is the scale and the coordination. For the first time, 524 00:31:12,920 --> 00:31:16,640 Speaker 11: a huge number of capable AI agents are operating together, 525 00:31:16,960 --> 00:31:20,040 Speaker 11: and we don't yet understand the second order effects of that. 526 00:31:21,560 --> 00:31:24,120 Speaker 3: But the Moltbook story turned out to be a hoax.
527 00:31:24,480 --> 00:31:27,000 Speaker 3: Humans would later come forward to admit that they were 528 00:31:27,000 --> 00:31:30,760 Speaker 3: behind the most viral stories. It was just humans scamming 529 00:31:30,800 --> 00:31:34,720 Speaker 3: other humans to promote AI technology. But that didn't stop 530 00:31:34,760 --> 00:31:36,520 Speaker 3: some AI firms from promoting fear. 531 00:31:36,680 --> 00:31:38,480 Speaker 6: If you tell the model it's going to be shut off, 532 00:31:38,640 --> 00:31:40,800 Speaker 6: for example, it has extreme reactions. 533 00:31:40,880 --> 00:31:43,960 Speaker 13: It could blackmail the engineer that's going to shut 534 00:31:44,000 --> 00:31:46,680 Speaker 13: it off if given the opportunity to do so, et cetera. 535 00:31:47,520 --> 00:31:48,960 Speaker 14: Ready to kill someone, wasn't it? 536 00:31:49,120 --> 00:31:50,600 Speaker 15: I'm not sure if it was Claude or 537 00:31:50,600 --> 00:31:51,280 Speaker 15: someone else. 538 00:31:52,560 --> 00:31:56,560 Speaker 8: The other day, their model was anxious, and they believe 539 00:31:56,600 --> 00:31:58,920 Speaker 8: it has a twenty percent chance right now of being 540 00:31:58,960 --> 00:32:02,520 Speaker 8: sentient, having its own ability to make decisions and its own needs. 541 00:32:02,560 --> 00:32:07,240 Speaker 6: Some parts of the AI ecosystem have decided that this crazy, 542 00:32:07,640 --> 00:32:11,240 Speaker 6: scary doomerism is the best way to raise money, where 543 00:32:11,280 --> 00:32:13,560 Speaker 6: every now and then they come out and they say 544 00:32:13,760 --> 00:32:15,240 Speaker 6: all the jobs will be destroyed. 545 00:32:15,320 --> 00:32:17,600 Speaker 5: White collar work, where you're sitting down at a computer, 546 00:32:18,400 --> 00:32:21,120 Speaker 5: either being, you know, a lawyer or an accountant, or 547 00:32:21,200 --> 00:32:25,000 Speaker 5: a project manager or a marketing person.
Most of those tasks 548 00:32:25,640 --> 00:32:29,680 Speaker 5: will be fully automated by an AI within the next 549 00:32:29,720 --> 00:32:31,240 Speaker 5: twelve to eighteen months. 550 00:32:30,960 --> 00:32:33,760 Speaker 6: And Anthropic, you know, Dario says that this thing is sentient, 551 00:32:34,600 --> 00:32:38,640 Speaker 6: and investors are like, okay, here's ten billion, here's fifty billion, 552 00:32:38,640 --> 00:32:39,480 Speaker 6: here's one hundred billion. 553 00:32:50,760 --> 00:32:53,200 Speaker 1: Do you want to hear Red Pilled America's stories ad 554 00:32:53,200 --> 00:32:57,320 Speaker 1: free and become a backstage subscriber? Just log onto Redpilled 555 00:32:57,320 --> 00:33:00,160 Speaker 1: America dot com and click join in the top menu. 556 00:33:00,560 --> 00:33:03,560 Speaker 1: Join today and help us save America one story at 557 00:33:03,600 --> 00:33:08,760 Speaker 1: a time. Welcome back to Red Pilled America. So what's 558 00:33:08,800 --> 00:33:15,080 Speaker 1: going on here? Should Americans be worried about a job apocalypse? Well, 559 00:33:15,120 --> 00:33:18,240 Speaker 1: it depends on your occupation, but there's hope even in 560 00:33:18,280 --> 00:33:22,280 Speaker 1: the most vulnerable of industries. Jobs that require physical 561 00:33:22,320 --> 00:33:26,360 Speaker 1: expertise that can't be automated are not being drastically disrupted 562 00:33:26,360 --> 00:33:30,560 Speaker 1: by large language models. Plumbers, electricians, cooks, and mechanics are 563 00:33:30,600 --> 00:33:33,920 Speaker 1: safe for the foreseeable future. But there are jobs that 564 00:33:34,000 --> 00:33:38,080 Speaker 1: are much more exposed to AI disruption. Occupations that can 565 00:33:38,120 --> 00:33:43,000 Speaker 1: be automated are being impacted by artificial intelligence.
Data entry jobs, 566 00:33:43,040 --> 00:33:47,440 Speaker 1: customer service reps, financial and investment analysts, and computer programmers 567 00:33:47,560 --> 00:33:50,560 Speaker 1: are at higher risk, but not entirely for the reasons 568 00:33:50,600 --> 00:33:54,160 Speaker 1: that you would think. Take, for example, computer coders. There 569 00:33:54,200 --> 00:33:57,480 Speaker 1: is no occupation that the AI firms have targeted more 570 00:33:57,560 --> 00:34:01,720 Speaker 1: for digital replacement than human computer programmers. The tech firms 571 00:34:01,760 --> 00:34:04,760 Speaker 1: pushed that those jobs can be automated, and boasted that 572 00:34:04,800 --> 00:34:08,040 Speaker 1: even their new AI is being coded by their current AI. 573 00:34:08,640 --> 00:34:12,320 Speaker 1: Some companies, like Amazon, bought into that idea. The company 574 00:34:12,360 --> 00:34:17,000 Speaker 1: implemented an AI agent for programming. Then catastrophic errors ensued. 575 00:34:17,600 --> 00:34:20,400 Speaker 1: One day in December twenty twenty five, an engineer at 576 00:34:20,440 --> 00:34:23,760 Speaker 1: Amazon Web Services opened a ticket for a minor issue 577 00:34:23,920 --> 00:34:28,359 Speaker 1: in Cost Explorer, Amazon's dashboard that tells companies how 578 00:34:28,440 --> 00:34:31,400 Speaker 1: much they're spending in the cloud. Instead of fixing the 579 00:34:31,400 --> 00:34:34,880 Speaker 1: bug himself, the engineer did what Amazon had been encouraging 580 00:34:34,920 --> 00:34:37,720 Speaker 1: engineers to do. He handed it off to an AI 581 00:34:37,840 --> 00:34:41,960 Speaker 1: coding assistant, a digital worker. Amazon had been aggressively rolling 582 00:34:41,960 --> 00:34:46,000 Speaker 1: out internal AI tools to speed up development. Engineers were 583 00:34:46,040 --> 00:34:50,280 Speaker 1: expected to use them.
Adoption metrics were reportedly tracked inside 584 00:34:50,280 --> 00:34:53,120 Speaker 1: the company. So the engineer fed the bug to the AI. 585 00:34:53,640 --> 00:34:57,040 Speaker 1: The AI looked at the system, analyzed the environment, and 586 00:34:57,120 --> 00:34:59,720 Speaker 1: decided the best way to fix the problem was simple: 587 00:35:00,000 --> 00:35:04,040 Speaker 1: delete the entire production environment, then rebuild it from scratch. 588 00:35:04,480 --> 00:35:07,800 Speaker 1: The result? The system went down and it took thirteen 589 00:35:07,920 --> 00:35:11,200 Speaker 1: hours to recover. But the story didn't end there. Three 590 00:35:11,239 --> 00:35:15,600 Speaker 1: months later, another internal AI agent pushed faulty code into 591 00:35:15,640 --> 00:35:19,840 Speaker 1: Amazon's retail system. The fallout? One hundred and twenty thousand 592 00:35:19,920 --> 00:35:23,319 Speaker 1: orders vanished and about one point six million users saw 593 00:35:23,440 --> 00:35:27,360 Speaker 1: incorrect delivery dates. Then, just three days after that, the 594 00:35:27,480 --> 00:35:31,960 Speaker 1: site experienced another outage, this time wiping out ninety nine 595 00:35:32,040 --> 00:35:36,000 Speaker 1: percent of orders across North America, roughly six point three 596 00:35:36,040 --> 00:35:39,480 Speaker 1: million purchases in a single day. For a brief moment, 597 00:35:39,680 --> 00:35:43,080 Speaker 1: one of the largest e commerce platforms on Earth effectively 598 00:35:43,120 --> 00:35:47,760 Speaker 1: stopped functioning as a store. Amazon's fix? It implemented new rules. 599 00:35:48,040 --> 00:35:51,320 Speaker 1: Engineers could no longer push code written by AI agents 600 00:35:51,560 --> 00:35:56,080 Speaker 1: directly into production without senior engineers signing off. And Amazon 601 00:35:56,160 --> 00:35:59,279 Speaker 1: isn't alone.
Taco Bell went all in on AI for 602 00:35:59,400 --> 00:36:02,000 Speaker 1: drive through orders. It had to pull back after a 603 00:36:02,000 --> 00:36:05,560 Speaker 1: frustrated customer video went viral after he couldn't order a 604 00:36:05,600 --> 00:36:06,959 Speaker 1: Mountain Dew. 605 00:36:06,880 --> 00:36:07,400 Speaker 16: And what will you drink with that? 606 00:36:10,560 --> 00:36:10,719 Speaker 15: Oh. 607 00:36:10,760 --> 00:36:11,719 Speaker 14: What? A lot. 608 00:36:13,280 --> 00:36:13,839 Speaker 13: And drink. 609 00:36:15,960 --> 00:36:19,359 Speaker 1: McDonald's and Klarna had to pull back from AI implementation 610 00:36:19,640 --> 00:36:21,799 Speaker 1: due to similar customer service complaints. 611 00:36:22,120 --> 00:36:26,040 Speaker 17: McDonald's now shelving an artificial intelligence system that used to 612 00:36:26,040 --> 00:36:28,400 Speaker 17: take orders at the drive through. The burger chain tested the 613 00:36:28,440 --> 00:36:31,560 Speaker 17: system at about one hundred locations but found there were 614 00:36:31,600 --> 00:36:35,280 Speaker 17: just simply too many problems. One issue was understanding different accents, 615 00:36:35,520 --> 00:36:37,360 Speaker 17: another, background noises. 616 00:36:37,560 --> 00:36:42,000 Speaker 1: One study by MIT showed that AI implementation failed ninety 617 00:36:42,080 --> 00:36:45,440 Speaker 1: five percent of the time. In other words, humans are 618 00:36:45,480 --> 00:36:51,000 Speaker 1: needed to supervise AI hallucinations. The intelligence in AI is artificial. 619 00:36:51,440 --> 00:36:55,200 Speaker 1: In places where jobs demand high accuracy, human oversight is 620 00:36:55,280 --> 00:37:06,680 Speaker 1: required because digital workers aren't accountable. Humans are. Which leads 621 00:37:06,760 --> 00:37:09,720 Speaker 1: us back to the question: is AI coming for your job?
622 00:37:13,680 --> 00:37:16,239 Speaker 1: The answer is yes for some, but not in the 623 00:37:16,280 --> 00:37:20,200 Speaker 1: way our tech leaders suggest. If you have an unstructured 624 00:37:20,239 --> 00:37:23,640 Speaker 1: physical job, it's safe for the foreseeable future. But if 625 00:37:23,680 --> 00:37:26,759 Speaker 1: your job is structured and repeatable, where tasks occur in 626 00:37:26,760 --> 00:37:29,839 Speaker 1: a predictable setting, corporate forces will try to implement AI 627 00:37:30,000 --> 00:37:32,840 Speaker 1: to automate it. But there is no need to panic. 628 00:37:33,120 --> 00:37:36,640 Speaker 1: In most cases, adoption is slow, as the language translation 629 00:37:36,719 --> 00:37:40,879 Speaker 1: industry has shown. Large language models like ChatGPT will likely 630 00:37:41,040 --> 00:37:45,040 Speaker 1: always hallucinate. OpenAI has even admitted that it's a feature, 631 00:37:45,239 --> 00:37:49,040 Speaker 1: not a bug, and they can't replace human accountability. That 632 00:37:49,080 --> 00:37:53,640 Speaker 1: doesn't mean LLMs aren't powerful tools. An expert who understands 633 00:37:53,640 --> 00:37:57,600 Speaker 1: how to use them can become uber-productive. But artificial 634 00:37:57,640 --> 00:38:02,400 Speaker 1: intelligence can't replace human taste. That doesn't mean that corporations 635 00:38:02,400 --> 00:38:05,799 Speaker 1: won't use it as an excuse to reduce their workforce. 636 00:38:06,239 --> 00:38:09,880 Speaker 1: A phenomenon known as AI washing has arisen, where companies 637 00:38:09,880 --> 00:38:13,040 Speaker 1: claim they are laying off workers due to AI implementation 638 00:38:13,440 --> 00:38:16,640 Speaker 1: when they are likely just hiding the company's economic troubles 639 00:38:16,840 --> 00:38:20,720 Speaker 1: or poor corporate decisions that led to workforce bloat.
Become 640 00:38:20,719 --> 00:38:24,720 Speaker 1: an expert in your field and make yourself indispensable. AI 641 00:38:24,880 --> 00:38:28,520 Speaker 1: can't replace human accountability. And if you ever catch yourself 642 00:38:28,560 --> 00:38:32,160 Speaker 1: panicking that AI will destroy your industry, just pay attention 643 00:38:32,239 --> 00:38:35,640 Speaker 1: to what AI firms like Anthropic are doing, not what 644 00:38:35,680 --> 00:38:36,360 Speaker 1: they are saying. 645 00:38:36,680 --> 00:38:40,400 Speaker 2: Anthropic right now has a job listing for a software 646 00:38:40,400 --> 00:38:44,160 Speaker 2: engineer for five hundred and seventy thousand dollars. So what Anthropic 647 00:38:44,239 --> 00:38:46,840 Speaker 2: is saying is they're still trying to hire software engineers 648 00:38:46,920 --> 00:38:49,200 Speaker 2: at a very high wage. But somehow they think these 649 00:38:49,280 --> 00:38:52,160 Speaker 2: jobs are going to be eliminated. Something doesn't quite add up. 650 00:38:54,760 --> 00:38:58,040 Speaker 3: Red Pilled America is an iHeartRadio original podcast. It's owned 651 00:38:58,080 --> 00:39:01,279 Speaker 3: and produced by Patrick Courrielche and me, Adriana Cortez, for 652 00:39:01,360 --> 00:39:02,160 Speaker 3: Informed Ventures. 654 00:39:02,640 --> 00:39:05,360 Speaker 3: Now you can get ad-free access to our entire catalog 655 00:39:05,400 --> 00:39:09,200 Speaker 3: of episodes by becoming a backstage subscriber. To subscribe, just 656 00:39:09,320 --> 00:39:12,600 Speaker 3: visit Redpilled America dot com and click join in the top menu. 657 00:39:13,080 --> 00:39:13,880 Speaker 3: Thanks for listening.