1 00:00:03,160 --> 00:00:07,240 Speaker 1: This is Red Pilled America. A quick question before we 2 00:00:07,280 --> 00:00:10,000 Speaker 1: start the show. How many shows are there out there 3 00:00:10,039 --> 00:00:12,959 Speaker 1: like Red Pilled America? You know the answer: it's zero. 4 00:00:13,160 --> 00:00:16,359 Speaker 1: And why? Because it's hard to produce a storytelling show. 5 00:00:16,880 --> 00:00:20,680 Speaker 1: Join the fam-bam and support storytelling that aligns with your values. 6 00:00:21,040 --> 00:00:24,000 Speaker 1: Just go to Redpilled America dot com and click join 7 00:00:24,079 --> 00:00:26,639 Speaker 1: in the top menu. You'll get ad-free access to 8 00:00:26,720 --> 00:00:30,480 Speaker 1: our entire back catalog of episodes. Help us save America 9 00:00:30,520 --> 00:00:35,360 Speaker 1: one story at a time. Previously on Red Pilled America. 10 00:00:35,479 --> 00:00:38,520 Speaker 2: Artificial intelligence would be the ultimate version of Google. 11 00:00:40,720 --> 00:00:44,160 Speaker 3: AI could wipe out half of all entry level white 12 00:00:44,200 --> 00:00:44,960 Speaker 3: collar jobs. 13 00:00:45,000 --> 00:00:47,920 Speaker 4: Google Translate got started in two thousand and one. Google Translate, 14 00:00:47,960 --> 00:00:50,800 Speaker 1: I think, messed up pretty badly sometimes. Engineers 15 00:00:50,840 --> 00:00:54,600 Speaker 1: began asking a difficult question: why, after years of improvements, 16 00:00:54,840 --> 00:00:58,520 Speaker 1: was this system still struggling? The problem was context. 17 00:00:58,640 --> 00:01:02,600 Speaker 5: Solving that problem required a fundamentally new architecture. They called 18 00:01:02,640 --> 00:01:04,520 Speaker 5: their new design a transformer. 19 00:01:04,680 --> 00:01:08,280 Speaker 4: It's a generic system and you give it data and 20 00:01:08,360 --> 00:01:09,520 Speaker 4: it learns to translate.
21 00:01:09,600 --> 00:01:12,440 Speaker 1: What if instead of asking this system to predict a translation, 22 00:01:12,720 --> 00:01:16,080 Speaker 1: they simply asked it to generate text? This idea would 23 00:01:16,120 --> 00:01:21,200 Speaker 1: eventually launch the AI gold rush. 24 00:01:21,360 --> 00:01:24,840 Speaker 5: I'm Patrick Courrielche and I'm Adryana Cortez. 25 00:01:24,520 --> 00:01:27,680 Speaker 1: And this is Red Pilled America, a storytelling show. 26 00:01:28,760 --> 00:01:31,240 Speaker 5: This is not another talk show covering the day's news. 27 00:01:31,680 --> 00:01:33,440 Speaker 5: We're all about telling stories. 28 00:01:34,120 --> 00:01:34,520 Speaker 6: Stories. 29 00:01:34,600 --> 00:01:37,240 Speaker 1: Hollywood doesn't want you to hear stories. 30 00:01:37,280 --> 00:01:41,840 Speaker 5: The media mocks stories about everyday Americans that the globalists ignore. 31 00:01:42,760 --> 00:01:45,679 Speaker 1: You can think of Red Pilled America as audio documentaries, 32 00:01:45,720 --> 00:01:55,200 Speaker 1: and we promise only one thing, the truth. Welcome to 33 00:01:55,280 --> 00:02:09,000 Speaker 1: Red Pilled America. It was October twenty fourteen when tech 34 00:02:09,040 --> 00:02:13,160 Speaker 1: billionaire Elon Musk began sounding the alarm about artificial intelligence. 35 00:02:13,160 --> 00:02:16,200 Speaker 2: We are summoning the demon. You know all those stories 36 00:02:16,240 --> 00:02:18,560 Speaker 2: where there's the guy with the pentagram and the holy 37 00:02:18,600 --> 00:02:20,320 Speaker 2: water and he's like, yeah, he's sure he can control 38 00:02:20,360 --> 00:02:22,519 Speaker 2: the demon? Didn't work out. 39 00:02:23,120 --> 00:02:27,240 Speaker 1: Publicly, his concern about AI was general, but privately that 40 00:02:27,400 --> 00:02:32,119 Speaker 1: fear was far more specific.
Elon was worried artificial superintelligence 41 00:02:32,160 --> 00:02:34,720 Speaker 1: would fall into the hands of the most powerful company 42 00:02:34,760 --> 00:02:38,720 Speaker 1: in existence, Google, and that concern reached an apex when 43 00:02:38,760 --> 00:02:42,440 Speaker 1: Google introduced an entirely new form of AI. We're at 44 00:02:42,480 --> 00:02:45,560 Speaker 1: part two of our series of episodes entitled Artificial. We're 45 00:02:45,560 --> 00:02:48,000 Speaker 1: looking for the answer to the question, is AI coming 46 00:02:48,000 --> 00:02:50,560 Speaker 1: for your job, by telling the story behind the rise 47 00:02:50,600 --> 00:02:54,359 Speaker 1: of AI agents. 48 00:02:55,240 --> 00:02:56,560 Speaker 5: So to pick up where we left off in our 49 00:02:56,639 --> 00:03:00,919 Speaker 5: last episode: in June twenty seventeen, Google's AI team published 50 00:03:00,960 --> 00:03:04,760 Speaker 5: a monumental research paper entitled Attention Is All You Need. 51 00:03:05,120 --> 00:03:07,840 Speaker 5: In it, they introduced a new way to automate language 52 00:03:07,840 --> 00:03:11,360 Speaker 5: translation by devising a model for a computer to deeply 53 00:03:11,400 --> 00:03:16,080 Speaker 5: evaluate context. They called their new computer architecture a transformer. 54 00:03:16,720 --> 00:03:21,440 Speaker 5: Deciphering context is a necessary element in language translation. Prior 55 00:03:21,520 --> 00:03:26,000 Speaker 5: to Google's paper, translation software fumbled with this task, sometimes 56 00:03:26,160 --> 00:03:30,320 Speaker 5: laughably so. But with Google's new transformer, context could be 57 00:03:30,400 --> 00:03:35,520 Speaker 5: calculated, and it dramatically improved language translation, solving a core problem 58 00:03:35,560 --> 00:03:38,960 Speaker 5: for a company whose mission was to organize the world's information.
59 00:03:42,120 --> 00:03:46,520 Speaker 5: This new transformer architecture was viewed as an important technological 60 00:03:46,560 --> 00:03:50,720 Speaker 5: advancement designed to solve a specific engineering problem for Google: 61 00:03:51,000 --> 00:03:54,600 Speaker 5: how to make language translation faster and more accurate. But 62 00:03:54,680 --> 00:03:58,840 Speaker 5: inside one of Google's AI labs, Google Brain researchers began 63 00:03:58,920 --> 00:04:01,800 Speaker 5: wondering if this transformer could be used for a far 64 00:04:01,920 --> 00:04:06,800 Speaker 5: broader task. At its core, the transformer computer architecture was built to 65 00:04:06,880 --> 00:04:11,160 Speaker 5: identify patterns from enormous amounts of text. This raised an 66 00:04:11,160 --> 00:04:15,720 Speaker 5: intriguing question. Instead of asking the transformer to predict a translation, 67 00:04:16,240 --> 00:04:19,280 Speaker 5: what if it was simply asked to predict a response? 68 00:04:19,880 --> 00:04:23,320 Speaker 5: A Google AI researcher at the time, Łukasz Kaiser, would 69 00:04:23,400 --> 00:04:25,440 Speaker 5: later describe this aha moment. 70 00:04:28,440 --> 00:04:31,320 Speaker 4: Imagine if we just trained it to generate text, to 71 00:04:31,400 --> 00:04:32,480 Speaker 4: write something. 72 00:04:32,600 --> 00:04:35,599 Speaker 5: Instead of asking it to predict a translation, they thought, 73 00:04:35,839 --> 00:04:37,640 Speaker 5: let's see if we could just get it to predict 74 00:04:37,640 --> 00:04:42,360 Speaker 5: a response to any input. So the team decided to 75 00:04:42,400 --> 00:04:46,240 Speaker 5: try an experiment. Instead of feeding the model translation data, 76 00:04:46,279 --> 00:04:49,839 Speaker 5: they trained it using something very different: thousands upon thousands 77 00:04:49,839 --> 00:04:53,600 Speaker 5: of Wikipedia articles.
The system learned the nature of their content, 78 00:04:53,760 --> 00:04:57,640 Speaker 5: their formatting, their patterns of information. It absorbed how these 79 00:04:57,760 --> 00:05:01,839 Speaker 5: encyclopedia style entries were written. Then, after all the training, 80 00:05:02,040 --> 00:05:06,120 Speaker 5: the researchers entered two words into the system: the transformer. 81 00:05:06,440 --> 00:05:08,760 Speaker 5: They hit enter and waited to see what the machine 82 00:05:08,800 --> 00:05:13,479 Speaker 5: would produce. And what happened next stunned them. The system 83 00:05:13,520 --> 00:05:17,640 Speaker 5: didn't just output fragments of text. It generated a coherent, 84 00:05:17,800 --> 00:05:22,080 Speaker 5: complete narrative. Łukasz Kaiser recalled what their language model kicked out. 85 00:05:22,320 --> 00:05:25,440 Speaker 4: The Transformer are a Japanese hardcore punk band born from 86 00:05:25,520 --> 00:05:26,640 Speaker 4: the heavy metal revolution. 87 00:05:27,080 --> 00:05:30,039 Speaker 5: The model responded by writing an entry about a Japanese 88 00:05:30,120 --> 00:05:32,600 Speaker 5: band named the Transformer and its early years. 89 00:05:32,640 --> 00:05:34,760 Speaker 4: The band was formed in sixty eight, during the height 90 00:05:34,800 --> 00:05:38,560 Speaker 4: of Japanese music history. Then it goes on, and, you know, 91 00:05:38,760 --> 00:05:40,440 Speaker 4: it generates history. 92 00:05:40,160 --> 00:05:43,320 Speaker 5: Because its training data, Wikipedia articles, also displayed the 93 00:05:43,360 --> 00:05:44,200 Speaker 5: history of a topic. 94 00:05:44,520 --> 00:05:47,240 Speaker 4: In eighty one, the bassist Mitche O'Connor and the members 95 00:05:47,279 --> 00:05:48,840 Speaker 4: of the original lineup emerged. 96 00:05:49,279 --> 00:05:49,440 Speaker 7: Right.
97 00:05:49,520 --> 00:05:52,000 Speaker 4: The interesting part is it makes up some names, and 98 00:05:52,040 --> 00:05:54,600 Speaker 4: then we went to Wikipedia and checked. These people don't exist. 99 00:05:55,800 --> 00:05:58,680 Speaker 4: Neither does this band or anything like this. It's all 100 00:05:58,760 --> 00:05:59,760 Speaker 4: totally made up. 101 00:06:01,200 --> 00:06:04,360 Speaker 5: The language model didn't retrieve stored answers. It didn't search 102 00:06:04,440 --> 00:06:08,400 Speaker 5: a database and copy and paste memorized responses. Instead, 103 00:06:08,400 --> 00:06:11,719 Speaker 5: it was predicting a response to the prompt the Transformer, 104 00:06:12,279 --> 00:06:15,120 Speaker 5: like when your phone or word processor sometimes will suggest 105 00:06:15,160 --> 00:06:19,360 Speaker 5: the next word as you type. Google's language model expanded 106 00:06:19,400 --> 00:06:23,120 Speaker 5: on that concept by predicting the entire next passage. It 107 00:06:23,200 --> 00:06:26,919 Speaker 5: created a lore for the band, describing its history, its members, 108 00:06:26,960 --> 00:06:32,159 Speaker 5: its influence, its timeline. The writing followed recognizable Wikipedia patterns. 109 00:06:32,360 --> 00:06:34,640 Speaker 5: It sounded plausible, but none of it was real. 110 00:06:34,800 --> 00:06:37,760 Speaker 4: So this is totally made up by your language model. 111 00:06:38,040 --> 00:06:41,000 Speaker 5: The band didn't exist, the band members didn't exist. The 112 00:06:41,040 --> 00:06:45,560 Speaker 5: events it typed never happened. The system had completely fabricated everything, 113 00:06:45,920 --> 00:06:48,919 Speaker 5: yet it had done so in a structured, believable way.
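The autocomplete comparison in the narration can be made concrete with a toy sketch. What follows is a minimal bigram model in Python, an illustrative stand-in for the "predict the next word" idea only, not Google's transformer; the tiny corpus and the greedy `generate` helper are invented for this example:

```python
from collections import defaultdict, Counter

# Toy next-token prediction: count which word follows which in a tiny corpus,
# then repeatedly emit the most likely continuation, like phone autocomplete.
corpus = (
    "the transformer is a japanese band . "
    "the band is a japanese band . "
    "the band was formed ."
).split()

# For each word, tally the words observed immediately after it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(prompt, max_tokens=8):
    """Greedily predict one token at a time from the last word seen."""
    out = prompt.split()
    for _ in range(max_tokens):
        candidates = follows.get(out[-1])
        if not candidates:
            break  # no continuation was ever observed for this word
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))  # → the band . the band . the band .
```

A real transformer conditions on the whole preceding context rather than just the last word, which is exactly the "deeply evaluate context" improvement the episode describes; the loop structure of predict, append, repeat is the same.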
114 00:06:49,279 --> 00:06:52,880 Speaker 5: It had learned to imitate human writing, not because it 115 00:06:52,960 --> 00:06:56,760 Speaker 5: understood language or knew facts, but because it had identified 116 00:06:56,800 --> 00:07:01,520 Speaker 5: statistical patterns through its Wikipedia training, then predicted what words 117 00:07:01,600 --> 00:07:08,920 Speaker 5: should come next. When Google's AI team ran the experiment again, 118 00:07:09,240 --> 00:07:11,600 Speaker 5: the system produced a different response. 119 00:07:11,840 --> 00:07:14,760 Speaker 4: The Transformer is a book by British illuminatis Herman Wearehead, 120 00:07:15,000 --> 00:07:17,240 Speaker 4: set in a post-apocalyptic world that borders on a 121 00:07:17,280 --> 00:07:19,440 Speaker 4: mysterious alien world known as the Transformer planet. 122 00:07:19,640 --> 00:07:22,600 Speaker 5: This time it described The Transformer as a book. They wrote 123 00:07:22,640 --> 00:07:25,480 Speaker 5: a summary for it because that's what Wikipedia articles for 124 00:07:25,560 --> 00:07:26,120 Speaker 5: books had. 125 00:07:26,320 --> 00:07:29,600 Speaker 4: What's interesting, it makes a quote from the book. It 126 00:07:29,720 --> 00:07:33,520 Speaker 4: actually learned that inside Wikipedia pages about books you have 127 00:07:33,640 --> 00:07:34,720 Speaker 4: quotes from the books. 128 00:07:34,920 --> 00:07:39,080 Speaker 5: Every detail was fabricated. The language model was hallucinating, but 129 00:07:39,160 --> 00:07:42,320 Speaker 5: the structure of its writing was convincing. The machine was 130 00:07:42,360 --> 00:07:44,840 Speaker 5: behaving like a student who had studied the format of 131 00:07:44,960 --> 00:07:48,680 Speaker 5: encyclopedia entries and was now guessing to fill in the blanks. 132 00:07:49,080 --> 00:07:52,800 Speaker 5: The researchers quickly realized an important detail.
They had trained 133 00:07:52,800 --> 00:07:55,880 Speaker 5: the model using relatively little data over a short period 134 00:07:55,920 --> 00:07:59,640 Speaker 5: of time and with relatively small computing power. They wondered: what 135 00:07:59,720 --> 00:08:02,880 Speaker 5: if they scaled everything up? So they fed the language 136 00:08:02,880 --> 00:08:06,720 Speaker 5: model more data, they increased its computing resources, and extended 137 00:08:06,720 --> 00:08:11,520 Speaker 5: its training time, and to their surprise, the system's responses improved. 138 00:08:11,840 --> 00:08:14,880 Speaker 4: They, like, learned to make up poems in the middle 139 00:08:14,960 --> 00:08:16,680 Speaker 4: of articles about lyrics. 140 00:08:16,960 --> 00:08:20,040 Speaker 5: The system was giving even more believable answers. Like a 141 00:08:20,080 --> 00:08:23,000 Speaker 5: student spending more time in class, reading more, and given 142 00:08:23,080 --> 00:08:26,200 Speaker 5: more tools to process the information, the model was giving 143 00:08:26,240 --> 00:08:28,040 Speaker 5: better responses. 144 00:08:27,680 --> 00:08:30,520 Speaker 4: And they keep consistency. If it's a Japanese band, it 145 00:08:30,600 --> 00:08:32,760 Speaker 4: keeps talking with Japanese names about Japan. 146 00:08:33,120 --> 00:08:36,360 Speaker 5: The responses were still fiction, but with increased data and 147 00:08:36,400 --> 00:08:41,000 Speaker 5: computing power, its writing became more nuanced and far more convincing. 148 00:08:43,280 --> 00:08:46,520 Speaker 5: Google shared their findings with the world's small community of 149 00:08:46,559 --> 00:08:47,599 Speaker 5: AI researchers. 150 00:08:47,840 --> 00:08:49,720 Speaker 4: So I mean, I want to give you the feeling 151 00:08:49,800 --> 00:08:53,079 Speaker 4: that this is something exciting. This is not something computers 152 00:08:53,080 --> 00:08:55,880 Speaker 4: were able to do like a few years ago.
153 00:08:56,240 --> 00:08:59,280 Speaker 5: The Google AI team knew they were onto something. Their 154 00:08:59,280 --> 00:09:03,480 Speaker 5: transformer architecture was not just a better translation engine. It 155 00:09:03,559 --> 00:09:09,240 Speaker 5: was a fundamentally new kind of language machine, one capable 156 00:09:09,280 --> 00:09:12,840 Speaker 5: of generating text on virtually any topic, and it caught 157 00:09:12,880 --> 00:09:16,080 Speaker 5: the attention of a very small group of Silicon Valley stars, 158 00:09:16,240 --> 00:09:21,359 Speaker 5: who immediately recognized its potential power, not just for generating translations, 159 00:09:21,559 --> 00:09:26,320 Speaker 5: but for everything: for answering questions, solving equations, creating stories, 160 00:09:26,360 --> 00:09:31,760 Speaker 5: writing code, and perhaps someday, simulating human conversation. This small 161 00:09:31,800 --> 00:09:35,120 Speaker 5: group of Silicon Valley insiders had been quietly plotting to 162 00:09:35,160 --> 00:09:38,680 Speaker 5: beat Google in the race to create a superintelligent computer, 163 00:09:39,080 --> 00:09:42,160 Speaker 5: and now they believed the search behemoth had created the 164 00:09:42,200 --> 00:09:46,680 Speaker 5: foundation of a new kind of artificial intelligence. This group 165 00:09:46,920 --> 00:09:50,240 Speaker 5: was led by Elon Musk, and their rebellion against Google 166 00:09:50,280 --> 00:09:53,200 Speaker 5: started, surprisingly enough, during an adult sleepover. 167 00:09:52,880 --> 00:09:55,320 Speaker 2: Larry Page and I used to be close friends. 168 00:09:56,640 --> 00:09:59,520 Speaker 1: That's Elon Musk, describing the status of his friendship with 169 00:09:59,559 --> 00:10:03,720 Speaker 1: Google co founder Larry Page in twenty fourteen. Elon and 170 00:10:03,800 --> 00:10:06,720 Speaker 1: Larry had been pals since the late nineteen nineties.
171 00:10:06,800 --> 00:10:09,719 Speaker 2: I'd stay at his house in Palo Alto, and I 172 00:10:09,760 --> 00:10:12,400 Speaker 2: talked to him late into the night about AI safety. 173 00:10:12,200 --> 00:10:15,160 Speaker 1: And on one of these nights, Elon began sensing something 174 00:10:15,200 --> 00:10:18,160 Speaker 1: alarming coming from his friend Larry. 175 00:10:18,160 --> 00:10:22,000 Speaker 2: At least my perception was that Larry was not taking AI 176 00:10:22,000 --> 00:10:23,480 Speaker 2: safety seriously enough. 177 00:10:23,640 --> 00:10:27,280 Speaker 1: According to Elon, this was a massive problem because Google 178 00:10:27,320 --> 00:10:28,880 Speaker 1: had big AI ambitions. 179 00:10:29,120 --> 00:10:32,400 Speaker 2: He's made many public statements over the years that the 180 00:10:32,400 --> 00:10:36,000 Speaker 2: whole goal of Google is what's called AGI, artificial general 181 00:10:36,000 --> 00:10:37,880 Speaker 2: intelligence, or artificial superintelligence. 182 00:10:38,120 --> 00:10:41,920 Speaker 1: As we've explained earlier in this series, artificial general intelligence, 183 00:10:42,000 --> 00:10:45,520 Speaker 1: or AGI, is a theoretical AI system that is as smart 184 00:10:45,600 --> 00:10:49,040 Speaker 1: as or smarter than humans in every way, and most critically, 185 00:10:49,200 --> 00:10:53,640 Speaker 1: AGI can act on its own without human supervision. At 186 00:10:53,679 --> 00:10:56,640 Speaker 1: the time of this late night discussion, Google had recently 187 00:10:56,679 --> 00:11:01,240 Speaker 1: acquired DeepMind Technologies, a UK firm dedicated to developing 188 00:11:01,360 --> 00:11:06,040 Speaker 1: artificial intelligence. The co-founder of DeepMind, Demis Hassabis, had 189 00:11:06,080 --> 00:11:10,120 Speaker 1: long expressed the desire to achieve superintelligent AGI.
What kind 190 00:11:10,160 --> 00:11:13,000 Speaker 1: of interim goals might we expect to see on the 191 00:11:13,040 --> 00:11:14,599 Speaker 1: path towards AGI? 192 00:11:14,760 --> 00:11:16,959 Speaker 3: And sort of, you know, a related question is how can 193 00:11:16,960 --> 00:11:19,679 Speaker 3: we measure whether we're making progress towards this path? 194 00:11:19,960 --> 00:11:22,160 Speaker 4: And you know, what is it we can do to 195 00:11:22,280 --> 00:11:22,880 Speaker 4: kind of aid that? 196 00:11:23,480 --> 00:11:26,440 Speaker 1: In buying DeepMind, Google didn't just acquire any old 197 00:11:26,480 --> 00:11:30,400 Speaker 1: technology company. DeepMind was a pioneer in an AI 198 00:11:30,600 --> 00:11:35,120 Speaker 1: architecture known as reinforcement learning, or RL. RL is a 199 00:11:35,160 --> 00:11:38,280 Speaker 1: way of training artificial intelligence by letting it learn from 200 00:11:38,320 --> 00:11:41,720 Speaker 1: its environment through trial and error. DeepMind had done 201 00:11:41,760 --> 00:11:45,840 Speaker 1: something remarkable. They trained their AI system to play dozens 202 00:11:45,880 --> 00:11:50,360 Speaker 1: of Atari video games very quickly. It mastered classics like 203 00:11:50,440 --> 00:11:54,000 Speaker 1: Space Invaders, Breakout and Pong. And here's the kicker: it 204 00:11:54,120 --> 00:11:58,439 Speaker 1: mastered them without being told the rules. This was revolutionary. 205 00:11:59,000 --> 00:12:02,640 Speaker 1: With the acquisition of DeepMind, Google had reportedly secured 206 00:12:03,040 --> 00:12:06,360 Speaker 1: two thirds of all AI researchers in the world. To 207 00:12:06,440 --> 00:12:10,080 Speaker 1: add to their near monopoly, Google had unlimited funding and 208 00:12:10,280 --> 00:12:13,920 Speaker 1: massive computational power, more than any company in the history 209 00:12:13,960 --> 00:12:17,560 Speaker 1: of the world.
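The trial-and-error learning described above can be sketched with the simplest form of RL: tabular Q-learning on a toy five-cell corridor. This is only an illustration of the concept; DeepMind's Atari system combined updates like these with a deep neural network reading raw pixels, and every detail below (the corridor, the reward, the constants) is an invented example:

```python
import random

# Minimal reinforcement-learning sketch: an agent in a 5-cell corridor learns,
# purely by trial and error, that walking right reaches the reward at cell 4.
random.seed(0)

N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                       # move left or move right
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # value estimate per (state, action)
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration

for episode in range(200):
    state = 0
    while state != GOAL:
        # Explore sometimes; otherwise exploit what has been learned so far.
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] > Q[state][1] else 1
        nxt = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if nxt == GOAL else 0.0
        # Q-learning update: nudge the estimate toward reward + discounted future.
        Q[state][a] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][a])
        state = nxt

policy = [0 if q[0] > q[1] else 1 for q in Q[:GOAL]]
print(policy)  # → [1, 1, 1, 1]: always move right
```

Notice that nothing tells the agent the "rules" of the corridor; it discovers the winning behavior from rewards alone, which is the same principle behind mastering Atari games without being told how they work.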
This concerned many in Silicon Valley, including 210 00:12:17,600 --> 00:12:18,679 Speaker 1: Elon Musk. 211 00:12:18,640 --> 00:12:22,320 Speaker 2: And the guy in charge, Larry Page, did not 212 00:12:22,400 --> 00:12:24,720 Speaker 2: care about safety, and even yelled at me and called 213 00:12:24,720 --> 00:12:26,000 Speaker 2: me a speciesist for being pro-human. 214 00:12:26,240 --> 00:12:29,560 Speaker 1: As their heated discussion progressed, it became clear to Elon 215 00:12:29,679 --> 00:12:32,000 Speaker 1: that the world had a major problem on its hands. 216 00:12:32,200 --> 00:12:36,840 Speaker 2: He really seemed to want sort of a digital superintelligence, 217 00:12:36,880 --> 00:12:40,720 Speaker 2: basically a digital god, if you will, as soon as possible. 218 00:12:40,360 --> 00:12:43,880 Speaker 1: So Elon began publicly voicing his fears about the development 219 00:12:43,920 --> 00:12:44,600 Speaker 1: of AGI. 220 00:12:44,760 --> 00:12:47,599 Speaker 2: I think the danger of AI is much greater than the 221 00:12:47,920 --> 00:12:51,240 Speaker 2: danger of nuclear warheads by a lot. Mark my words: 222 00:12:51,640 --> 00:12:55,120 Speaker 2: AI is far more dangerous than nukes. Far. 223 00:12:55,720 --> 00:12:59,680 Speaker 1: Elon's fear was that if Google's DeepMind team achieved AGI, 224 00:13:00,000 --> 00:13:03,400 Speaker 1: the single most powerful technology company that ever existed 225 00:13:03,720 --> 00:13:08,040 Speaker 1: would control a superintelligence. That was alarming enough, but what 226 00:13:08,080 --> 00:13:10,680 Speaker 1: if an evil force could someday take control of it 227 00:13:10,720 --> 00:13:13,880 Speaker 1: from Google? Elon would later expand on this concern.
228 00:13:14,120 --> 00:13:16,600 Speaker 2: I think we must have democratization of AI technology, make 229 00:13:16,600 --> 00:13:20,520 Speaker 2: it widely available, meaning that no one company or a 230 00:13:20,520 --> 00:13:23,840 Speaker 2: small set of individuals has control over advanced AI technology. 231 00:13:24,120 --> 00:13:26,680 Speaker 2: That's very dangerous. Somebody could take it from them and 232 00:13:26,760 --> 00:13:28,439 Speaker 2: use it in a way that's bad. That I think 233 00:13:28,520 --> 00:13:30,520 Speaker 2: is quite a big danger. It just becomes a very 234 00:13:30,559 --> 00:13:33,959 Speaker 2: unstable situation. I think if you've got any incredibly powerful AI, 235 00:13:34,280 --> 00:13:36,280 Speaker 2: you just don't know who's going to control that. 236 00:13:36,760 --> 00:13:39,520 Speaker 1: So Elon started reaching out to a select group of 237 00:13:39,559 --> 00:13:43,040 Speaker 1: Silicon Valley insiders, and what happened next would mark the 238 00:13:43,080 --> 00:13:44,800 Speaker 1: beginning of a tech cold war. 239 00:13:53,040 --> 00:13:56,120 Speaker 5: Ruff Greens has given our dogs new life. We've got 240 00:13:56,160 --> 00:13:59,520 Speaker 5: three very different dogs in our house: Pablo, our English bulldog, 241 00:13:59,559 --> 00:14:02,679 Speaker 5: Willow, our bullmastiff, and Daisy, our tiny but mighty Chihuahua. 242 00:14:03,080 --> 00:14:05,760 Speaker 5: So trust me when I say we notice when something 243 00:14:05,920 --> 00:14:06,800 Speaker 5: actually works. 244 00:14:07,000 --> 00:14:11,319 Speaker 1: Pablo has always struggled with skin issues. We tried switching foods, supplements, 245 00:14:11,520 --> 00:14:14,439 Speaker 1: you name it, and nothing really stuck. But after adding 246 00:14:14,520 --> 00:14:17,319 Speaker 1: Ruff Greens to his meals, his skin finally calmed down.
247 00:14:17,559 --> 00:14:21,120 Speaker 1: Less irritation, less scratching, and honestly, he just looks more 248 00:14:21,120 --> 00:14:22,320 Speaker 1: comfortable in his own skin. 249 00:14:22,640 --> 00:14:25,280 Speaker 5: Then there's Willow. She's our big girl, and for a 250 00:14:25,320 --> 00:14:29,440 Speaker 5: while she just seemed tired, slowing down, not as excited 251 00:14:29,440 --> 00:14:32,760 Speaker 5: about walks or playtime. Once we added Ruff Greens, it 252 00:14:32,840 --> 00:14:36,520 Speaker 5: was like she got her spark back, more energy, more enthusiasm. 253 00:14:36,880 --> 00:14:39,920 Speaker 5: It genuinely gave her new life. And Daisy, she just 254 00:14:39,960 --> 00:14:40,680 Speaker 5: thrives on it. 255 00:14:41,000 --> 00:14:43,000 Speaker 1: What I love is that Ruff Greens isn't dog food. 256 00:14:43,160 --> 00:14:47,040 Speaker 1: It's a live nutritional supplement packed with vitamins, minerals, probiotics, 257 00:14:47,080 --> 00:14:50,320 Speaker 1: digestive enzymes, and omega oils. You don't have to change 258 00:14:50,320 --> 00:14:52,400 Speaker 1: your dog's food, you just add it. 259 00:14:52,480 --> 00:14:54,960 Speaker 5: I don't just recommend Ruff Greens, I depend on it 260 00:14:54,960 --> 00:14:56,840 Speaker 5: to keep my dogs happy and healthy. 261 00:14:57,000 --> 00:14:59,800 Speaker 1: Don't change your dog's food, just add Ruff Greens. Ruff 262 00:14:59,800 --> 00:15:03,200 Speaker 1: Greens is offering a free Jumpstart trial bag, you 263 00:15:03,200 --> 00:15:03,840 Speaker 1: just cover shipping. 264 00:15:04,240 --> 00:15:08,920 Speaker 5: Use discount code RPA to claim your free Jumpstart trial bag 265 00:15:09,000 --> 00:15:15,120 Speaker 5: at Ruffgreens dot com. That's Ruffgreens dot com, promo code RPA. 266 00:15:15,480 --> 00:15:18,960 Speaker 5: So don't change your dog's food, just add Ruff Greens and 267 00:15:19,040 --> 00:15:20,840 Speaker 5: watch the health benefits come alive.
268 00:15:23,240 --> 00:15:31,520 Speaker 1: Welcome back to Red Pilled America. So in twenty fourteen, 269 00:15:31,680 --> 00:15:34,680 Speaker 1: Elon Musk became concerned when the co founder of Google 270 00:15:34,840 --> 00:15:39,320 Speaker 1: expressed little concern for AI safety. Google was the most 271 00:15:39,360 --> 00:15:42,160 Speaker 1: powerful tech firm to have ever existed, and it was 272 00:15:42,200 --> 00:15:47,840 Speaker 1: aggressively pursuing artificial superintelligence. In Elon's eyes, if that technology 273 00:15:47,840 --> 00:15:50,520 Speaker 1: fell into the wrong hands, the stability of the world 274 00:15:50,560 --> 00:15:54,080 Speaker 1: would be at risk. So he began contacting Silicon Valley 275 00:15:54,120 --> 00:15:58,040 Speaker 1: insiders he suspected had similar concerns. One was a man 276 00:15:58,120 --> 00:16:04,760 Speaker 1: named Sam Altman. Altman was the president of a tech 277 00:16:04,840 --> 00:16:08,880 Speaker 1: startup incubator named Y Combinator. He explained the company in 278 00:16:08,880 --> 00:16:09,880 Speaker 1: twenty fifteen. 279 00:16:10,160 --> 00:16:12,280 Speaker 8: So we fund startups, but not only that; mostly 280 00:16:12,320 --> 00:16:14,640 Speaker 8: we sort of look at our role as enabling as 281 00:16:14,720 --> 00:16:17,440 Speaker 8: much innovation as we can in the world. Startups are a 282 00:16:17,480 --> 00:16:19,240 Speaker 8: really good way to do that. We fund about two 283 00:16:19,320 --> 00:16:21,760 Speaker 8: hundred and fifty startups per year in our core program, 284 00:16:21,840 --> 00:16:23,800 Speaker 8: and now more than two thousand people that we funded. 285 00:16:24,160 --> 00:16:26,560 Speaker 8: People feel a very strong affinity to YC.
They work 286 00:16:26,600 --> 00:16:30,040 Speaker 8: with other YC companies, help them fundraise, hire, buy each 287 00:16:30,040 --> 00:16:32,280 Speaker 8: other's products, whatever, and we really sort of try to 288 00:16:32,320 --> 00:16:35,680 Speaker 8: make that a tight community. 289 00:16:37,920 --> 00:16:40,680 Speaker 1: Y Combinator was at the center of the tech startup 290 00:16:40,720 --> 00:16:43,600 Speaker 1: world and helped launch some of the most successful tech 291 00:16:43,640 --> 00:16:49,640 Speaker 1: brands, including Airbnb, Dropbox, Reddit, Stripe, DoorDash, Instacart, and 292 00:16:49,680 --> 00:16:53,440 Speaker 1: many others. As president of Y Combinator, no one had 293 00:16:53,480 --> 00:16:56,880 Speaker 1: more connections to Silicon Valley investors and tech gurus than 294 00:16:56,960 --> 00:16:59,760 Speaker 1: Sam Altman, and there would be no better endorsement of 295 00:16:59,800 --> 00:17:03,720 Speaker 1: a startup than him joining the project. Elon saw Altman 296 00:17:03,760 --> 00:17:06,320 Speaker 1: as the perfect person to lead the business side of 297 00:17:06,320 --> 00:17:09,760 Speaker 1: an AI firm to take on Google. The two connected 298 00:17:09,760 --> 00:17:12,480 Speaker 1: with one of the most respected engineers in the industry, 299 00:17:12,560 --> 00:17:16,000 Speaker 1: Greg Brockman, who could build the infrastructure of an AI lab. 300 00:17:16,560 --> 00:17:20,240 Speaker 1: With seed money from Y Combinator, Brockman helped build Stripe, at 301 00:17:20,240 --> 00:17:24,160 Speaker 1: the time the fastest growing payments technology and financial infrastructure 302 00:17:24,200 --> 00:17:29,359 Speaker 1: company in the world. The three joined forces, but like 303 00:17:29,400 --> 00:17:32,600 Speaker 1: a baseball team needs players and not just front office staff, 304 00:17:32,800 --> 00:17:36,639 Speaker 1: they needed stellar AI researchers.
The problem was that Google 305 00:17:36,680 --> 00:17:39,560 Speaker 1: had all the top prospects, so to enter the game, 306 00:17:39,720 --> 00:17:42,280 Speaker 1: they'd need to pluck someone from Google to stand a 307 00:17:42,400 --> 00:17:46,120 Speaker 1: chance against the search behemoth. That someone was Ilya Sutskever. 308 00:17:46,480 --> 00:17:48,960 Speaker 9: One day, I received an invitation to get dinner with 309 00:17:49,200 --> 00:17:52,359 Speaker 9: Sam Altman and Greg Brockman and Elon Musk. 310 00:17:52,640 --> 00:17:57,920 Speaker 1: That's Ilya. At the time, in twenty fifteen, Ilya was 311 00:17:57,960 --> 00:18:00,879 Speaker 1: one of the most important AI researchers in the world. 312 00:18:01,160 --> 00:18:03,800 Speaker 1: He was an early member of Google's famed AI team, 313 00:18:03,960 --> 00:18:08,000 Speaker 1: Google Brain. Ilya had often daydreamed of someday launching 314 00:18:08,040 --> 00:18:13,000 Speaker 1: his own AI laboratory, but raising the capital seemed impossible. Now, 315 00:18:13,040 --> 00:18:15,399 Speaker 1: as luck would have it, he was being asked to 316 00:18:15,520 --> 00:18:19,040 Speaker 1: join a dinner with Elon, Altman and Brockman, three people 317 00:18:19,119 --> 00:18:21,920 Speaker 1: who could make the dream of an artificial intelligence firm 318 00:18:22,000 --> 00:18:25,760 Speaker 1: come true. It was the beginnings of a tech startup 319 00:18:25,840 --> 00:18:26,760 Speaker 1: dream team. 320 00:18:26,800 --> 00:18:29,600 Speaker 9: So of course I went, and here I was at a 321 00:18:29,680 --> 00:18:33,040 Speaker 9: dinner and they were discussing how could you start a 322 00:18:33,080 --> 00:18:36,160 Speaker 9: new AI lab which would be a competitor to Google 323 00:18:36,200 --> 00:18:39,639 Speaker 9: and to DeepMind, which back then had absolute dominance.
324 00:18:39,840 --> 00:18:43,199 Speaker 1: Elon committed to providing the seed capital, tens of millions 325 00:18:43,240 --> 00:18:46,359 Speaker 1: of dollars, because it was his overall vision. He wanted 326 00:18:46,400 --> 00:18:49,040 Speaker 1: to call it OpenAI because it would be a nonprofit, 327 00:18:49,200 --> 00:18:52,520 Speaker 1: open source project, meaning the computer code for their artificial 328 00:18:52,520 --> 00:18:56,880 Speaker 1: intelligence would be publicly available for anyone to view, use, modify, 329 00:18:57,160 --> 00:19:00,240 Speaker 1: and distribute. The idea was that there shouldn't be just 330 00:19:00,359 --> 00:19:03,760 Speaker 1: one company in control of a superintelligent machine. It 331 00:19:03,800 --> 00:19:07,320 Speaker 1: should be available to everyone. Greg Brockman would be the 332 00:19:07,359 --> 00:19:12,359 Speaker 1: engineering execution behind the company, translating research into working systems, 333 00:19:12,359 --> 00:19:16,359 Speaker 1: something he'd mastered at Stripe. Sam Altman would help with fundraising, 334 00:19:16,440 --> 00:19:20,600 Speaker 1: organizational structure, and public messaging, everything he'd been doing for 335 00:19:20,640 --> 00:19:24,080 Speaker 1: startups at Y Combinator. The three wanted Ilya to be 336 00:19:24,119 --> 00:19:27,720 Speaker 1: the director of research, the technical mind behind the operation, 337 00:19:30,920 --> 00:19:33,560 Speaker 1: but Ilya was still comfortable working at Google Brain. 338 00:19:33,960 --> 00:19:36,720 Speaker 9: For me, of course, to leave Google, 339 00:19:36,760 --> 00:19:40,560 Speaker 9: it was quite a difficult decision because Google was very 340 00:19:40,400 --> 00:19:42,920 Speaker 1: good to me. Elon remembered the negotiations. 341 00:19:43,200 --> 00:19:46,639 Speaker 2: Ilya went back and forth. He was gonna stay at Google, 342 00:19:47,320 --> 00:19:48,320 Speaker 2: then he was gonna leave, and it's...
343 00:19:48,400 --> 00:19:53,080 Speaker 1: safe to say, Ilya would eventually exit Google to join OpenAI. Poaching 344 00:19:53,119 --> 00:19:57,320 Speaker 1: the Google employee eventually broke Elon's friendship with Larry Page. 345 00:19:57,400 --> 00:20:01,320 Speaker 1: On December eleventh, twenty fifteen, the team introduced OpenAI 346 00:20:01,480 --> 00:20:07,120 Speaker 1: to the world. Their goal was lofty: to advance digital 347 00:20:07,160 --> 00:20:10,360 Speaker 1: intelligence in the way that is most likely to benefit 348 00:20:10,440 --> 00:20:13,879 Speaker 1: humanity as a whole, unconstrained by a need to generate 349 00:20:13,920 --> 00:20:18,399 Speaker 1: financial return. Shortly after launch, Elon underscored the mission of 350 00:20:18,440 --> 00:20:19,120 Speaker 1: OpenAI. 351 00:20:19,240 --> 00:20:23,560 Speaker 2: It's about minimizing the risk of existential harm in the 352 00:20:23,560 --> 00:20:24,520 Speaker 2: future. Now, 353 00:20:24,560 --> 00:20:27,639 Speaker 1: at the time of OpenAI's launch, the organization was 354 00:20:27,680 --> 00:20:30,879 Speaker 1: not settled on an architecture for their AI system, and 355 00:20:30,920 --> 00:20:34,399 Speaker 1: there were many possibilities, so they began experimenting with the 356 00:20:34,480 --> 00:20:37,840 Speaker 1: known models in parallel. But the area they spent most 357 00:20:37,880 --> 00:20:41,240 Speaker 1: of their time and resources exploring was an AI architecture 358 00:20:41,320 --> 00:20:45,240 Speaker 1: known as reinforcement learning, the same technology DeepMind designed 359 00:20:45,280 --> 00:20:49,320 Speaker 1: to master video games.
At the time, the industry consensus 360 00:20:49,440 --> 00:20:53,159 Speaker 1: was that AGI would likely emerge from this type of system. 361 00:20:53,440 --> 00:20:57,240 Speaker 1: But then, in June twenty seventeen, Google published their famous 362 00:20:57,280 --> 00:21:00,880 Speaker 1: research paper, Attention Is All You Need, introducing their language 363 00:21:00,920 --> 00:21:05,480 Speaker 1: translation architecture called a transformer. As mentioned earlier, after 364 00:21:05,560 --> 00:21:08,960 Speaker 1: further experiments, they learned that if trained right, their new 365 00:21:09,000 --> 00:21:12,800 Speaker 1: transformer could generate text for any prompt. Sure, the answers 366 00:21:12,800 --> 00:21:15,520 Speaker 1: were fiction, but the responses were believable. 367 00:21:15,680 --> 00:21:17,200 Speaker 4: I want to give you the feeling that this is 368 00:21:17,240 --> 00:21:20,760 Speaker 4: something exciting. This is not something computers were able to 369 00:21:20,840 --> 00:21:22,720 Speaker 4: do like a few years ago. 370 00:21:22,960 --> 00:21:25,880 Speaker 1: The Google AI team quickly realized that it was more 371 00:21:25,920 --> 00:21:29,120 Speaker 1: than just a better translation engine. It was a fundamentally 372 00:21:29,160 --> 00:21:32,679 Speaker 1: new kind of language machine, one capable of generating text 373 00:21:32,920 --> 00:21:36,080 Speaker 1: on virtually any topic. But when they presented their findings 374 00:21:36,080 --> 00:21:40,720 Speaker 1: to Google's leadership, the reaction was cautious, even skeptical. Google's 375 00:21:40,760 --> 00:21:44,399 Speaker 1: brand depended on accuracy. Its search engine was trusted because 376 00:21:44,440 --> 00:21:48,199 Speaker 1: it delivered reliable information, and they no doubt remembered the 377 00:21:48,280 --> 00:21:52,199 Speaker 1: ridicule they received from Google Translate's errors. Google Translate.
I 378 00:21:52,240 --> 00:21:55,879 Speaker 1: think it messed up pretty badly sometimes. Google's new transformer 379 00:21:55,960 --> 00:22:00,440 Speaker 1: system was producing answers that sounded authoritative but were completely fictional. 380 00:22:00,960 --> 00:22:04,520 Speaker 1: Years later, Google co-founder Sergey Brin would openly acknowledge 381 00:22:04,520 --> 00:22:05,359 Speaker 1: what happened next. 382 00:22:05,560 --> 00:22:07,879 Speaker 8: We actually didn't take it all that seriously and didn't 383 00:22:07,880 --> 00:22:10,920 Speaker 8: necessarily invest in scaling the compute. 384 00:22:11,000 --> 00:22:12,560 Speaker 5: And also we were too scared to bring 385 00:22:12,440 --> 00:22:14,800 Speaker 4: it to people because chatbots say dumb things. 386 00:22:15,200 --> 00:22:19,439 Speaker 1: The Google AI team had created something new, perhaps even revolutionary, 387 00:22:19,760 --> 00:22:23,200 Speaker 1: but management hesitated to deploy it. The risks to its 388 00:22:23,240 --> 00:22:30,560 Speaker 1: reputation seemed too great. But while Google hesitated, the small 389 00:22:30,600 --> 00:22:34,760 Speaker 1: startup OpenAI immediately recognized the potential power of this 390 00:22:34,960 --> 00:22:40,040 Speaker 1: new transformer system, not just for generating translations, but for everything: 391 00:22:42,480 --> 00:22:47,639 Speaker 1: for answering questions, solving equations, creating stories, writing code, and 392 00:22:47,720 --> 00:22:52,720 Speaker 1: perhaps someday even simulating human conversation. OpenAI was founded 393 00:22:52,760 --> 00:22:55,200 Speaker 1: on a fear of Google, and now the search behemoth 394 00:22:55,400 --> 00:22:58,679 Speaker 1: appeared to have the golden goose but didn't realize it.
395 00:22:59,000 --> 00:23:01,439 Speaker 1: OpenAI believed that they saw the foundation of a 396 00:23:01,480 --> 00:23:05,000 Speaker 1: new kind of intelligence, and unlike Google, they were willing 397 00:23:05,080 --> 00:23:05,879 Speaker 1: to take the risk. 398 00:23:06,280 --> 00:23:09,160 Speaker 5: So they did what many tech companies have done throughout history: 399 00:23:09,640 --> 00:23:15,520 Speaker 5: OpenAI stole Google's idea. If the search engine behemoth wasn't 400 00:23:15,520 --> 00:23:18,200 Speaker 5: going to test the limits of a transformer by expanding 401 00:23:18,200 --> 00:23:21,399 Speaker 5: on the model, OpenAI would. Because, you see, they had 402 00:23:21,440 --> 00:23:24,720 Speaker 5: a hunch. When Google increased the size of their model 403 00:23:24,880 --> 00:23:28,600 Speaker 5: and fine-tuned it, the answers improved. The AI startup 404 00:23:28,640 --> 00:23:31,320 Speaker 5: thought that if they also increased the scale of this 405 00:23:31,440 --> 00:23:34,719 Speaker 5: language model, and made it even bigger than Google's experiment, 406 00:23:34,960 --> 00:23:38,040 Speaker 5: it could not only improve on predicting responses, but other 407 00:23:38,080 --> 00:23:45,320 Speaker 5: capabilities could possibly emerge. Led by Ilya Sutskever, OpenAI designed 408 00:23:45,320 --> 00:23:49,879 Speaker 5: a language model based on Google's transformer architecture. Just like 409 00:23:49,920 --> 00:23:54,439 Speaker 5: Google's research team, they removed the language translation goal. Instead 410 00:23:54,440 --> 00:23:58,119 Speaker 5: of training it on Wikipedia articles, they fed it seven 411 00:23:58,200 --> 00:24:02,320 Speaker 5: thousand unpublished novels, adding up to eight hundred million words. 412 00:24:03,400 --> 00:24:06,040 Speaker 5: The novels were mostly romance, fantasy, sci-fi, 413 00:24:06,080 --> 00:24:08,840 Speaker 5: and the like.
They wanted their system to be trained 414 00:24:08,920 --> 00:24:12,719 Speaker 5: on long, coherent narrative text. Novels were great for this 415 00:24:12,760 --> 00:24:16,280 Speaker 5: because they maintained context over many pages. In other words, 416 00:24:16,400 --> 00:24:19,840 Speaker 5: they were teaching it about context and structure. It was 417 00:24:19,880 --> 00:24:22,359 Speaker 5: a proof of concept phase where they were trying to 418 00:24:22,400 --> 00:24:25,680 Speaker 5: answer the question: could pre-training a transformer on large 419 00:24:25,680 --> 00:24:28,640 Speaker 5: amounts of text dramatically improve responses? 420 00:24:29,160 --> 00:24:31,639 Speaker 1: After feeding it the novels, they gave it topics to 421 00:24:31,680 --> 00:24:34,879 Speaker 1: address and reviewed the responses the model generated. If the 422 00:24:34,960 --> 00:24:37,760 Speaker 1: responses were off, they'd make tweaks to the model, then 423 00:24:37,840 --> 00:24:41,040 Speaker 1: run it again, fine-tuning the machine until 424 00:24:41,040 --> 00:24:45,280 Speaker 1: the answers were coherent. Their transformer had adjustable parameters, like 425 00:24:45,320 --> 00:24:48,560 Speaker 1: the sliders and knobs on a gigantic sound mixing board. 426 00:24:52,400 --> 00:24:54,760 Speaker 1: In the case of a sound mixing board, each slider 427 00:24:54,840 --> 00:24:57,719 Speaker 1: and knob turns certain signals up and down and changes 428 00:24:57,800 --> 00:25:01,359 Speaker 1: how sound flows through the system. The OpenAI team built 429 00:25:01,400 --> 00:25:04,359 Speaker 1: digital sliders and knobs into their transformer system to 430 00:25:04,400 --> 00:25:07,199 Speaker 1: fine-tune the model. In fact, roughly one hundred and 431 00:25:07,200 --> 00:25:10,720 Speaker 1: seventeen million of them.
A sound engineer would tweak each 432 00:25:10,760 --> 00:25:13,399 Speaker 1: slider and knob on a mixing board until the sound 433 00:25:13,440 --> 00:25:17,440 Speaker 1: is just right. That's how OpenAI trained their transformer. Their 434 00:25:17,480 --> 00:25:19,880 Speaker 1: model was given a topic to respond to, it would 435 00:25:19,880 --> 00:25:22,560 Speaker 1: spit out a response, then the team would fine-tune 436 00:25:22,560 --> 00:25:25,280 Speaker 1: the model by adjusting those one hundred and seventeen million 437 00:25:25,320 --> 00:25:28,960 Speaker 1: sliders and knobs until its response was just right. In 438 00:25:29,080 --> 00:25:32,760 Speaker 1: June twenty eighteen, OpenAI announced the results of their 439 00:25:32,800 --> 00:25:36,680 Speaker 1: first model. They called it GPT one, or Generative Pre- 440 00:25:36,760 --> 00:25:40,760 Speaker 1: trained Transformer one, and the results were promising. Training it 441 00:25:40,800 --> 00:25:43,240 Speaker 1: with thousands of books, then fine-tuning it to predict 442 00:25:43,280 --> 00:25:45,880 Speaker 1: the next word, turned out to be enough to help 443 00:25:45,920 --> 00:25:49,720 Speaker 1: it get good at things like answering questions and classifying text. 444 00:25:50,359 --> 00:25:53,159 Speaker 1: The results were promising enough to expand on the model, 445 00:25:53,440 --> 00:25:57,760 Speaker 1: so OpenAI built another transformer. This time, instead of giving 446 00:25:57,800 --> 00:26:01,000 Speaker 1: it one hundred and seventeen million parameters, those adjustable knobs 447 00:26:01,000 --> 00:26:04,600 Speaker 1: and sliders, they increased the scale dramatically, by over 448 00:26:04,640 --> 00:26:08,119 Speaker 1: a factor of ten. This time they had one point 449 00:26:08,200 --> 00:26:13,480 Speaker 1: five billion adjustable parameters. They also increased the amount of 450 00:26:13,560 --> 00:26:16,879 Speaker 1: data their model was trained on.
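[Transcript note: the "predict the next word" objective described above can be illustrated with a toy sketch. This is not OpenAI's method; GPT models adjust their millions of parameters by gradient descent, while this stand-in merely counts word pairs, and every function name here is invented for the illustration.]

```python
from collections import Counter, defaultdict

def train(text):
    # Toy "pre-training": record how often each word follows each other word.
    words = text.split()
    model = defaultdict(Counter)
    for cur, nxt in zip(words, words[1:]):
        model[cur][nxt] += 1
    return model

def predict_next(model, word):
    # Predict the most frequently observed next word, or None if unseen.
    followers = model.get(word)
    return followers.most_common(1)[0][0] if followers else None

corpus = "the model reads text and the model predicts the next word"
model = train(corpus)
print(predict_next(model, "the"))  # "model" follows "the" most often here
```

[A real transformer replaces the lookup table with learned parameters, so it can generalize to word sequences it has never seen, but the prediction target, the next word, is the same.]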
Instead of using just 451 00:26:16,960 --> 00:26:22,119 Speaker 1: fiction books, they fed it news articles, blog posts, educational websites, 452 00:26:22,240 --> 00:26:26,920 Speaker 1: technical documentation, long form essays, opinion pieces, as well as 453 00:26:26,960 --> 00:26:30,879 Speaker 1: fictional material. Where GPT one was trained on only seven 454 00:26:30,920 --> 00:26:33,960 Speaker 1: thousand books, GPT two, as they called it, was trained 455 00:26:33,960 --> 00:26:37,480 Speaker 1: on roughly eight million documents. The size of their model 456 00:26:37,520 --> 00:26:40,119 Speaker 1: warranted a new name for the system. They called it 457 00:26:40,160 --> 00:26:45,040 Speaker 1: a large language model, or LLM. OpenAI completed their study, 458 00:26:45,200 --> 00:26:49,119 Speaker 1: and the results were astounding. The large language model wrote convincing, 459 00:26:49,359 --> 00:26:53,119 Speaker 1: long, structured essays. When fed a newspaper style article, it 460 00:26:53,200 --> 00:26:57,200 Speaker 1: expanded on the contents. GPT two would even mimic different 461 00:26:57,240 --> 00:27:01,560 Speaker 1: writing styles. It started to feel qualitatively different than GPT one, 462 00:27:02,000 --> 00:27:06,359 Speaker 1: like the model was coming alive. Performance wasn't plateauing. It 463 00:27:06,480 --> 00:27:10,000 Speaker 1: kept improving smoothly as the scale of the model increased. 464 00:27:11,080 --> 00:27:14,320 Speaker 1: OpenAI began to suspect that increasing the model size 465 00:27:14,359 --> 00:27:19,119 Speaker 1: itself was driving intelligence. This assessment rocked the AI community. 466 00:27:19,480 --> 00:27:22,760 Speaker 1: By simply increasing the size of the model, they believed, 467 00:27:22,800 --> 00:27:31,720 Speaker 1: intelligence began to emerge. Life can be pretty stressful these days. 468 00:27:31,920 --> 00:27:34,879 Speaker 1: You want to know what makes me feel better?
Licorice 469 00:27:34,880 --> 00:27:39,000 Speaker 1: from the Licorice Guy. Call me crazy, but it's true, 470 00:27:39,040 --> 00:27:42,639 Speaker 1: because I love licorice. Longtime listeners of Red Pilled America 471 00:27:42,720 --> 00:27:45,960 Speaker 1: know that licorice is my absolute favorite candy, and the 472 00:27:46,119 --> 00:27:49,160 Speaker 1: very best licorice, hands down, comes from the Licorice Guy. 473 00:27:49,400 --> 00:27:52,080 Speaker 1: I know licorice, and it doesn't get any better than this. 474 00:27:52,480 --> 00:27:55,399 Speaker 1: What truly sets it apart is its flavor and its freshness. 475 00:27:55,520 --> 00:27:58,640 Speaker 1: The softness of this licorice will blow your mind. I've 476 00:27:58,680 --> 00:28:01,560 Speaker 1: never had anything like it. The Licorice Guy offers jumbo 477 00:28:01,600 --> 00:28:07,600 Speaker 1: gourmet licorice in nostalgic seasonal flavors: red, blue raspberry, cinnamon, watermelon, black, 478 00:28:07,680 --> 00:28:10,280 Speaker 1: and apple. Trust me, they are all delicious. What I 479 00:28:10,400 --> 00:28:12,560 Speaker 1: also love about the Licorice Guy is that it's an 480 00:28:12,600 --> 00:28:15,320 Speaker 1: American family owned business, and you all know that I'm 481 00:28:15,359 --> 00:28:18,720 Speaker 1: a big proponent of supporting American companies. Right now, Red 482 00:28:18,720 --> 00:28:21,640 Speaker 1: Pilled America listeners get fifteen percent off when you enter 483 00:28:21,880 --> 00:28:26,280 Speaker 1: RPA fifteen at checkout. Visit licoriceguy dot com and enter 484 00:28:26,520 --> 00:28:30,960 Speaker 1: RPA fifteen at checkout. That's licoriceguy dot com. They ship daily. 485 00:28:31,200 --> 00:28:33,720 Speaker 1: Treat yourself and those you love, and taste the difference. 486 00:28:36,880 --> 00:28:43,320 Speaker 1: Welcome back to Red Pilled America.
So after increasing the 487 00:28:43,400 --> 00:28:47,360 Speaker 1: size of their large language model, OpenAI saw big improvements. 488 00:28:47,600 --> 00:28:52,760 Speaker 1: It wrote convincing, long, structured essays. Performance wasn't plateauing. It 489 00:28:52,880 --> 00:28:56,400 Speaker 1: kept improving smoothly as the scale of their model increased. 490 00:28:56,880 --> 00:29:00,720 Speaker 1: OpenAI began to suspect that increasing the model's size itself 491 00:29:01,040 --> 00:29:05,040 Speaker 1: was driving intelligence. This assessment rocked the AI community. 492 00:29:06,160 --> 00:29:08,880 Speaker 1: By simply increasing the size of the model, they believed, 493 00:29:08,920 --> 00:29:14,320 Speaker 1: intelligence began to emerge. In February twenty nineteen, OpenAI released 494 00:29:14,320 --> 00:29:17,200 Speaker 1: the results of their GPT two large language model to 495 00:29:17,240 --> 00:29:23,280 Speaker 1: the public. But the AI community was a bit surprised 496 00:29:23,320 --> 00:29:27,560 Speaker 1: by their disclosure, or more precisely, the lack thereof. When 497 00:29:27,560 --> 00:29:31,240 Speaker 1: the Google AI team originally introduced the transformer just a 498 00:29:31,240 --> 00:29:34,479 Speaker 1: few years earlier, they provided great detail about their model. 499 00:29:34,800 --> 00:29:38,880 Speaker 1: OpenAI, by contrast, initially withheld the details of their system. 500 00:29:39,400 --> 00:29:42,640 Speaker 1: This went against the norm of the AI research community because, 501 00:29:42,680 --> 00:29:45,680 Speaker 1: according to OpenAI, they believed their innovation was so 502 00:29:45,840 --> 00:29:49,520 Speaker 1: dramatic that they feared people would misuse their model for evil. 503 00:29:49,920 --> 00:29:53,480 Speaker 1: At the time, Sam Altman addressed their decision to withhold information. 504 00:29:53,760 --> 00:29:56,000 Speaker 8: We've got this thing.
We think it is so good 505 00:29:56,000 --> 00:29:58,040 Speaker 8: that it can be used for harm, so powerful that 506 00:29:58,080 --> 00:30:00,960 Speaker 8: it can be used for harm, and we are going to 507 00:30:01,160 --> 00:30:03,600 Speaker 8: tell the world about it first, because we have not 508 00:30:03,720 --> 00:30:07,400 Speaker 8: yet built up sort of like societal antibodies for fake 509 00:30:07,480 --> 00:30:09,880 Speaker 8: text in the way we have for photoshopped images. And 510 00:30:10,480 --> 00:30:13,720 Speaker 8: the academic community did not like that, and it is 511 00:30:13,760 --> 00:30:15,680 Speaker 8: definitely against the norm of the field. 512 00:30:16,960 --> 00:30:19,600 Speaker 1: There were signs that the company appeared to be shifting 513 00:30:19,600 --> 00:30:23,040 Speaker 1: from its founding principles. Elon Musk left the board of 514 00:30:23,040 --> 00:30:30,760 Speaker 1: OpenAI. The official reason given was that Elon wanted 515 00:30:30,800 --> 00:30:34,520 Speaker 1: to avoid a conflict of interest. His automotive company, Tesla, 516 00:30:34,840 --> 00:30:38,360 Speaker 1: was also working on AI systems to control its self-driving cars, 517 00:30:38,720 --> 00:30:41,200 Speaker 1: but it would later be revealed that Elon tried and 518 00:30:41,320 --> 00:30:45,040 Speaker 1: failed to take control of the company. Sam Altman won the 519 00:30:45,080 --> 00:30:48,880 Speaker 1: power struggle and quickly restructured OpenAI from a nonprofit 520 00:30:48,920 --> 00:30:53,320 Speaker 1: to a for profit organization. By early twenty nineteen, Altman 521 00:30:53,440 --> 00:30:56,720 Speaker 1: left his position as president of Y Combinator to become 522 00:30:56,880 --> 00:31:00,400 Speaker 1: CEO of OpenAI, and with the company now withholding 523 00:31:00,400 --> 00:31:04,280 Speaker 1: its GPT two results, the open in OpenAI seemed 524 00:31:04,280 --> 00:31:07,440 Speaker 1: to be a contradiction.
The company appeared to be straying 525 00:31:07,480 --> 00:31:10,400 Speaker 1: from its original mission, but the buzz within the tech 526 00:31:10,440 --> 00:31:15,160 Speaker 1: industry was that OpenAI was onto something, something big. 527 00:31:15,720 --> 00:31:19,080 Speaker 1: And by mid twenty nineteen, the AI firm locked in 528 00:31:19,120 --> 00:31:21,520 Speaker 1: its first major corporate investor. 529 00:31:21,360 --> 00:31:25,560 Speaker 6: Microsoft going all in on AI, the world's biggest public company 530 00:31:25,560 --> 00:31:29,120 Speaker 6: announcing an investment of one billion dollars in Elon Musk's 531 00:31:29,120 --> 00:31:31,400 Speaker 6: OpenAI to build artificial intelligence. 532 00:31:31,800 --> 00:31:35,000 Speaker 1: The intrigue in the company was building, but the same 533 00:31:35,080 --> 00:31:38,800 Speaker 1: company that inspired the creation of OpenAI, Google, seemed 534 00:31:38,920 --> 00:31:42,720 Speaker 1: unconcerned with the growing buzz. Co-founders Sergey Brin and 535 00:31:42,800 --> 00:31:45,720 Speaker 1: Larry Page stepped down from their top roles at the 536 00:31:45,760 --> 00:31:49,960 Speaker 1: search engine behemoth, both going into semi retirement, but their 537 00:31:50,000 --> 00:31:53,720 Speaker 1: relaxed worlds were about to be rocked. A year later, 538 00:31:53,760 --> 00:31:57,160 Speaker 1: in June twenty twenty, at the height of COVID nineteen shutdowns, 539 00:31:57,320 --> 00:32:01,280 Speaker 1: OpenAI announced the completion of its third model, GPT three. 540 00:32:01,800 --> 00:32:04,200 Speaker 1: This time the company increased the size of its model 541 00:32:04,240 --> 00:32:07,480 Speaker 1: by a whopping one hundred times, and the results were 542 00:32:07,520 --> 00:32:11,360 Speaker 1: a giant leap in improvement.
GPT three could carry on 543 00:32:11,440 --> 00:32:17,600 Speaker 1: a conversation. It had complex reasoning and became proficient in 544 00:32:17,640 --> 00:32:23,240 Speaker 1: creative writing. And new skills started to emerge from the model. 545 00:32:23,680 --> 00:32:27,360 Speaker 1: It could write computer code and translate across many languages 546 00:32:27,520 --> 00:32:31,120 Speaker 1: without even being trained to do so. And perhaps most shockingly, 547 00:32:31,440 --> 00:32:34,000 Speaker 1: it could learn how to perform a new task after 548 00:32:34,040 --> 00:32:38,120 Speaker 1: seeing just a few examples. No retraining was required. What 549 00:32:38,320 --> 00:32:41,640 Speaker 1: OpenAI was reporting about the GPT three model stunned the 550 00:32:41,680 --> 00:32:45,440 Speaker 1: AI community. It was the first large language model being 551 00:32:45,480 --> 00:32:49,680 Speaker 1: recognized as a general purpose intelligence system. They'd taken the 552 00:32:49,760 --> 00:32:52,760 Speaker 1: engine that Google invented and souped it up. It was 553 00:32:52,800 --> 00:32:56,320 Speaker 1: as if OpenAI took Google's combustion engine and created a 554 00:32:56,360 --> 00:32:59,840 Speaker 1: Formula one racecar from it. But the problem was that 555 00:32:59,880 --> 00:33:03,000 Speaker 1: the only people that interfaced with their model were highly 556 00:33:03,160 --> 00:33:06,640 Speaker 1: skilled engineers. There was no interface where a non technical 557 00:33:06,680 --> 00:33:10,240 Speaker 1: person could interact with their AI system, so the excitement 558 00:33:10,280 --> 00:33:14,200 Speaker 1: remained confined to the tech community. But that would all 559 00:33:14,320 --> 00:33:18,480 Speaker 1: change on November thirtieth, twenty twenty two, when OpenAI released 560 00:33:18,640 --> 00:33:22,640 Speaker 1: ChatGPT to the public.
ChatGPT had a simple Internet 561 00:33:22,760 --> 00:33:26,400 Speaker 1: user interface where anyone could just converse with their GPT 562 00:33:26,480 --> 00:33:35,240 Speaker 1: three model through their web browser. Overnight, ChatGPT exploded. 563 00:33:34,760 --> 00:33:37,480 Speaker 2: ChatGPT, ChatGPT, ChatGPT. 564 00:33:37,680 --> 00:33:40,080 Speaker 10: Think of it as Google, but with more human like 565 00:33:40,160 --> 00:33:42,000 Speaker 10: features. ChatGPT. 566 00:33:42,440 --> 00:33:46,120 Speaker 3: This promises to be the viral sensation that could completely 567 00:33:46,360 --> 00:33:50,840 Speaker 3: reset how we do things. It is the embryonic version 568 00:33:50,920 --> 00:33:53,080 Speaker 3: of online artificial intelligence. 569 00:33:53,160 --> 00:33:55,800 Speaker 10: It took Netflix more than three years to reach one 570 00:33:55,880 --> 00:33:59,840 Speaker 10: million users, but it took ChatGPT just five days. 571 00:33:59,960 --> 00:34:02,120 Speaker 3: The big step forward they made was turning it into 572 00:34:02,160 --> 00:34:04,760 Speaker 3: a chat interface. You do not need to be 573 00:34:04,800 --> 00:34:08,080 Speaker 3: a techie to use this. It is user-friendly. 574 00:34:08,200 --> 00:34:10,520 Speaker 3: It puts AI in the hands of the masses. 575 00:34:10,600 --> 00:34:13,560 Speaker 4: This project from the OpenAI research lab can write 576 00:34:13,640 --> 00:34:16,200 Speaker 4: essays and carry on convincing written conversation. 577 00:34:16,440 --> 00:34:19,360 Speaker 3: The big change is the existence of a paragraph is 578 00:34:19,400 --> 00:34:20,919 Speaker 3: no longer evidence of human thought. 579 00:34:21,040 --> 00:34:24,400 Speaker 10: It can write legal documents, software, even school essays.
580 00:34:24,480 --> 00:34:29,520 Speaker 4: The viral chatbot has now passed a Wharton MBA final exam, 581 00:34:29,600 --> 00:34:33,520 Speaker 4: the United States Medical Licensing Exam, and components of the 582 00:34:33,600 --> 00:34:34,280 Speaker 4: Bar Exam. 583 00:34:34,360 --> 00:34:37,240 Speaker 3: I think the hazard of what we're facing is almost 584 00:34:37,280 --> 00:34:40,920 Speaker 3: beyond what we can imagine, and I don't think we 585 00:34:40,960 --> 00:34:44,600 Speaker 3: are remotely ready for what's coming. Is this a watershed moment? 586 00:34:44,680 --> 00:34:46,320 Speaker 4: I think it's definitely a watershed moment. 587 00:34:46,640 --> 00:34:50,080 Speaker 1: Just two months after launch, ChatGPT garnered one hundred 588 00:34:50,160 --> 00:34:57,640 Speaker 1: million users. As more and more people were using it, 589 00:34:57,719 --> 00:34:59,720 Speaker 1: a question quickly began to arise. 590 00:34:59,920 --> 00:35:02,520 Speaker 5: People are predicting it will wipe out whole industries. 591 00:35:03,400 --> 00:35:05,080 Speaker 2: Realtors, are we going to be out of a job? 592 00:35:12,480 --> 00:35:18,880 Speaker 1: And it wasn't just workers that were worried. The company 593 00:35:18,920 --> 00:35:22,960 Speaker 1: that gave birth to the transformer became ChatGPT's first target. 594 00:35:23,080 --> 00:35:25,279 Speaker 10: A lot of the headlines are that this might make 595 00:35:25,400 --> 00:35:29,040 Speaker 10: Google obsolete. Do you see that possibly becoming reality? 596 00:35:29,120 --> 00:35:30,520 Speaker 4: If I were Google, I would be a little bit 597 00:35:30,560 --> 00:35:32,879 Speaker 4: nervous, and I see this. This thing is so far 598 00:35:32,960 --> 00:35:34,360 Speaker 4: beyond what a search engine does. 599 00:35:34,520 --> 00:35:36,879 Speaker 3: Google needs to get on top of this and add 600 00:35:36,880 --> 00:35:40,320 Speaker 3: this capability now.
Because ChatGPT, if it simply adds search, 601 00:35:40,480 --> 00:35:41,520 Speaker 3: is going to be ahead of the game. 602 00:35:41,760 --> 00:35:45,480 Speaker 1: ChatGPT was doing the unthinkable. It was threatening to 603 00:35:45,560 --> 00:35:48,120 Speaker 1: destroy Google's dominance in Internet search. 604 00:35:48,360 --> 00:35:50,360 Speaker 5: OpenAI launched ChatGPT. 605 00:35:50,640 --> 00:35:52,640 Speaker 3: How did that reverberate around Google? 606 00:35:52,840 --> 00:35:55,080 Speaker 5: They reacted with a lot of panic. 607 00:35:56,080 --> 00:35:58,719 Speaker 6: They declared a code red, which 608 00:35:58,520 --> 00:35:59,919 Speaker 2: means all hands on deck. 609 00:36:00,200 --> 00:36:03,920 Speaker 1: The company reportedly pulled one thousand engineers from all over 610 00:36:03,960 --> 00:36:07,319 Speaker 1: the organization to focus on one task: to come up 611 00:36:07,320 --> 00:36:11,040 Speaker 1: with an answer to ChatGPT. The situation was so dire 612 00:36:11,160 --> 00:36:15,120 Speaker 1: that the company's CEO confirmed that Google's co-founders came 613 00:36:15,200 --> 00:36:16,080 Speaker 1: out of retirement. 614 00:36:16,239 --> 00:36:18,560 Speaker 9: I heard Sergey is back and working a bit on AI. 615 00:36:18,800 --> 00:36:21,560 Speaker 8: What is the involvement of Larry and Sergey these days? 616 00:36:21,640 --> 00:36:24,320 Speaker 4: Sergey is actually spending more time in the office. 617 00:36:24,520 --> 00:36:25,879 Speaker 2: He's literally coding. 618 00:36:25,840 --> 00:36:29,440 Speaker 1: OpenAI didn't just launch a product. It triggered an AI 619 00:36:29,680 --> 00:36:34,480 Speaker 1: arms race.
ChatGPT overturned the tech order, threatening giants that 620 00:36:34,600 --> 00:36:38,319 Speaker 1: once seemed untouchable, and in only a few years, the 621 00:36:38,400 --> 00:36:41,520 Speaker 1: industry found itself on the brink of something far bigger: 622 00:36:41,760 --> 00:36:45,560 Speaker 1: an innovation poised to transform the work of anyone who 623 00:36:45,600 --> 00:36:46,680 Speaker 1: sits at a keyboard. 624 00:36:47,280 --> 00:36:49,240 Speaker 5: Coming up on Red Pilled America. 625 00:36:48,880 --> 00:36:51,160 Speaker 7: White collar work, where you're sitting down at a computer, 626 00:36:51,600 --> 00:36:53,880 Speaker 7: a lawyer, or an accountant, or project manager, or a 627 00:36:53,920 --> 00:36:58,560 Speaker 7: marketing person. Most of those tasks will be fully automated 628 00:36:58,680 --> 00:37:01,759 Speaker 7: by an AI within the next twelve to eighteen months. 629 00:37:02,320 --> 00:37:05,600 Speaker 5: Red Pilled America is an iHeartRadio original podcast. It's owned 630 00:37:05,640 --> 00:37:08,840 Speaker 5: and produced by Patrick Courrielche and me, Adryana Cortez, for 631 00:37:08,920 --> 00:37:11,840 Speaker 5: Informed Ventures. Now you can get ad free access to 632 00:37:11,920 --> 00:37:15,480 Speaker 5: our entire catalog of episodes by becoming a backstage subscriber. 633 00:37:15,640 --> 00:37:19,080 Speaker 5: To subscribe, just visit Redpilled America dot com and click 634 00:37:19,160 --> 00:37:21,480 Speaker 5: Join in the top menu. Thanks for listening.