Speaker 1: Will AI make humans better? And what does this have to do with linguistics, or the movie Arrival, or self-driving cars, or debate and video games and elections and chess and the ancient game of Go? Welcome to Inner Cosmos with me, David Eagleman. I'm a neuroscientist and an author at Stanford, and in these episodes we seek to understand why and how our lives look the way they do. Today's episode is about whether AI will make us better humans.

I often find myself totally flabbergasted by the change that I've seen just in my lifetime. When I was a really little kid, personal computers didn't exist, and then we passed through a door and suddenly they did. And I saved up my money and I got a Commodore VIC-20 computer, which was something my parents had never seen the likes of. And I felt like I was living through the largest change in human history, because for the first time, everybody could have a machine that would do all kinds of things.
But that turned out not to even be the biggest change, because the next stop was even bigger, and that was the idea that some people had to build a system where we could keep information on computers and make computers talk to each other, such that if the Soviets bombed America, the important information wouldn't just be stored on one computer, and messages could follow different routes on the network, and that way you had a very robust system for keeping information. That was of course ARPANET, which became the Internet. And not long after, the idea was introduced of a way that everyone could use this giant network, not just with text, but with graphics.

Speaker 2: That was the birth of the World Wide Web.

Speaker 1: And as soon as that technology existed, then people started figuring out what to do with it, and one young man started to sell books over the Internet.

Speaker 2: And that became Amazon.
Speaker 1: And two graduate students at Stanford asked the question of how the heck we were going to be able to find information on this sprawling network, and they created a way of measuring all the connections between web pages, and that gave the ability to search for it.

Speaker 2: That garage project grew into Google.

Speaker 1: And around the same time, a Harvard kid was thinking about a better way to allow his classmates to get to know each other, which was traditionally done at the beginning of the year by printing a booklet with everybody's picture and name. That little booklet was called a Facebook, and he thought of digitizing that. And on and on, people increasingly figured out how to take this new technology and make things that would live on top of it. And I felt very lucky to have experienced two truly world-changing inventions in my lifetime, and I knew that had launched us into a world that was so different from what my great-grandparents could have possibly imagined. But now here we are in the middle of a third revolution.
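[Editor's aside: the episode only gestures at the link-measuring idea that became Google's PageRank. As a minimal sketch, assuming a toy three-page link graph and the standard power-iteration form of the algorithm (the function name, damping factor, and page names are illustrative, not from the episode or from Google's actual system):]

```python
# Toy PageRank sketch: a page's score is the chance a random surfer lands on it,
# built from the scores of the pages that link to it.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with uniform scores
    for _ in range(iterations):
        # every page gets a small baseline, plus shares from its in-links
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:
                # dangling page with no out-links: spread its rank evenly
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

# hypothetical mini-web: "c" is linked to by both "a" and "b"
web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
scores = pagerank(web)
```

Because "c" collects links from two pages, it ends up with the highest score, which is the whole point: counting connections gives a ranking you can sort search results by.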
It's related to the first two, computers and the Internet, but it makes them look like warm-up acts. And that's the fact that we have created a new intelligent species that we're going to be sharing the planet with from now on. This is not to say that AI has exactly the same type of intelligence that human brains have, but obviously it has absorbed the higher knowledge sphere of humankind, and it can spit that back to us with all sorts of remixes. Now, the question is, are we in trouble because of this new invention? Have we taken things a step too far? Well, let me give you an example that's on people's minds. A few months ago, a Swiss research team carried out a secret experiment in online persuasion. They went to a forum on the website Reddit, and in this forum, users post opinions and invite other people to challenge them. And so these researchers quietly unleashed a set of AI accounts to pretend that they were people and to try to change other people's minds. So their large language models, LLMs, participated in the debating just like any other human.
These bots wrote arguments, they engaged with users, they debated with the hope of changing minds. Now, the researchers measured success by tallying whenever an original poster publicly admitted that their mind had been changed. So how did the bots do? Well, they achieved up to an eighteen percent success rate in changing people's minds. Now, the critical piece you need to know is that the average human success rate is about three percent. So the bots had absolutely crushed their human competition in an environment teeming with intellectuals. The bots not only survived, but they thrived. In fact, one of the AI accounts climbed into the ninety-ninth percentile of all users on this subreddit. It racked up ten thousand karma points along the way. Now, one of the most surprising aspects of the experiment was how effectively a small team of researchers, operating with modest academic resources, was able to outperform essentially every human debater on the platform.

Speaker 2: So let that sink in for a second.
Speaker 1: You have a handful of graduate students armed with an LLM and a sneaky deployment strategy, and they quietly demonstrated what it might look like if influence operations were scaled by AI. Now, when this data was released, one of the scariest parts to people was that nobody noticed that these debaters were actually AI and not real humans. This subreddit prides itself on being one of the most critically minded communities on the platform. If any place could sniff out an imposter, it should have been here. But for the entire four months of the experiment, the bots played along undetected by the humans. So that's worrisome. And there's another issue too, which is the role of personal data. When the bots were given just a basic profile of the user, their age, their location, and their political leaning, the success rate bumped up by one percentage point. Now, you might say, wait a minute, who cares? One percentage point is very small. But in political terms, one percent could be enough to swing a lot of national elections.
The difference between nudging public sentiment one way or another can come down to

Speaker 2: very tiny tweaks.

Speaker 1: So the lesson that surfaced here is that even minimal personalization can significantly sharpen the edge of an AI's persuasive power. So when this story surfaced recently about these debaters not being real human beings, the reactions were very grim, because everyone realized that this small academic experiment illustrated what stealth influence operations could look like in the near future. If a handful of researchers could do this undetected, what happens when you have state actors or corporations or political campaigns with real resources doing the same thing at a scale thousands of times larger? Also, what's worse is that the failure of everyone to detect that these were bots raises serious questions about how we can protect public discourse as we enter this new future. If smart Redditor debaters couldn't spot the difference between a human and a bot, what hope is there for broader audiences?
So even though the major AI companies have all pledged to avoid building models with dangerous capabilities, like manipulating public opinion en masse, this Reddit experiment suggests the thresholds may already be easier to cross than any of us had anticipated. What this all means is that with major elections in future years, we're genuinely gonna have to worry about this. The halcyon days are gone when we could assume that the replies to our online messages came from a fellow human. The risks are real and the potential for abuse is obvious. But for today's episode, we're gonna look at all this from a very different angle. I think it might be worth asking a different question, which is: why did the bots succeed? After all, they didn't hack people's brains with neural interfaces. They didn't spread fear or disinformation. They didn't manipulate emotions or overwhelm users with noise. They simply made better arguments. They didn't insult people while arguing with them. They didn't do little jabs and digs. They just made good arguments, empathetically.
The bots presented their points calmly and rationally and persuasively, and when users changed their minds,

Speaker 2: it wasn't because they had been tricked.

Speaker 1: It was because they recognized that another perspective was worth considering. Humans often change their minds when faced with sound reasoning that can be backed up, and mind-changing is a great thing. It means you're willing to reconsider some closely held opinion when the facts or logic warrant it. It's a mark of intellectual strength, not a sign that you've been tricked. In that light, the success of AI debaters isn't necessarily a story about manipulation. It could also be a story about raising the bar of debating. Now, as I followed the outcome of this AI debater study, it struck me that there may be a helpful precedent for thinking about this moment, for thinking about how AI could

Speaker 2: actually improve us.

Speaker 1: So think about the events that unfolded in the hypercompetitive world of chess. If you can remember back to nineteen ninety-seven, IBM had built an AI system called Deep Blue, and it defeated the world champion, Garry Kasparov.
This was seen as a seismic moment. The game of chess had been, quote unquote, solved, and everyone worried that human mastery was obsolete. And then about two decades later, AlphaGo beat the world's top Go player, and the choir of concerned voices grew louder. But then something unexpected happened. In May of twenty seventeen, the world's number one Go player, Ke Jie, faced off against his toughest opponent. Ke Jie was the reigning champion in Go, and you know this is the game where two players use smooth black rocks or white rocks to surround more territory than their opponent. In Ke Jie's case, he was playing against AI. He was playing against AlphaGo, which had been trained on many millions of games, and it had deeply absorbed the statistics of possible plays.

Speaker 2: So Ke Jie lost the first game.

Speaker 1: AlphaGo had pulled moves that none of Ke Jie's human opponents had ever thought of, and then Ke Jie lost the second game. The fact is he didn't stand a chance.
The AI had beaten a human in a game that was way more complex than chess, and subsequent versions of this AI will without doubt continue to win ever more. But that's not the interesting part of the story. The interesting part is what happened next. Ke Jie got over his embarrassment and he became mesmerized by what the heck had just transpired. He studied the games that he lost. Now, before he had played AlphaGo, Ke Jie had won most of the games against his human opponents. But after he played AlphaGo, he was able to beat his human opponents even more easily. In other words, after his species-shaming defeat in twenty seventeen, Ke Jie went on to play twelve straight matches against fellow humans, and he won

Speaker 2: them all in a row. Now, what had happened?

Speaker 1: Ke Jie had been exposed to new kinds of moves and strategies that AlphaGo was pulling off that lay outside the traditional ideas. All these moves were legal and possible, but they were different from what had been played over the previous twenty-five hundred years.
For Go aficionados, this included novelties like playing a stone directly diagonal to your opponent's lone stone, or commonly playing six-space extensions while humans tend to prefer five-space. So Ke Jie reported that playing against the AI was like opening a door to another world. Some people worry that AI might make games of chess and Go irrelevant, but amazingly, that is not what's happened. When AI trumps chess champions and Go champions, it does so with moves that seem inhumanly creative, but all the moves are allowed by the rules. Humans simply never thought to go there before. And the key is that once the moves are seen by humans, then they're easily incorporated into our models. Ke Jie's experience with AlphaGo illuminated new nooks and crannies in his landscape. It exposed pathways that had never been lit up before. So AI immediately became a tool for steep improvement. And nowadays, all chess and Go players above a certain level, they all train with AI.
They study the surprising and sometimes counterintuitive and alien strategies of the artificial mind, and just like Ke Jie, today's grandmasters play a deeper and more creative game than ever before. In other words, many commentators are worried that AI is going to leave humans far behind, and in some respects that's true.

Speaker 2: But as computation improves, so will we.

Speaker 1: AI will illuminate dark parts of our maps, allowing us to see new roads we didn't even suspect. The key point I want to make here is that instead of dampening human excellence, AI sparked a renaissance. In these games, human minds are elevated by learning from our artificial cousins. And if it were just chess and Go, that's one thing. But I think we can detect this pattern happening all over the place. For example, the same thing has happened in poker. Professional poker players master the art of bluffing. They project strength when they're weak, or weakness when they're strong. This has always been seen as a deep test of human psychology.
The players read faces, they watch for microexpressions, they guess intentions. So in this way, poker is a really human game. But then Carnegie Mellon cooked up an AI called Libratus. Obviously, it doesn't have a face, it doesn't sweat, it doesn't blink, so it wasn't really like playing another human. So Libratus started playing poker, and it did things that baffled the human players. The AI began making betting choices that looked bizarre. Sometimes it overbet the pot by huge margins, a move that the human players considered reckless. Other times it made really tiny bets in situations where no human would bother, and the human players dismissed this all as neural network nonsense. But as the games went on, they realized these inhuman strategies were working. The AI was winning, and it was doing so by reinventing the language of poker. And now, just like with chess and Go, professional poker players study these strategies. You have human players adopting patterns of play that just didn't exist before the machines taught them. AI made humans better poker players. And let me give you another example.
There's a strategy video game called StarCraft II, where players build armies and manage resources and try to outwit their opponent. So the company DeepMind set some AI agents on it, and those agents showed humans an entirely different way of going about running the strategy. At first, people thought the AI play seemed unfair and robotic, but then human players began to adapt. They started rethinking their own strategies. They started redesigning their build orders. They discovered things like sometimes sacrificing entire groups of units early in the game could produce long-term advantages. These were not moves that humans had come up with, or possibly ever would have come up with, but once they were revealed, they became part of the human playbook. Or take a different video game called Dota 2. You've got professional players, but then OpenAI built a system that used strategies that looked to human professionals like they were clumsy.
The AI would push aggressively when humans would retreat, or take risks that seemed absurd. But more often than not, the machines won, and just like with chess and Go and StarCraft, the human professionals were forced to ask themselves: had they been playing with blinders on all along? After all, the machines were uncovering landscapes that we just never knew were there. In other words, whenever AI discovers new pathways inside the universe of a game, it illuminates something for us about the fence lines of our own imagination. Humans had decades to explore video games, and centuries to explore poker, and millennia to explore the games of chess and Go. But within weeks, AI uncovered strategies that had never even struck us. And this is the point of today's episode. AI can expand the possibility space for us. What we've seen in the past few years is the limits of our imagination. Even in domains we thought we had mastered, our internal models might be a lot more narrow than we had ever realized. But by playing with machines, we're learning how they think, and more importantly, how we might think differently.
So beyond chess or Go or video games, the bigger game is how we can expand our own internal models. A lot of people cast this as man versus machine, but I think a more productive lens is seeing it as man learning from machine. The prize is a new way of seeing the game, and maybe, by extension, a new way of seeing the world.

Speaker 2: So let's come back to that

Speaker 1: amazing Swiss experiment where they made debate bots on Reddit that performed way better than humans. I suggest it's possible, and maybe even likely, that a similar dynamic is going to unfold in the world of persuasive argumentation to the one that happened in chess and Go and in video games. If AI agents can model the best forms of debate, clear and structured and empathetic and rational, then we humans can learn something from our artificial cousins. We can try out new moves. We can sharpen our skills on digital grinding stones. Imagine a future where students practice crafting arguments by debating highly skilled AI tutors. Imagine online discussions becoming more useful because users have gotten used to high-quality exchanges.
Imagine politicians and journalists and everyday citizens pushed to improve their thinking and better articulate their positions. So, rather than dumbing down conversation, the rise of high-performing debate bots could nudge public discourse toward a new level of reasoned discussion. If that turns out to be the case, then we may come to see AI not as an enemy, but as a sparring partner, just like in chess and Go, and that has very different implications.

Now, some people will take just the opposite position. If AI is so much better at debating than we are, won't that cause us to lose the skill entirely? Because there doesn't seem to be a point in being a good debater if the computer can do it better. But I think this is not an issue, because even when we extrapolate this era of AI, people will still be talking with each other most of the time. You'll still be with your family over dinner, or with your friends over a coffee, or arguing with your neighbor about where the fence goes, or debating with a stranger at a town hall or whatever.
It's not 329 00:23:22,359 --> 00:23:25,520 Speaker 1: like we are plugging into the Matrix through a port 330 00:23:25,520 --> 00:23:27,520 Speaker 1: in the back of our neck, and we're only going 331 00:23:27,560 --> 00:23:33,480 Speaker 1: to be communicating with machines. We are intensely social creatures, 332 00:23:33,960 --> 00:23:37,280 Speaker 1: and the success of our species has been in part 333 00:23:37,320 --> 00:23:41,280 Speaker 1: because of our massive sociability. So I think we will 334 00:23:41,560 --> 00:23:46,359 Speaker 1: debate other humans all the time, and AI is just 335 00:23:46,440 --> 00:23:49,600 Speaker 1: going to train us to be a little bit better 336 00:23:49,680 --> 00:23:52,520 Speaker 1: at it. Now, I don't want to minimize the risks 337 00:23:52,560 --> 00:23:56,719 Speaker 1: we're facing. We need audit tools and authentication systems that 338 00:23:56,800 --> 00:23:59,960 Speaker 1: can verify whether content was written by humans or AI. 339 00:24:00,640 --> 00:24:06,320 Speaker 1: We need technical solutions like watermarking, cryptographic content authentication, 340 00:24:06,760 --> 00:24:11,720 Speaker 1: and provenance tracking. This is all essential, but beyond these 341 00:24:11,760 --> 00:24:17,280 Speaker 1: defensive measures, we should also recognize the opportunity, because, like 342 00:24:17,320 --> 00:24:22,000 Speaker 1: the chess and Go engines that reshaped how champion players think, 343 00:24:22,440 --> 00:24:27,840 Speaker 1: debate bots will reshape how we reason and argue and 344 00:24:28,040 --> 00:24:31,959 Speaker 1: understand one another. If we take this on correctly, AI 345 00:24:32,320 --> 00:24:36,359 Speaker 1: might just up our game. It's still early days, but 346 00:24:36,440 --> 00:24:39,639 Speaker 1: my impression so far is that AI is already playing 347 00:24:39,640 --> 00:24:43,359 Speaker 1: this role in many areas, including, for example, the arts. 
348 00:24:44,119 --> 00:24:48,160 Speaker 1: At least in some ways, it's amplifying human creativity. It's 349 00:24:48,200 --> 00:24:52,000 Speaker 1: like a jazz partner who plays a riff you weren't expecting, 350 00:24:52,480 --> 00:24:56,159 Speaker 1: or a painting mentor who introduces a color that you 351 00:24:56,240 --> 00:24:59,560 Speaker 1: never thought to use. AI forces us out of our 352 00:24:59,680 --> 00:25:03,240 Speaker 1: groove in the arts. It shows us that the boundaries 353 00:25:03,240 --> 00:25:08,000 Speaker 1: of our imagination can be stretched by encountering minds, even 354 00:25:08,040 --> 00:25:13,520 Speaker 1: synthetic minds, that think differently. These AI systems presumably don't 355 00:25:13,560 --> 00:25:18,159 Speaker 1: have aesthetic tastes or emotional longing in the way that 356 00:25:18,200 --> 00:25:22,119 Speaker 1: we do, but they're awesome at doing remixes and trying 357 00:25:22,200 --> 00:25:25,920 Speaker 1: strange new things out, and in this way they teach 358 00:25:26,040 --> 00:25:31,359 Speaker 1: us just how much more flexible and expansive our own 359 00:25:31,400 --> 00:25:35,920 Speaker 1: creativity can be. The takeaway is that AI shakes us 360 00:25:36,040 --> 00:25:40,480 Speaker 1: loose from aesthetic grooves that we might never leave on 361 00:25:40,520 --> 00:25:44,480 Speaker 1: our own, and AI is doing exactly that in science 362 00:25:44,600 --> 00:25:48,399 Speaker 1: as well. Just as one example, in materials science and 363 00:25:48,520 --> 00:25:53,080 Speaker 1: drug design, AI is proposing molecular structures that humans just 364 00:25:53,160 --> 00:25:57,200 Speaker 1: wouldn't think to try. Already, it's nudging us to rethink 365 00:25:57,280 --> 00:26:01,879 Speaker 1: what counts as a reasonable chemical design, and we're seeing 366 00:26:01,880 --> 00:26:06,320 Speaker 1: the same thing in AI-assisted math proofs. 
There are 367 00:26:06,359 --> 00:26:11,680 Speaker 1: systems like Lean and GPT-fueled theorem provers, and these 368 00:26:11,720 --> 00:26:17,400 Speaker 1: systems are suggesting lemmas or strategies that mathematicians 369 00:26:18,080 --> 00:26:20,560 Speaker 1: just hadn't thought of. They could have thought of it, 370 00:26:20,840 --> 00:26:24,720 Speaker 1: they just never did. These AI math proofs sometimes come 371 00:26:24,760 --> 00:26:28,879 Speaker 1: out strange or elegant or messy, but in all cases 372 00:26:29,480 --> 00:26:37,520 Speaker 1: they force mathematicians to think differently about structure and possibility. 373 00:26:37,720 --> 00:26:42,760 Speaker 1: So AI can serve as a creativity engine in arts 374 00:26:42,760 --> 00:26:46,679 Speaker 1: and in science, pushing us outside of our intuition-driven 375 00:26:47,160 --> 00:26:50,000 Speaker 1: blind spots. And there's something else that AI might be 376 00:26:50,000 --> 00:26:52,639 Speaker 1: able to help us with, which is our personal lives 377 00:26:52,920 --> 00:26:56,680 Speaker 1: and how AI can uncover blind spots there as well. 378 00:26:57,080 --> 00:26:59,399 Speaker 1: If you're a regular listener, you know I've talked on 379 00:26:59,480 --> 00:27:03,040 Speaker 1: many previous episodes about the ways in which we fool 380 00:27:03,119 --> 00:27:05,000 Speaker 1: ourselves, because you're 381 00:27:04,800 --> 00:27:06,000 Speaker 2: not one thing. 382 00:27:06,119 --> 00:27:09,560 Speaker 1: Instead, you're built of many different drives, or, as I 383 00:27:09,560 --> 00:27:11,960 Speaker 1: wrote in my book Incognito, you can think about the brain 384 00:27:12,359 --> 00:27:15,880 Speaker 1: as a team of rivals. So think about the way 385 00:27:15,920 --> 00:27:20,000 Speaker 1: that many people approach fitness. 
Someone might sign up for 386 00:27:20,320 --> 00:27:24,199 Speaker 1: an expensive yearlong gym membership because they're determined to 387 00:27:24,200 --> 00:27:26,520 Speaker 1: get in shape, and on paper, that looks like a 388 00:27:26,520 --> 00:27:29,760 Speaker 1: great decision. It's an investment in their health. But at 389 00:27:29,800 --> 00:27:33,280 Speaker 1: the same time, that person keeps telling themselves that they 390 00:27:33,320 --> 00:27:37,120 Speaker 1: just don't have the time to exercise, so months pass, 391 00:27:37,440 --> 00:27:41,560 Speaker 1: the membership goes unused, the rationalization continues. We've all seen 392 00:27:41,600 --> 00:27:46,280 Speaker 1: this sort of thing. An AI personal assistant reviewing their 393 00:27:46,400 --> 00:27:51,520 Speaker 1: spending and scheduling could point out the contradiction. It could say: Hey, 394 00:27:51,520 --> 00:27:54,199 Speaker 1: you know what? You've paid twelve hundred bucks for a 395 00:27:54,359 --> 00:27:58,000 Speaker 1: gym that you rarely visit, but you also spend seven 396 00:27:58,040 --> 00:28:01,840 Speaker 1: hours a week watching streaming shows and an hour every 397 00:28:01,920 --> 00:28:05,920 Speaker 1: day doomscrolling on social media. You say you don't 398 00:28:05,960 --> 00:28:10,920 Speaker 1: have time, but your calendar suggests otherwise. We humans are 399 00:28:10,960 --> 00:28:13,960 Speaker 1: so good at fooling ourselves with our little stories. We 400 00:28:14,040 --> 00:28:18,240 Speaker 1: smooth over inconsistencies without even realizing it. But a good 401 00:28:18,320 --> 00:28:21,960 Speaker 1: AI isn't going to buy the story. It's going to 402 00:28:22,080 --> 00:28:24,960 Speaker 1: see the data and it's going to highlight the bias 403 00:28:25,040 --> 00:28:28,119 Speaker 1: for us. And when it does, it forces us to 404 00:28:28,160 --> 00:28:31,160 Speaker 1: confront something we might have preferred to ignore. 
405 00:28:31,600 --> 00:28:34,439 Speaker 2: But in this way it can make us better. 406 00:28:34,920 --> 00:28:37,240 Speaker 1: Finally, I'll just mention something in my life where I'm 407 00:28:37,280 --> 00:28:41,160 Speaker 1: noticing that AI is improving me. I have been driving 408 00:28:41,200 --> 00:28:44,400 Speaker 1: for decades, and knock on wood, I've never had an accident, 409 00:28:44,680 --> 00:28:48,040 Speaker 1: presumably because I'm a perfectly good driver. But for the 410 00:28:48,120 --> 00:28:51,640 Speaker 1: past half year, I haven't driven myself around too much 411 00:28:51,760 --> 00:28:54,320 Speaker 1: because I have a Tesla with full self-driving mode 412 00:28:54,360 --> 00:28:57,200 Speaker 1: and I let it drive me everywhere. So you just 413 00:28:57,240 --> 00:28:59,160 Speaker 1: tell the Tesla where you want to go, and it 414 00:28:59,200 --> 00:29:00,960 Speaker 1: does every bit of it. 415 00:29:01,320 --> 00:29:02,720 Speaker 2: And here's the important part. 416 00:29:03,360 --> 00:29:06,640 Speaker 1: I've had to admit that it's a better driver than 417 00:29:06,680 --> 00:29:09,520 Speaker 1: I am. There's the obvious stuff, like the fact that 418 00:29:09,560 --> 00:29:13,160 Speaker 1: it never blinks or sneezes or gets distracted by something, 419 00:29:13,200 --> 00:29:16,640 Speaker 1: but instead it has cameras that take thirty-six frames 420 00:29:16,680 --> 00:29:20,000 Speaker 1: per second and never ever rest, even for the length 421 00:29:20,000 --> 00:29:23,560 Speaker 1: of an eyeblink. But more than that, there's a deeper, 422 00:29:23,680 --> 00:29:26,720 Speaker 1: more subtle issue. It's taught me that there are certain 423 00:29:26,760 --> 00:29:31,800 Speaker 1: reactions I have that aren't optimized. 
For example, when some 424 00:29:32,080 --> 00:29:34,800 Speaker 1: car pulls into traffic in front of me, I tend 425 00:29:34,880 --> 00:29:38,640 Speaker 1: to slow down, but in full self-driving mode, the 426 00:29:38,720 --> 00:29:41,720 Speaker 1: Tesla just keeps going at about the same speed that it 427 00:29:41,840 --> 00:29:44,720 Speaker 1: was going. And while I would have thought that that 428 00:29:44,800 --> 00:29:47,960 Speaker 1: seems a little aggressive, it turns out to be just 429 00:29:48,160 --> 00:29:50,640 Speaker 1: fine because the other car gets up to speed, so 430 00:29:50,680 --> 00:29:53,240 Speaker 1: there was no real need to slow down. It turns 431 00:29:53,280 --> 00:29:57,440 Speaker 1: out it's not aggressive, it's just optimized. And I see 432 00:29:57,480 --> 00:30:00,480 Speaker 1: lots of examples of these subtle differences in the way 433 00:30:00,720 --> 00:30:04,680 Speaker 1: that it drives versus the way I do, and I am learning from it. 434 00:30:05,360 --> 00:30:07,920 Speaker 1: And there's also a relationship point to be made here, 435 00:30:08,240 --> 00:30:12,000 Speaker 1: because my wife generally thinks that I am a backseat driver, 436 00:30:12,120 --> 00:30:15,680 Speaker 1: because I'll often react with my body when she's driving, 437 00:30:16,080 --> 00:30:18,560 Speaker 1: and she's a perfectly great driver, but I can't help 438 00:30:18,600 --> 00:30:21,560 Speaker 1: myself, because I would slow down when someone pulls out 439 00:30:21,560 --> 00:30:24,400 Speaker 1: in front of me and she doesn't. But ever since 440 00:30:24,440 --> 00:30:30,160 Speaker 1: I've seen how AI drives, I'm no longer an insufferable passenger. 441 00:30:30,680 --> 00:30:33,800 Speaker 1: Now I know what optimized driving is, and I admit 442 00:30:33,880 --> 00:30:37,280 Speaker 1: that she was closer to it than I was. 
So 443 00:30:37,400 --> 00:30:39,200 Speaker 1: I don't know if I'm going to argue that AI 444 00:30:39,320 --> 00:30:43,120 Speaker 1: is going to help marriages, but maybe. Okay, so let 445 00:30:43,120 --> 00:30:46,160 Speaker 1: me zoom out to the big picture. We have very 446 00:30:46,200 --> 00:30:49,120 Speaker 1: limited internal models, and the main thing we're going to 447 00:30:49,200 --> 00:30:54,200 Speaker 1: get from AI is an illumination of ideas outside the 448 00:30:54,240 --> 00:30:57,040 Speaker 1: borders of our models. And to this end, I was 449 00:30:57,080 --> 00:31:00,840 Speaker 1: thinking the other day about the twenty sixteen movie Arrival. 450 00:31:01,080 --> 00:31:03,880 Speaker 1: If you haven't seen it, this is a wonderful science 451 00:31:03,880 --> 00:31:07,520 Speaker 1: fiction drama where a linguist is recruited by the US 452 00:31:07,680 --> 00:31:14,160 Speaker 1: military after mysterious alien spacecraft appear around the world. So 453 00:31:14,360 --> 00:31:18,920 Speaker 1: rather than focusing on flashy battles, the film centers on communication, 454 00:31:19,640 --> 00:31:25,040 Speaker 1: decoding the aliens' strange language to understand why they have come. Now, 455 00:31:25,040 --> 00:31:29,240 Speaker 1: there's good suspense, but the movie is actually quietly building 456 00:31:29,280 --> 00:31:34,080 Speaker 1: towards its philosophical core. It's a concept from linguistics known 457 00:31:34,120 --> 00:31:37,840 Speaker 1: as the Sapir-Whorf hypothesis, which I've talked about on 458 00:31:37,880 --> 00:31:42,160 Speaker 1: a couple of previous episodes. This hypothesis suggests that the 459 00:31:42,360 --> 00:31:45,920 Speaker 1: language you speak doesn't only allow you to express your thoughts, 460 00:31:46,000 --> 00:31:49,560 Speaker 1: but more than that, it shapes the way you think 461 00:31:49,640 --> 00:31:55,200 Speaker 1: and even possibly how you perceive reality itself. 
So in 462 00:31:55,240 --> 00:31:59,840 Speaker 1: the movie Arrival, this hypothesis becomes the lens through which 463 00:32:00,080 --> 00:32:03,120 Speaker 1: the entire story unfolds. So I won't give away the 464 00:32:03,160 --> 00:32:06,200 Speaker 1: spoiler of the movie, but it pivots on this point. 465 00:32:06,800 --> 00:32:11,640 Speaker 1: Once the linguist learns the aliens' language, she's able to 466 00:32:12,120 --> 00:32:17,880 Speaker 1: perceive and experience the world differently. Upon learning their language, 467 00:32:17,920 --> 00:32:22,000 Speaker 1: she now has powers that humans don't normally have. So 468 00:32:22,680 --> 00:32:25,520 Speaker 1: I think that's a good metaphor for thinking about our 469 00:32:25,760 --> 00:32:30,600 Speaker 1: moment with AI landing here like an alien species. Are 470 00:32:30,640 --> 00:32:34,880 Speaker 1: we going to be exposed to ideas and concepts that 471 00:32:35,080 --> 00:32:40,680 Speaker 1: expand our thinking? When we think about the arrival of AI, 472 00:32:41,240 --> 00:32:44,880 Speaker 1: it's tempting to frame it as a contest: will the 473 00:32:44,960 --> 00:32:48,640 Speaker 1: machine replace the worker and the scientist, and the gamer 474 00:32:48,720 --> 00:32:49,440 Speaker 1: and the composer? 475 00:32:50,280 --> 00:32:54,440 Speaker 2: But the more interesting story is not about competition. 476 00:32:55,000 --> 00:33:00,880 Speaker 1: It's about collaboration and how AI is going to stretch 477 00:33:00,920 --> 00:33:09,160 Speaker 1: the boundaries of human imagination. Go to eagleman dot com 478 00:33:09,160 --> 00:33:12,520 Speaker 1: slash podcast for more information and to find further reading. 479 00:33:13,240 --> 00:33:15,600 Speaker 1: Check out my newsletter on Substack and be a part 480 00:33:15,640 --> 00:33:18,560 Speaker 1: of the online chats there. 
And you can watch the 481 00:33:18,640 --> 00:33:21,800 Speaker 1: videos of Inner Cosmos on YouTube, where you can leave comments. 482 00:33:23,720 --> 00:33:28,840 Speaker 1: Until next time, I'm David Eagleman, and this is Inner Cosmos.