Speaker 1: What's it like to have a much lower IQ than you currently have, or to have a much higher IQ?

Welcome to the Inner Cosmos with me, David Eagleman. I'm a neuroscientist and an author at Stanford, and in these episodes we sail deeply into our three-pound universe to understand why and how our lives look the way they do.

Today's episode is about intelligence. What is it? What would it be like to have the intelligence of a mosquito, or a horse, or a squirrel? What would it be like to presumably understand only very basic things right around you, not doing sophisticated simulation of the future like we do as humans? And what can we say about the present and future of intelligence that is artificial?

Okay, so let's start with this question of what is it like to have a different level of intelligence. I see people post this question sometimes online on forums like Quora. Someone will write, I have an IQ of sixty-eight, what is it like to have a higher IQ? Now, first of all, I think this is an amazing question, because it acknowledges that not everyone is having the same experience on the inside, and so someone is taking the time to ask: what would it be like to have what is called a higher intelligence level? What's the experience of that?

Now, the interesting thing in life is that we can't run a control experiment on our own experience of the world, and so whatever IQ you have, you sort of have just that one experience of reality. But to get at this, let's start by thinking about what it would be like to have a much lower IQ than you do now. One way to get at this is to ask the question of what it would be like to be a squirrel. I'm choosing a squirrel just because I was watching one in my backyard yesterday, watching him run along the top of the fence and climb up the tree trunk and find a little scrap of food and look around nervously.
And a squirrel's cerebrum has about one hundred million neurons, while ours has about one hundred billion neurons, so a thousand times more. Now, size isn't everything, which we'll return to in a little bit; presumably the issue is the algorithm that's running. But we can watch the behavior of lots and lots of squirrels over lots of time, and it certainly doesn't seem like they're having the kind of capacity for thought that we do.

So I was watching this squirrel, and I thought, what would it be like to be able to jump around from branch to branch, but have no hope, presumably, of ever discovering that force equals mass times acceleration, or, for that matter, of ever being able to discover that E equals m c squared, or just basic things like how do you build a chair, or how do you think about the competition between Amazon and Netflix, or just the idea that the telephone lines they're running on top of are carrying megabytes of information flowing as zeros and ones as one member of our species communicates to another, or that, more broadly, we have airwaves and fiber optics carrying a flow of zettabytes of information in a massive ocean of data around us.

For the squirrel, none of this exists. For the squirrel, none of this is comprehensible. It's thinking about its acorn and where it hid its last one, and it's thinking about safety and maybe about mating. As far as we know, or as far as we can tell, which of course has its limitations, the squirrel is not ruminating on a play it saw last night by the squirrel equivalent of Shakespeare, and what it means about the aspirations of a monarch and the cruelty inherent in the competition for power. It's not thinking about how to get to the moon or how to build the next vaccine. Now, that's not to say that there aren't specialized kinds of intelligence. Every move that the squirrel makes along the tree branch is very impressive.
There is no way that I could hope to stick landing after landing like that, from the branch of one tree to the next to the next. But the squirrel is capable of doing that. Yet even though it performs this incredible ballet in the face of gravity, it's presumably never going to get to the point where it can characterize gravity in equation form to understand how it should move if it were on a different planet, or conceptualize gravity as curvature in the fabric of space-time. Although humans are capable of getting that in high school, presumably squirrels of any age would find the concept well beyond what their brains could even hope to have a flicker of.

And when you look at the animal kingdom around us, we find lots of creatures presumably occupying very different levels of intelligence. So a bat has, let's call it, ten million neurons, and again, it's not the size but the structure that matters.

Speaker 2: But presumably they are not writing the equivalent of bat books or building a little bat Internet where they can capture for eternity everything that every generation of bats before them has learned.

Speaker 1: A fish has only about one hundred thousand neurons; a house cricket has fifty thousand neurons. A common fruit fly has only about twenty-five hundred neurons in the equivalent of its brain. So if you're a fruit fly, you simply don't have the notion of seeing the moon in the sky and thinking, okay, that's an orbiting sphere, and I'm going to derive a plan with my fellow flies to get there by building new technologies that we can fit our little bodies inside of so that we can survive in low oxygen. And what if you're a mosquito? With your little mosquito brain, all you know is the mad attraction to certain odors which indicate a warm-blooded animal, and the pleasure of slipping your proboscis in and satisfying your thirst with the warm liquid.
You presumably don't even have the concept of blood, that it has plasma and cells that are specialized for grabbing oxygen, and all kinds of useful machinery for defending the host animal, and so on and so on.

So I've thought about these issues for years, and I think there's a way to understand what it is like to be so limited, the way that the mosquito, if it could even have a concept of a human, might look at the human and think, oh, my gosh, I can't believe they understand all that stuff, and they do it all at once. And the reason we can understand what it's like to be limited is because we are up against problems all the time that we're just smart enough to recognize, but not yet smart enough to solve.

Take the origin of life. This is a massively difficult problem. I worked with brilliant scientists like Sydney Brenner and Francis Crick at the Salk Institute, who worked on theories of the origin of life. And these were among the smartest biologists of the twentieth century. And even they were like little fruit flies when trying to tackle that problem. They were constantly aware of the enormous gaps in whatever story they were hoping to put together, because we just don't have much of any lasting data from the past three point eight billion years, and we're talking about how trillions of atoms might come together in just the right way over time to form things that can self-replicate. It's the kind of problem that, when you really start to reach your arms down into it, you realize that even very smart human brains just aren't equipped for a problem of that size.

Or just take something like thinking about the cosmos, with one hundred billion galaxies in it and one hundred billion stars inside each of those galaxies, and uncountable numbers of planets rotating around those stars, and then trying to picture or answer whether there is life elsewhere in the galaxy and what it would look like.
It's clear that our human brains aren't so good at grokking numbers like that. Even though we can estimate the numbers and we can use words to talk about them, we're not really capable of understanding them. And it's the same thing when we study neuroscience. Of course, we've got in the ballpark of eighty-six billion neurons, and each one of those is connected to so many of its neighbors, about ten thousand, that if you took a cubic millimeter of brain tissue, there are more connections in there than there are stars in the Milky Way galaxy. This thing that we're facing on a daily basis, this thing, the brain, that tens of thousands of people on the planet study, this three-pound organ that we have completely cornered: it is so vastly complex that there is no way for a human brain to understand itself.

And in neuroscience we have foundational problems that we can't answer, like why something feels good. Take an orgasm: why does an orgasm feel good? We can, of course, tell the evolutionary story, which is that it benefits the species to reproduce, so it is advantageous for it to feel good. But the question from a neurobiology point of view is how do you build a network that feels anything. Take a big, impressive artificial neural network like GPT-4. It can do incredibly impressive work by taking some prompt in written language and generating words that would statistically go with that prompt, and so on. It's mind-blowing how well it appears to do. But GPT-4 presumably can't feel pain or pleasure. There's nothing about one of the sentences that it generates that it appreciates as hilarious or tear-jerking. It doesn't have any capacity to feel concerned about its survival or demise when the programmers turn the computer off. It's just running numbers down a long, complex algorithmic network, and that's it.
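To give a cartoon of what "generating words that would statistically go with that prompt" means, here is a minimal sketch in Python. It is only a toy: a bigram counter over an invented corpus, nothing like the actual architecture of GPT-4.

```python
import random
from collections import Counter, defaultdict

random.seed(1)

# Count which word follows which in a tiny invented corpus, then sample
# continuations in proportion to those counts. Real language models are
# vastly more complex; this just shows "statistically goes with".
corpus = ("the squirrel ran along the fence and the squirrel climbed "
          "the tree and the squirrel found a scrap of food").split()

follows = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    follows[prev][word] += 1

def generate(word, length=8):
    out = [word]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:          # no recorded continuation: stop
            break
        words, counts = zip(*options.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the squirrel climbed the tree and the squirrel ran"
```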
So how do we ever come to feel something? This is perhaps the central unsolved question in neuroscience. It's usually summarized as consciousness, and specifically the hard problem of consciousness, which is to say: why does all this signaling moving through networks of cells feel like something? How could you ever program a computer to feel pain, or to detect some wavelength of electromagnetic radiation and experience it as purpleness, or to enjoy the beauty of a sunset? These are totally unsolved questions in neuroscience, and presumably there are whole classes of problems that we are not even smart enough to realize are questions that we could be asking.

So, despite the incredible, pride-filling progress of our species, our ignorance vastly outstrips our knowledge, and that affords us just a little bit of insight into the limitations of our brains, the glass walls of our fishbowl, and lets us even very roughly imagine what it would be like to have the intelligence of a squirrel. And these are the things that I was thinking about when I wrote a fictional short story called Descent of Species in my book Sum, and so I'm going to read it here, and then I'll come back to the question of intelligence.

In the afterlife, you are treated to a generous opportunity. You can choose whatever you would like to be in the next life. Would you like to be a member of the opposite sex, born into royalty, a philosopher with bottomless profundity, a soldier facing triumphant battles? But perhaps you've just returned here from a hard life. Perhaps you were tortured by the enormity of the decisions and responsibilities that surrounded you, and now there's only one thing you yearn for: simplicity. That's permissible. So for the next round, you choose to be a horse. You covet the bliss of that simple life: afternoons of grazing in grassy fields, the handsome angles of your skeleton and the prominence of your muscles, the peace of the slow-flicking tail, or the steam rifling through your nostrils.
As you lope across snow-blanketed plains, you announce your decision. Incantations are muttered, a wand is waved, and your body begins to metamorphose into a horse. Your muscles start to bulge. A mat of strong hair erupts to cover you like a comfortable blanket in winter. The thickening and lengthening of your neck immediately feels normal as it comes about. Your carotid arteries grow in diameter, your fingers blend hoofward, your knees stiffen, your hips strengthen. And meanwhile, as your skull lengthens into its new shape, your brain races in its changes. Your cortex retreats as your cerebellum grows. The homunculus melts from man to horse; neurons redirect, synapses unplug and replug on their way to equestrian patterns. And your dream of understanding what it is like to be a horse gallops toward you from the distance. Your concern about human affairs begins to slip away, your cynicism about human behavior melts, and even your human way of thinking begins to drift away from you.

Suddenly, for just a moment, you are aware of the problem you overlooked. The more you become a horse, the more you forget the original wish. You forget what it was like to be a human wondering what it was like to be a horse. This moment of lucidity does not last long, but it serves as the punishment for your sins: a Promethean, entrail-pecking moment, crouching half horse, half man, with the knowledge that you cannot appreciate the destination without knowing the starting point. You cannot revel in the simplicity unless you remember the alternatives. And that's not the worst of your revelation. You realize that the next time you return here with your thick horse brain, you won't have the capacity to ask to become a human again. You won't understand what a human is.
Your choice to slide down the intelligence ladder is irreversible. And just before you lose your final human faculties, you painfully ponder what magnificent extraterrestrial creature, enthralled with the idea of finding a simpler life, chose in the last round to become a human.

So in the story, I try to give a way to think about the possibility that something could be much smarter than us, that we are not at the top of the ladder but maybe somewhere in the middle, and that to some other creatures in the universe, we would appear to be like the squirrels are to us. It certainly could be that in this vast cosmos there are intelligences that are so much higher than ours that we lack even a good imagination or vocabulary to paint these creatures, in the same way that presumably the squirrels would be unable to give a reasonable description of us and what we're up to. Maybe these extraterrestrials can understand the entirety of cosmic evolution by the time they're in second grade, and they can keep in mind the trillions of animal species on this planet and all the other planets, and keep track of all the interactions, and therefore understand the biological history and future of a planet at depth, while we mostly just use the word evolution to capture something that we can't comprehend at a deep level.

Now, let me put on the table that I'm completely uncompelled by the claims that there are UFOs, or, as they're called nowadays, UAPs. But I have a very smart friend named Kevin who told me the other day that he has no problem believing that that's true. Now, I'm not defending his position, but his stance was simply that if you imagine the aliens are much smarter than we are, then the particulars of what we're looking for, some Morse code signal or some take-me-to-your-leader sign, that's actually the wrong thing for us to be looking for.
Because if we imagine some civilization that is, say, totally different from us and three million years ahead of us, and they are to us as we are to the squirrels, it's certainly not difficult to imagine the possibility that we are simply not smart enough to construct a good model of them and therefore even recognize them. Now, you might assume that if they're much smarter than us, then they could dumb themselves down to communicate with us, the way that we sort of know how to talk with a child at a child's level. But our ability to model lesser intelligences is still pretty terrible. I mean, you still have no idea how to go out in your yard and communicate with squirrels. Just try having a meaningful conversation. Good luck. We're so much smarter than a squirrel, but we have no idea how to plug into their neural networks. Or just go to a zoo and try to have a conversation with a panda bear and communicate to him, take me to your leader. Or do that with a camel or a dolphin. You get the point, which is that just because you are smarter doesn't necessitate that you know how to talk to these other animals.

And this is the situation that we could hypothetically be in with extraterrestrial civilizations: that they are here even though we don't recognize that they are there, because we don't even have the capacity to imagine them, and they have no meaningful way to communicate with us, because our needs and desires are so different from what they can even understand.

Now, this lack of communication across species or across planets is really brought into relief when we consider that intelligence is not one thing; there are many different behaviors that we might put under the umbrella of intelligence. To this point, in nineteen seventy-four the philosopher Thomas Nagel wrote an essay called What Is It Like to Be a Bat?, because fundamentally, being a bat is a pretty different experience than being a human.
If you are a blind echolocating bat, you emit chirps in the dark, and you receive back echoes of your chirps, and you translate those air compression waves into a three-dimensional picture of what is in front of you. You make a mental map of your surroundings this way. So Nagel asked this question of what it's like to be a bat in the context of consciousness, as in: given that we have such a different sensory world, is there any way that we could understand what it would be like to have such a different way of detecting and sensing the world? But this same question could be applied to what we're thinking about here, which is intelligence.

Intelligence in the context of a bat allows the bat to navigate around and find food, and talk with others, and adapt when the conditions change. But it's hard to directly compare it to human intelligence, because bats have traits and adaptations that are very sophisticated in their own way. Like I said, with echolocation they're creating this three-dimensional map of their space. They're using auditory information in real time, and they can have such precision that they can detect an object as thin as a hair as they fly around, and they can figure out the size and shape and speed of objects, like a little moth flying around, so they can zoom in on it and grab it. And they also have sophisticated social behavior, but presumably about different social things than what we care about. And we know that they do all kinds of problem solving, but it strikes me that it's really difficult to know what sorts of problems they solve, because some of the problems are so foreign to us that we don't even know how to think about them.

So all this leads us back around to the main question for today, which is: what is intelligence? How do we define it?

Speaker 2: Well?

Speaker 1: As it turns out, this has not been an easy question for scientists, and it has come with lots of debate.
And this is one of those things where we all have an intuition about what we mean by the word. But the trick from a neuroscience perspective is: how do you rigorously define it, and therefore, how do you study it? When we talk about intelligence, let's say just human intelligence, what are we even talking about? We all have a sense of what an intelligent person is, but what is happening in their brain that is different from someone else who you might think is not so intelligent? How do giant networks of individual neurons, billions of them, manipulate information that you've taken in before, and simulate possible futures, and evaluate those, and throw out all the information that doesn't matter? And do people who are intelligent store knowledge in a different way? Maybe not categorically different, but just perhaps in a way that's more distilled or more easily retrievable. So these are the kinds of questions we're facing.

Now, the first thing to appreciate about intelligence in the brain is that size does not seem to matter. Andre the Giant had a brain volume that might have been eight times the size of yours, but he was probably not eight times smarter than you. In fact, what is so remarkable is that brains that are enormous, like an elephant's, and brains that are very tiny, like a little mouse brain, can both tackle very complex problems, like foraging for food and setting up a home and mating and defending themselves against predators. The Spanish neuroscientist Santiago Ramón y Cajal, like many neuroscientists before and after him, was really struck by this thought, and he had this beautiful comparison of large and small brains to large and small clocks: Big Ben and a wristwatch both tell the time with equal accuracy despite the size difference. So all this is to say that when we stare at brains, the secret of intelligence is not immediately obvious just from looking at the brain.
Now, when we look across species, we can see what we might mean by intelligence, for example, good problem-solving skills. Some primates, like orangutans, are really good at using tools, while other primates, like bonobos, are really good at social intelligence. If you look at porpoises, you find that they are better problem solvers and do much more than, say, other swimmers like catfish. And when we examine humans, we see that somebody can be a genius in one domain but quite bad at another. Ronaldo is a genius at soccer, but he might not be so great at differential equations. I recently saw a video of a kid who can do a Rubik's cube in about three seconds, but he's autistic and therefore is not particularly good at anything involving social interaction.

So how can we put a measure to what we are talking about here? About a century and a half ago, people started working on the question of how you could quantify this. The British scientist Sir Francis Galton was one of the first that I know of who said we should be able to measure intelligence. His idea was that you could quantify it by measuring things like the strength of someone's eyesight and hearing, or the strength of their grip. So that approach didn't last long, but by nineteen oh five, two scientists, Alfred Binet and Théodore Simon, built the Binet-Simon test as a way of quantifying intelligence with a number. And then in nineteen sixteen, here at Stanford University, there was an educator named Lewis Terman who developed the test further, and he renamed it the Stanford-Binet test, which you might have heard of, because it is still used now. These sorts of tests allow us to put a number on something, but we still don't know what exactly we're measuring.
The psychologist Charles Spearman was intrigued by this question, and he made an observation, which is that if you do well on one task, something like verbal skills, you tend to also do well at other tasks, like spatial skills. So these things correlated, and he speculated that there was some sort of general intelligence involved here, and so he used the letter G for this idea of a general intelligence factor, like a general skill set of the brain. And other researchers noticed this correlation also between very different sorts of tasks, like memory and perception and language and solving new problems and pattern recognition and a whole bunch of others. So it still wasn't clear what intelligence is, but it was clear that these things correlated, and so in nineteen twenty-one a researcher wrote that while it is difficult to define precisely what intelligence is, intelligence is what the tests test. So some people felt that there's one thing, this G, that underlies lots of different skills, and others felt that maybe these are completely separate things and intelligence is not one thing. So that's the debate that got rolling over a century ago, and it remains an unsolved issue.

The fact is that if intelligence were just one thing, you might expect to sometimes see a small bit of brain damage where someone loses skills across different types of intelligence. Or, with the introduction of brain imaging some decades ago, we might be able to see a single small network becoming active even with very different problems. But interestingly, this is still unresolved, because some researchers ask participants to do very different kinds of tasks, like verbal and perceptual and spatial things, while their brain is getting scanned, and they find that all of these tasks lead to activity in an area called the lateral frontal cortex. So it might be interpreted to support the unitary intelligence hypothesis, because you're seeing one area becoming active even when people are doing different kinds of tasks.
But on the other hand, we're always faced with the problem that our current brain-reading technology only lights up areas where there's a lot of activation, and it doesn't catch the areas that are more diffuse, where the real detailed action might be happening. And there's also an issue that highly intelligent people find particular tasks less challenging, and so they often show less activity in the frontal cortex, not more. And so it may be that even with our terrific technology, it's still a little bit too crude to tell us what intelligence is by simply going around and looking for a spot or a collection of spots in the brain. This is in the same way that you're not going to look at ChatGPT and say, ah, what makes it intelligent are these few nodes here out of the billions of nodes. Instead, it's a function of the whole of the activity running through the enormous system. So it may turn out that intelligence is not going to be captured by a single brain area or even a system. For all we know, it might not even be about neurons, but about what's going on at the molecular level inside of neurons, which means we might just be looking for some correlate at the level of neurons.

Now, all that's speculative, but I just want to make clear that often in neuroscience we are like the drunk looking for the keys under the streetlight because the lighting is better there, even though we dropped our keys over there. Our technology has its limitations, and we often gravitate towards the streetlight and ask if we happen to be able to find the keys there. And sometimes that strategy works, and sometimes it doesn't.

Now, part of the challenge in asking what intelligence is is that the word probably tries to hold up too much weight by itself, because what we call intelligence is almost certainly made up of multiple facets.
For example, some people break this down into analytic intelligence, like you use in math problems, or creative intelligence, like writing a caption for a cartoon, or practical intelligence, like how to operate well in the world. So one question is whether these different categories of intelligence truly represent different things with fence lines around them, or whether they're underpinned by the same mechanisms in the brain, or overlapping mechanisms. But the problem is even trickier than that, because even within any of these categories, we still have to answer questions like how knowledge gets stored and retrieved, how it can get restructured, how it can get erased, and so on. So the question of what intelligence is has attracted scientists throughout the ages to propose all kinds of different answers, none of which may be mutually exclusive, but they're all different angles on answering what it is when somebody is intelligent.

So let's look at some proposals. One proposal is that intelligence has to do with squelching distractors. Technically, this is called resolving cognitive conflict. So, for example, let's say we're playing the Simon Says game, where I say, Simon says, look to your left, and then you do it. But let's say I say, lift your arm, but I don't preface it with Simon says. Then what you're supposed to do is override your reflex to lift your arm. This is an example where you'd have cognitive conflict. The way neuroscientists study this is, for example, by using something called a three-back task. Imagine you're watching a series of faces getting presented on the screen. First you see Tom Cruise, and then you see Beyonce, and then you see Taylor Swift, and then you see Anthony Hopkins, and so on. Your job is simply to say when you see a face that matches the face that you saw three faces ago, in other words, three back. If you then see Emma Thompson and then Taylor Swift again, you'd say, yes, Taylor Swift matched what I saw three faces ago.
But if you see Zendaya and then Jennifer Lawrence and then Zendaya again, that's a distractor, because her face was only two ago, and so you're supposed to hold your tongue, or specifically, not press your button. So to perform this task requires not only a small window of working memory, but you have to squelch distractors. You have to squelch faces that matched what was two faces ago, or four faces ago, or five faces ago. You can only hit the button when the face matches what was three ago. Now, you run this test on a whole bunch of people with different levels of G, the general intelligence factor, and what you find is that people with a high G are better at the task, in large part because they don't respond to the distractors. When you do this in brain imaging, you find that particular areas come online, like the anterior cingulate cortex and the lateral prefrontal cortex, and these areas seem to be necessary for overriding the cognitive conflict.
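To make the three-back logic concrete, here is a minimal sketch in Python. The face list and the helper function are illustrative only, not taken from any actual study protocol.

```python
# A minimal sketch of the three-back rule described above.

def three_back_targets(stimuli, n=3):
    """Indices where a participant should respond: the current face
    matches the one shown n faces earlier."""
    return [i for i in range(n, len(stimuli)) if stimuli[i] == stimuli[i - n]]

# Position 5 (Taylor Swift) matches position 2: a three-back target.
# Position 8 (Zendaya) matches position 6, only two back: a lure to squelch.
stream = ["Cruise", "Beyonce", "Swift", "Hopkins", "Thompson",
          "Swift", "Zendaya", "Lawrence", "Zendaya"]

print(three_back_targets(stream))  # [5]: press the button at position 5 only
```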
So that's one idea for what intelligence is. But other studies suggest no, it's not about conflict resolution; instead, intelligence is about how many things you can hold in working memory. So, for example, our visual memory can only hold, let's say, three or four objects in mind at any given time. So let's imagine that I show you some colored shapes, like a green triangle and a red circle and a blue square, and then a moment later I show you a similar image, and I ask you, were any of these shapes or colors different? And you can probably do this for three or four objects. But as it turns out, some people are only able to retain the information from one or two objects, and other people can hold more, let's say five objects. And so some people have suggested that that is really related to intelligence, with the idea being that critical reasoning depends on how many things you can hold in your working memory. If you can hold more things in your head at any one time, you'll be better able to manipulate things for solving problems. So again, people have done brain imaging with EEG and fMRI and found a little area in the posterior parietal cortex that seems to act as a memory bottleneck and correlates with what different people can hold in mind. Now, it seems likely that working memory capacity won't be the final unlock to the question of intelligence, but it probably plays a role.

So what other ideas are there? Well, as it turns out, people in the late nineteen nineties got excited about the idea of forming associations in the brain. And there's a particular type of receptor in the brain called an NMDA receptor. Don't worry about the details here; I'll link a paper on the website. But you can genetically engineer this receptor in a mouse and show that the mouse can link things more strongly, like this light predicts food, or this is the location where some reward is located. So a scientist named Joe Tsien and his colleagues at Princeton engineered a strain of mouse to have more of this NMDA receptor subunit. And this hit the news at the end of the nineties, because these mice, called Doogie mice after the TV show Doogie Howser, M.D., which was about a really smart kid, outperformed normal mice in recognizing things they had seen before, or swimming their way through a pool of milky water to remember where a hidden platform was. Now, this news made a real splash when it came out, because the idea was that, wow, we've just invented intelligent mice. But we do have to ask whether we think the Doogie mice are more intelligent just because they can do these laboratory tests better. After all, intelligence is more than simply nailing down associations. And the other thing to keep in mind is that all animals have to balance the things they know against exploring new possibilities. This is known as the balance between exploitation and exploration.
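To make that trade-off concrete, here is a minimal sketch in Python of a toy epsilon-greedy forager, with invented payoff numbers: an agent that never explores keeps working the old patch after the world shifts, while an agent that explores a little catches the change.

```python
import random

random.seed(0)

def forage(epsilon, steps=1000, alpha=0.1):
    """Epsilon-greedy foraging between two patches whose payoffs swap midway."""
    patches = ["A", "B"]
    value = {"A": 0.0, "B": 0.0}   # running reward estimates
    total = 0
    for step in range(steps):
        # Halfway through, the food moves: the world changes under the forager.
        p = {"A": 0.8, "B": 0.1} if step < steps // 2 else {"A": 0.1, "B": 0.8}
        if random.random() < epsilon:
            patch = random.choice(patches)        # explore
        else:
            patch = max(patches, key=value.get)   # exploit the best estimate
        reward = 1 if random.random() < p[patch] else 0
        value[patch] += alpha * (reward - value[patch])  # recency-weighted update
        total += reward
    return total

print("pure exploitation:", forage(epsilon=0.0))  # never samples B, misses the swap
print("10% exploration: ", forage(epsilon=0.1))   # discovers the swap, earns more
```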
573 00:38:37,080 --> 00:38:39,560 Speaker 1: The reason animals have to balance this is because the 574 00:38:39,600 --> 00:38:42,960 Speaker 1: world changes and you never know exactly how and when 575 00:38:43,000 --> 00:38:46,160 Speaker 1: it's going to change. So if you are an animal 576 00:38:46,200 --> 00:38:49,920 Speaker 1: who's used to finding worms under the green rocks, you 577 00:38:49,960 --> 00:38:53,120 Speaker 1: want to spend some of your time exploring under the 578 00:38:53,120 --> 00:38:55,319 Speaker 1: blue rocks and the red rocks too, because you never 579 00:38:55,480 --> 00:38:58,120 Speaker 1: know when things in the world are going to change. 580 00:38:58,840 --> 00:39:03,840 Speaker 1: So the Doogie mice seemed to be more about exploiting 581 00:39:03,920 --> 00:39:08,319 Speaker 1: knowledge that they learned and less about exploration. But that's 582 00:39:08,320 --> 00:39:11,000 Speaker 1: not necessarily a good thing. It depends on what happens 583 00:39:11,040 --> 00:39:16,760 Speaker 1: in the world. So just forming stronger associations is probably 584 00:39:16,800 --> 00:39:21,319 Speaker 1: not going to be the full answer to what intelligence is. Now, 585 00:39:21,360 --> 00:39:23,960 Speaker 1: there's another pathway we can sniff down when we're looking 586 00:39:23,960 --> 00:39:28,600 Speaker 1: for the root of intelligence, and that is the Eureka moment. 587 00:39:28,800 --> 00:39:33,800 Speaker 1: That is what happens when two concepts suddenly fit together. 588 00:39:34,600 --> 00:39:36,920 Speaker 1: Like I remember the moment when I was a kid 589 00:39:37,680 --> 00:39:40,719 Speaker 1: when I learned that fog is just the same thing 590 00:39:40,760 --> 00:39:42,960 Speaker 1: as a cloud, but it's low to the ground. And 591 00:39:43,000 --> 00:39:46,080 Speaker 1: it was a physical sensation for me to have these 592 00:39:46,120 --> 00:39:51,239 Speaker 1: two concepts fit together. Or if you're a detective, you 593 00:39:51,320 --> 00:39:53,799 Speaker 1: might have a bunch of clues on your desk and 594 00:39:53,840 --> 00:39:58,080 Speaker 1: then suddenly, aha, it all coalesces into a narrative because 595 00:39:58,080 --> 00:40:01,279 Speaker 1: all the facts fit. Now, what has just happened in 596 00:40:01,360 --> 00:40:04,480 Speaker 1: your brain? And how does your brain know and alert 597 00:40:04,520 --> 00:40:08,120 Speaker 1: you that a fit has been achieved? This is the 598 00:40:08,600 --> 00:40:12,520 Speaker 1: restructuring of information. And I just want to make clear: 599 00:40:12,600 --> 00:40:16,359 Speaker 1: we are nothing like a computer that takes in files 600 00:40:16,440 --> 00:40:23,040 Speaker 1: of facts. Instead, we're always structuring and restructuring information. Now, 601 00:40:23,040 --> 00:40:24,839 Speaker 1: one of the places we can see that is when 602 00:40:25,440 --> 00:40:29,239 Speaker 1: a monkey learns a task. What you'll notice is that 603 00:40:29,360 --> 00:40:31,880 Speaker 1: you can't tell the monkey the rules of the task. 604 00:40:31,920 --> 00:40:34,000 Speaker 1: They have to figure it out themselves by doing it 605 00:40:34,080 --> 00:40:37,680 Speaker 1: over and over and getting reinforced with, let's say, juice 606 00:40:37,680 --> 00:40:40,800 Speaker 1: in their mouth or something like that, over hundreds of trials. 607 00:40:41,280 --> 00:40:43,160 Speaker 1: And monkeys can learn this way, and they can get 608 00:40:43,200 --> 00:40:48,480 Speaker 1: better through time.
Their performance just rises like a shallowly 609 00:40:48,640 --> 00:40:51,799 Speaker 1: sloped line. But if you give the same task to 610 00:40:51,880 --> 00:40:56,719 Speaker 1: an undergraduate, something very different happens. They'll try a few 611 00:40:56,800 --> 00:41:00,600 Speaker 1: things and then they'll suddenly get it, and their performance 612 00:41:00,800 --> 00:41:04,279 Speaker 1: jumps up. Suddenly they have an aha moment; they have 613 00:41:04,320 --> 00:41:08,840 Speaker 1: a eureka. Now, this observation implies that humans are doing 614 00:41:08,920 --> 00:41:12,880 Speaker 1: something that monkeys can't. Perhaps this has something to do 615 00:41:12,920 --> 00:41:17,000 Speaker 1: with restructuring knowledge, or perhaps the human student gets to 616 00:41:17,120 --> 00:41:20,440 Speaker 1: try out lots of hypotheses and evaluate them and then 617 00:41:20,560 --> 00:41:25,000 Speaker 1: restructure things accordingly. But whatever the issue is, this certainly 618 00:41:25,040 --> 00:41:27,360 Speaker 1: seems to play a role in what we think of 619 00:41:27,440 --> 00:41:32,279 Speaker 1: as intelligence. And it also suggests that animal models of 620 00:41:32,320 --> 00:41:36,120 Speaker 1: intelligence are going to be too limited for some of 621 00:41:36,160 --> 00:41:41,160 Speaker 1: the forms of sophisticated reasoning that we care about. And 622 00:41:41,200 --> 00:41:43,160 Speaker 1: I'll give you another thing that we might look for. 623 00:41:43,480 --> 00:41:47,120 Speaker 1: What if intelligence is about the ability to make good 624 00:41:47,320 --> 00:41:51,720 Speaker 1: predictions about the world? In previous episodes, I've talked about 625 00:41:51,719 --> 00:41:56,920 Speaker 1: the internal model, and I've emphasized that the only reason 626 00:41:57,080 --> 00:42:00,840 Speaker 1: the brain builds an internal model is so that we 627 00:42:00,880 --> 00:42:05,480 Speaker 1: can make better predictions about the future. So emulation of 628 00:42:05,520 --> 00:42:09,440 Speaker 1: possible futures is a giant part of what intelligent brains do. 629 00:42:10,280 --> 00:42:14,200 Speaker 1: As the philosopher Karl Popper said, this is what allows 630 00:42:14,360 --> 00:42:19,719 Speaker 1: our hypotheses to die in our stead. My friend and 631 00:42:19,760 --> 00:42:23,000 Speaker 1: colleague Jeff Hawkins has emphasized this for a couple of decades: 632 00:42:23,320 --> 00:42:27,200 Speaker 1: that we only have memory in order to make predictions. 633 00:42:27,480 --> 00:42:29,960 Speaker 1: So the idea is that you write down things that 634 00:42:30,080 --> 00:42:33,200 Speaker 1: happen to you that seem salient, and you use those 635 00:42:33,360 --> 00:42:38,719 Speaker 1: building blocks to springboard into possible futures. As Jeff puts it, 636 00:42:39,239 --> 00:42:43,880 Speaker 1: intelligence is the capacity of the brain to predict the 637 00:42:44,000 --> 00:42:48,560 Speaker 1: future by analogy to the past, and we can find 638 00:42:48,600 --> 00:42:52,760 Speaker 1: lots of evidence for that in examples of brain damage, 639 00:42:52,760 --> 00:42:57,000 Speaker 1: where people lose the ability to store memories and as 640 00:42:57,040 --> 00:43:00,759 Speaker 1: a result are unable to simulate the future. So this 641 00:43:00,800 --> 00:43:04,759 Speaker 1: whole memory-prediction framework almost certainly plays a role in intelligence. 642 00:43:05,000 --> 00:43:07,520 Speaker 1: But there are a lot of unanswered questions here.
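Jeff's actual framework is far richer than this, but the bare slogan, predicting the future by analogy to the past, can be written down in a few lines: store what happened, find the most recent stretch of the past that matches the present, and predict whatever followed it back then. A toy sketch, with a made-up memory string:

```python
# A toy "predict by analogy to the past" sketch, an illustration of the slogan
# rather than Hawkins's actual model. Find the longest recent stretch of memory
# that matches the present context, and predict what came next back then.

def predict_next(memory, context):
    for length in range(len(context), 0, -1):              # longest analogy first
        suffix = context[-length:]
        for i in range(len(memory) - length - 1, -1, -1):  # most recent first
            if memory[i:i + length] == suffix:
                return memory[i + length]                  # what followed last time
    return None                                            # no analogy in memory

memory = list("the cat sat on the mat. the cat ")
print(predict_next(memory, list("the cat ")))              # -> 's', as in "sat"
```

A real brain faces an enormously harder version of this, predicting a world rather than a character string, and that is where the open questions begin.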
For example, 643 00:43:08,200 --> 00:43:12,600 Speaker 1: there are a huge number of possible future moves. How 644 00:43:12,640 --> 00:43:16,600 Speaker 1: does the brain simulate them all? Perhaps an intelligent simulator 645 00:43:17,000 --> 00:43:20,200 Speaker 1: saves time by developing tricks so that you don't have 646 00:43:20,320 --> 00:43:25,480 Speaker 1: to simulate everything. So there are lots of proposals and 647 00:43:25,560 --> 00:43:30,400 Speaker 1: possibilities for what intelligence is in the brain, and probably 648 00:43:30,440 --> 00:43:33,120 Speaker 1: there are many other possibilities that we haven't even begun 649 00:43:33,200 --> 00:43:36,879 Speaker 1: to explore or know how to explore. So I want 650 00:43:36,920 --> 00:43:40,200 Speaker 1: to pose a question about intelligence, and this one is 651 00:43:40,239 --> 00:43:44,239 Speaker 1: really important, and that is the question of why do 652 00:43:44,400 --> 00:43:50,160 Speaker 1: we have lions in zoos? After all, a lion is 653 00:43:50,320 --> 00:43:54,480 Speaker 1: so much more powerful than you are. A lion can 654 00:43:54,680 --> 00:43:58,400 Speaker 1: easily kill a human. It has these razor-sharp claws, 655 00:43:58,480 --> 00:44:03,400 Speaker 1: and its body is all muscle and speed, and yet 656 00:44:04,200 --> 00:44:09,120 Speaker 1: we put lions in zoos. How? Well, there's only one 657 00:44:09,160 --> 00:44:13,600 Speaker 1: thing we have over lions, and that is intelligence, and 658 00:44:13,840 --> 00:44:21,160 Speaker 1: intelligence enables control. We don't brute-force the lion into 659 00:44:21,160 --> 00:44:25,080 Speaker 1: the cage; we don't wrestle it in by hand. Instead, we 660 00:44:25,160 --> 00:44:29,839 Speaker 1: do things like set up traps, or develop chemicals that 661 00:44:29,920 --> 00:44:34,000 Speaker 1: happen to interact with their neurochemistry and put them to sleep, 662 00:44:34,080 --> 00:44:37,560 Speaker 1: and then we package that into a syringe and use 663 00:44:37,680 --> 00:44:40,960 Speaker 1: explosives to launch it really quickly down a metal barrel 664 00:44:41,000 --> 00:44:44,399 Speaker 1: so it punctures their skin. All of these things are 665 00:44:44,520 --> 00:44:49,160 Speaker 1: moves that the lion cannot possibly predict, because it couldn't 666 00:44:49,200 --> 00:44:54,880 Speaker 1: possibly conceive of them. And that's what makes our 667 00:44:55,080 --> 00:45:00,840 Speaker 1: contemporary discussions about AI so salient, because often when someone is 668 00:45:00,880 --> 00:45:04,400 Speaker 1: thinking about the question of whether AI could control humans, 669 00:45:05,080 --> 00:45:09,120 Speaker 1: they think about physically manhandling us with robots. But that 670 00:45:09,160 --> 00:45:12,760 Speaker 1: seems really unlikely, because it's so hard to build physical robots. 671 00:45:12,760 --> 00:45:16,880 Speaker 1: You're constantly tending to the upkeep of the robot machinery, 672 00:45:16,880 --> 00:45:19,360 Speaker 1: you're trying to keep all the pieces and parts together 673 00:45:19,440 --> 00:45:22,719 Speaker 1: and not have a wire pop somewhere. But the important 674 00:45:22,760 --> 00:45:26,840 Speaker 1: concept to get straight is that for AI to control humans, 675 00:45:26,880 --> 00:45:33,840 Speaker 1: it doesn't need brute force. Why? Because intelligence enables control.
676 00:45:34,760 --> 00:45:38,959 Speaker 1: Could we imagine a scenario in which the AI does 677 00:45:39,040 --> 00:45:43,680 Speaker 1: something that we can't predict because we can't possibly conceive 678 00:45:43,719 --> 00:45:47,000 Speaker 1: of it? Sure. And the interesting part is that there's 679 00:45:47,040 --> 00:45:50,440 Speaker 1: a whole space of scenarios that we can conceive of 680 00:45:50,520 --> 00:45:53,600 Speaker 1: and write science fiction novels about, but there's also the 681 00:45:53,680 --> 00:45:59,320 Speaker 1: space of the unknowns. Now, I'm not suggesting that modern 682 00:45:59,360 --> 00:46:01,960 Speaker 1: AI is going to move in that direction, because at 683 00:46:02,000 --> 00:46:05,960 Speaker 1: the moment it's just doing very sophisticated statistical games and 684 00:46:06,040 --> 00:46:10,040 Speaker 1: it doesn't have any particular desire for power. But I 685 00:46:10,080 --> 00:46:13,200 Speaker 1: think for sure things are going to get strange as 686 00:46:13,280 --> 00:46:18,240 Speaker 1: we grow into a world with another intelligence, one which 687 00:46:18,440 --> 00:46:21,000 Speaker 1: has read every single book and blog post ever written 688 00:46:21,080 --> 00:46:24,000 Speaker 1: by humans, and knows every map that we've ever made, 689 00:46:24,280 --> 00:46:28,400 Speaker 1: from streets to chemical signaling, and can create a video 690 00:46:28,560 --> 00:46:32,200 Speaker 1: of any new idea, and can simulate new combinations of 691 00:46:32,320 --> 00:46:35,880 Speaker 1: machines in fractions of a second. So this is the 692 00:46:35,920 --> 00:46:40,799 Speaker 1: reason it's important to understand what intelligence is when we 693 00:46:40,880 --> 00:46:45,520 Speaker 1: talk about artificial intelligence. Now, earlier this year, I published 694 00:46:45,560 --> 00:46:51,080 Speaker 1: a paper about how we might meaningfully assess intelligence in AI, 695 00:46:51,600 --> 00:46:55,000 Speaker 1: and I discussed this in episode seven. In other words, 696 00:46:55,280 --> 00:46:58,320 Speaker 1: how would we know if some artificial neural network like 697 00:46:58,440 --> 00:47:04,440 Speaker 1: ChatGPT were actually intelligent, versus just computing the probability 698 00:47:04,480 --> 00:47:07,280 Speaker 1: of the next word based on a slurry of everything 699 00:47:07,360 --> 00:47:10,440 Speaker 1: humans have ever written? Well, for sure, it is just 700 00:47:10,520 --> 00:47:13,839 Speaker 1: computing the probability of the next word. But the surprise 701 00:47:13,960 --> 00:47:16,680 Speaker 1: has been all the stuff that we didn't expect it 702 00:47:16,719 --> 00:47:21,360 Speaker 1: to be able to do with this straightforward statistical prediction model. 703 00:47:21,760 --> 00:47:25,839 Speaker 1: It does more than it was programmed or expected to do. 704 00:47:26,960 --> 00:47:29,600 Speaker 1: So that has left the whole field with a question 705 00:47:29,680 --> 00:47:33,799 Speaker 1: of whether simply having enough data gives us something that 706 00:47:33,960 --> 00:47:40,000 Speaker 1: is actually intelligent or whether it just seems intelligent. So 707 00:47:40,160 --> 00:47:43,200 Speaker 1: in that previous episode, I proposed that the tests we 708 00:47:43,280 --> 00:47:48,440 Speaker 1: currently have, like the Turing test, are outdated as a 709 00:47:48,520 --> 00:47:52,680 Speaker 1: test for meaningful intelligence.
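To see what "computing the probability of the next word" means at toy scale, here is a bigram counter. Real systems condition on vastly longer contexts with a neural network rather than a lookup table, but the contract is the same: context in, a probability for each candidate next word out. The corpus here is invented:

```python
from collections import Counter, defaultdict

# A toy next-word model: count which word follows which, then normalize the
# counts into probabilities. The corpus is made up for illustration.
def train_bigrams(text):
    counts = defaultdict(Counter)
    words = text.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def next_word_probs(counts, word):
    c = counts[word]
    total = sum(c.values())
    return {w: n / total for w, n in c.items()} if total else {}

corpus = "the squirrel buried the acorn and the squirrel forgot the acorn"
model = train_bigrams(corpus)
print(next_word_probs(model, "the"))   # -> {'squirrel': 0.5, 'acorn': 0.5}
```

Everything surprising about the modern systems comes from scaling that contract up by many orders of magnitude, which is why the old benchmarks were due for rethinking.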
The Turing test can 710 00:47:52,719 --> 00:47:55,640 Speaker 1: already be passed, and it still doesn't really tell us 711 00:47:55,680 --> 00:47:57,920 Speaker 1: what we need to know. And it's the same with 712 00:47:58,000 --> 00:48:00,680 Speaker 1: other tests that have been proposed in the past, like 713 00:48:01,120 --> 00:48:05,280 Speaker 1: the Lovelace test, which asks whether computers could ever be creative, 714 00:48:05,719 --> 00:48:08,359 Speaker 1: and all it takes is a few seconds with Midjourney 715 00:48:08,440 --> 00:48:12,360 Speaker 1: or ChatGPT to see that that landmark is 716 00:48:12,440 --> 00:48:15,759 Speaker 1: also in the rear-view mirror. So what I've proposed 717 00:48:15,840 --> 00:48:20,000 Speaker 1: is not about moving the goalpost. It's about fundamentally asking: 718 00:48:20,080 --> 00:48:24,680 Speaker 1: what is the right test for a meaningful sort of intelligence? 719 00:48:25,520 --> 00:48:28,000 Speaker 1: So what I suggested is that we will know if 720 00:48:28,040 --> 00:48:32,200 Speaker 1: a system has some real intelligence once it starts doing 721 00:48:32,760 --> 00:48:37,360 Speaker 1: meaningful scientific discovery and puts all the scientists out of business, 722 00:48:37,920 --> 00:48:42,840 Speaker 1: because scientific discovery is something that requires a meaningful level 723 00:48:42,840 --> 00:48:45,440 Speaker 1: of intelligence. And I'm not talking about the type of 724 00:48:45,480 --> 00:48:49,240 Speaker 1: science that's just piecing together things in the literature, although 725 00:48:49,280 --> 00:48:52,160 Speaker 1: that's of course very useful. I'm talking about the type 726 00:48:52,200 --> 00:48:56,000 Speaker 1: of science where you think of something new that doesn't 727 00:48:56,040 --> 00:49:00,680 Speaker 1: already exist, and you simulate that, and you evaluate whether 728 00:49:00,840 --> 00:49:04,799 Speaker 1: this crazy model you just came up with would give 729 00:49:04,840 --> 00:49:08,520 Speaker 1: a good understanding of the facts on the ground. So, 730 00:49:08,600 --> 00:49:12,160 Speaker 1: for example, when Alfred Wegener proposed that the continental plates 731 00:49:12,200 --> 00:49:17,359 Speaker 1: were drifting, that gave a totally different explanation for all 732 00:49:17,480 --> 00:49:20,600 Speaker 1: kinds of data, including the fact that South America and 733 00:49:20,719 --> 00:49:23,520 Speaker 1: Africa seemed to plug into each other like puzzle pieces, 734 00:49:23,840 --> 00:49:26,480 Speaker 1: and it gave an explanation for mountain ranges and so on. 735 00:49:27,000 --> 00:49:30,120 Speaker 1: And he simulated what would be the case, what we 736 00:49:30,120 --> 00:49:33,359 Speaker 1: would expect to see if this were true, and he 737 00:49:33,400 --> 00:49:37,680 Speaker 1: realized it made a good match to the data around him. 738 00:49:37,960 --> 00:49:40,880 Speaker 1: Or when Einstein imagined what it would be like to 739 00:49:41,120 --> 00:49:43,919 Speaker 1: ride on a beam of light, and this is how 740 00:49:43,920 --> 00:49:48,120 Speaker 1: he derived the theory of special relativity. Or when Charles 741 00:49:48,200 --> 00:49:51,000 Speaker 1: Darwin came up with the theory of evolution by natural 742 00:49:51,040 --> 00:49:55,200 Speaker 1: selection by thinking about all the animals that weren't here.
743 00:49:56,000 --> 00:49:58,719 Speaker 1: I suggest that these are the kind of things that 744 00:49:58,880 --> 00:50:02,840 Speaker 1: humans can do that represent real intelligence, the kind of 745 00:50:02,840 --> 00:50:07,080 Speaker 1: intelligence that has made our species more successful than any 746 00:50:07,120 --> 00:50:11,000 Speaker 1: other on the planet. So is modern AI intelligent in 747 00:50:11,040 --> 00:50:14,840 Speaker 1: this way? As of this recording, there's no simple answer 748 00:50:14,840 --> 00:50:18,320 Speaker 1: to this. There are arguments on all sides, that generative 749 00:50:18,320 --> 00:50:20,719 Speaker 1: AI has actually reached some sort of intelligence, or that 750 00:50:20,760 --> 00:50:23,279 Speaker 1: it hasn't. But it's not easy at the moment to 751 00:50:23,320 --> 00:50:27,440 Speaker 1: come to a clear conclusion on this. And although AI 752 00:50:27,640 --> 00:50:30,040 Speaker 1: intelligence might not be quite the same thing as what 753 00:50:30,120 --> 00:50:32,520 Speaker 1: we have, I suspect it's going to matter a lot 754 00:50:32,560 --> 00:50:37,520 Speaker 1: for us to better understand what human intelligence is made of, 755 00:50:38,160 --> 00:50:41,920 Speaker 1: so we can understand when AI grows up to be 756 00:50:42,040 --> 00:50:45,680 Speaker 1: the same or better, and why. And I suspect that 757 00:50:45,800 --> 00:50:49,120 Speaker 1: the simple existence of AI is going to help us 758 00:50:49,480 --> 00:50:52,640 Speaker 1: think through these problems, because we're going to try things 759 00:50:52,680 --> 00:50:56,719 Speaker 1: and get over our naive assumptions about what intelligence might be. 760 00:50:57,080 --> 00:51:01,080 Speaker 1: For example, from at least the nineteen fifties onward, the 761 00:51:01,239 --> 00:51:05,279 Speaker 1: old way of trying to build artificial intelligence was to 762 00:51:05,320 --> 00:51:09,760 Speaker 1: give a computer a giant list of facts. You explain 763 00:51:09,840 --> 00:51:12,560 Speaker 1: that birds have wings and beaks and feathers, and they fly, 764 00:51:13,000 --> 00:51:14,640 Speaker 1: and then maybe you have to teach it that there 765 00:51:14,680 --> 00:51:18,080 Speaker 1: are some exceptions to the rule, like ostriches or penguins, 766 00:51:18,320 --> 00:51:21,399 Speaker 1: and you keep giving it these rules and structure. And 767 00:51:21,440 --> 00:51:26,120 Speaker 1: that approach never worked, and the field of artificial intelligence 768 00:51:26,239 --> 00:51:29,960 Speaker 1: descended into its winter. So what we learned from that 769 00:51:30,880 --> 00:51:34,880 Speaker 1: is that intelligence is probably not a series of propositions, 770 00:51:34,920 --> 00:51:38,880 Speaker 1: but rather it's stored in a very different way, for example, 771 00:51:38,920 --> 00:51:44,360 Speaker 1: as a giant cascade of information in vast networks. And so 772 00:51:45,239 --> 00:51:49,720 Speaker 1: studying intelligence that is artificial, that's what's going to sharpen 773 00:51:49,760 --> 00:51:55,480 Speaker 1: our focus on intelligence that is evolved. So let's wrap up.
774 00:51:55,920 --> 00:51:58,400 Speaker 1: As you know, if you've been a listener to this podcast, 775 00:51:58,520 --> 00:52:01,560 Speaker 1: I'm obsessed with the way that we all see the 776 00:52:01,600 --> 00:52:05,120 Speaker 1: world from different points of view, not least because we 777 00:52:05,160 --> 00:52:08,080 Speaker 1: have subtly different genetic details in our brains from person 778 00:52:08,120 --> 00:52:11,560 Speaker 1: to person, as well as different life experiences which have 779 00:52:11,640 --> 00:52:14,880 Speaker 1: wired up our circuitry. And as a result, we also 780 00:52:14,960 --> 00:52:19,280 Speaker 1: have different intelligences that allow us to see the world 781 00:52:19,719 --> 00:52:23,839 Speaker 1: differently, and sometimes with more or less clarity. And what 782 00:52:23,880 --> 00:52:27,200 Speaker 1: we've done today is looked at the complexity of what 783 00:52:27,320 --> 00:52:31,200 Speaker 1: seems like a simple question: what is intelligence? We know 784 00:52:31,320 --> 00:52:34,680 Speaker 1: that there are differences between species and even within members 785 00:52:34,680 --> 00:52:37,400 Speaker 1: of any species, but we don't always know how to 786 00:52:37,480 --> 00:52:41,239 Speaker 1: capture that. And the fact that we can address the 787 00:52:41,360 --> 00:52:43,880 Speaker 1: question but after one hundred years still not come to 788 00:52:43,920 --> 00:52:49,160 Speaker 1: a clear answer probably indicates that the word intelligence simply 789 00:52:49,520 --> 00:52:53,600 Speaker 1: holds too many different things, too many different skills, whether that's 790 00:52:53,640 --> 00:52:56,920 Speaker 1: the squelching of distractors or the number of things you 791 00:52:56,960 --> 00:52:59,520 Speaker 1: can hold in memory at any given moment, or the 792 00:53:00,239 --> 00:53:05,400 Speaker 1: formatting of information or making associations, or the ability to 793 00:53:05,440 --> 00:53:09,200 Speaker 1: simulate possible futures. It seems to me that one of 794 00:53:09,239 --> 00:53:13,279 Speaker 1: the most meaningful tests for the intelligence of our species 795 00:53:13,600 --> 00:53:18,400 Speaker 1: will be this: will we be able to define and 796 00:53:18,680 --> 00:53:25,080 Speaker 1: understand intelligence before we create it, and perhaps get taken 797 00:53:25,120 --> 00:53:28,640 Speaker 1: over by it? That will be the true test of 798 00:53:28,719 --> 00:53:36,080 Speaker 1: the intelligence of our species. Go to eagleman dot com 799 00:53:36,120 --> 00:53:41,200 Speaker 1: slash podcast for more information and to find further reading. Send 800 00:53:41,239 --> 00:53:45,000 Speaker 1: me an email at podcast at eagleman dot com with 801 00:53:45,160 --> 00:53:48,279 Speaker 1: questions or discussion, and I'll be making more episodes in 802 00:53:48,320 --> 00:53:53,560 Speaker 1: which I address those. Until next time, I'm David Eagleman, 803 00:53:53,840 --> 00:54:03,680 Speaker 1: and this is Inner Cosmos.