Speaker 1: Welcome to TechStuff, a production of iHeartRadio's How Stuff Works.

Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with How Stuff Works and iHeartRadio, and I love all things tech. And there's a topic I have touched on on several occasions in past episodes, but I really wanted to dig down into it today, because it's one of those that's fascinating and is an underpinning for tons of speculative fiction and horror stories. And since we're now in October, I thought, hey, this would be kind of thematically linked to Halloween, tech style. It turns out that's pretty hard to do. Past Halloween technology episodes have already covered stuff like haunted house technology. So today we're going to talk about consciousness, and whether or not it might be possible that machines could one day achieve consciousness.

Now, I could start this off by talking about the Turing test, which many people have used as the launch point for machine intelligence and machine consciousness debates. The way we understand that test today, which, by the way, is slightly different from the test that Alan Turing first proposed, is that you have a human interviewer who, through a computer interface, asks questions of a subject. The subject might be another human, or it might be a computer program posing as a human, and the interviewer just sees text on a screen. So if the interviewer is unable to pass a certain threshold of being able to tell the difference, of being able to determine whether it was a machine or a person, then the program or machine that's being tested is said to have passed the Turing test. It doesn't mean the program or machine is conscious or even intelligent, but rather that, to outward appearances, it seems to be intelligent and conscious.
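To make that setup concrete, here's a minimal sketch in Python of the judging loop as just described. Everything here is a hypothetical stand-in, not anyone's real implementation: the judge simply guesses, the two responders happen to sound identical, and the threshold value is invented for illustration.

```python
import random

# Hypothetical responders: the interviewer only ever sees the text they return.
def human_responder(question: str) -> str:
    return "I'd say it depends on the context, honestly."

def machine_responder(question: str) -> str:
    return "I'd say it depends on the context, honestly."

def judge(answer: str) -> str:
    # A real interviewer reasons over the conversation; this stand-in guesses.
    return random.choice(["human", "machine"])

def imitation_game(rounds: int = 100, threshold: float = 0.7) -> bool:
    """The machine 'passes' if the judge can't beat the accuracy threshold."""
    correct = 0
    for _ in range(rounds):
        subject_is_machine = random.random() < 0.5
        responder = machine_responder if subject_is_machine else human_responder
        verdict = judge(responder("What do you think about October?"))
        if (verdict == "machine") == subject_is_machine:
            correct += 1
    # Accuracy stuck near chance means the interviewer can't tell the difference.
    return correct / rounds < threshold

print("Machine passes:", imitation_game())
```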
See, we humans can't be absolutely sure that other humans are conscious and intelligent. We assume that they are, because each of us knows of our own consciousness and our own intelligence. We have a direct personal experience with that, and other people seem to display behaviors that indicate they too possess those traits, and they too have a personal experience. But we cannot be those other people, and so we have to grant them the consideration that they too are conscious and intelligent. And I agree, that is very big of us.

This is actually called the problem of other minds in the field of philosophy, and the problem is this: it is impossible for any one of us to step outside of ourselves and into any other person's consciousness. We cannot feel what other people are feeling or experience their thoughts firsthand. We are aware of our own abilities, but we are only aware of the appearance that other people share those abilities. So assuming that other people also experience consciousness, rather than imitating it really, really well, is a step we all have to take.

Turing's point is that if we do grant that consideration to other people, why would we not do it for machines as well? I mean, the machine appears to possess the same qualities as a human. This is a hypothetical machine, so we cannot experience what that machine is going through, just as we can't experience what another person is going through, at least not at the intrinsic, personal level. So why would we not grant the machine the same consideration that we would grant to people? Granted, Turing was being a little cheeky. But while I just gave kind of a super fast, high-level description of the Turing test, that's not actually where I want to start. I want to begin with the concept of consciousness itself. Now, the reason I want to do this isn't just to make a longer podcast.
It's because I think one of the most fundamental problems with the discussion about AI intelligence, self-awareness, and consciousness is that there tends to be a pretty large disconnect between the biologists and the doctors who specialize in neuroscience, particularly cognitive neuroscience, and who do have some understanding of the nature of consciousness in people, and the computer scientists, who have a deep understanding of how computers process information. And while we frequently compare brains to computers, that comparison is not one to one. It is largely a comparison of convenience, and in some cases you could argue it's not terribly useful; it might actually be counterproductive. And so I think at least some of the speculation about machine consciousness is based on a lack of understanding of how complicated and mysterious this topic is in the first place. And this ends up being really tricky. Consciousness isn't an easily defined quality or quantity. Some people like to say we don't so much define consciousness by what it is, but rather by what it isn't, and this will also kind of bring us into the realm of philosophy.

Now, I'm gonna be honest with you, guys: the realm of philosophy is not one I'm terribly comfortable in. I'm pretty pragmatic, and philosophy deals with a lot of stuff that is, at least for now, unknowable. Philosophy sometimes asks questions that we do not and cannot have the answer to, and in many cases we may never be able to answer those questions. And the pragmatist in me says, well, why bother asking the question if you can never get the answer? Let's just focus on the stuff we actually can answer. Now, I realize this is a limitation on my part. I'm owning that. I'm not out to upset the philosophical apple cart; I'm just of a different philosophical bent. And I realize that just because we can't answer some questions right now, that doesn't necessarily mean they will all go unanswered for all time.
We might glean a way of answering at least some of them, though I suspect a few will be forever unanswered. If we go with the basic dictionary definition of consciousness, it's, quote, "the state of being awake and aware of one's surroundings," end quote. But what this doesn't tell us is what's going on that lets us do that. It also doesn't talk about being aware of oneself, which we largely consider to be part of consciousness. It's not just being aware of your surroundings, but being aware that you exist within those surroundings, of your relationship to your surroundings, and of things that are going on within you yourself: your feelings and your thoughts. The fact that you can process all of this, that you can reflect upon yourself, we tend to group that into consciousness as well.

So how is it that we can feel things and be aware of those feelings? How is it that we can have intentions and be aware of our intentions? We are more complex than beings that simply react to sensory input. We are more than beings that respond to stuff like hunger, fear, or the desire to procreate. We have motivations, sometimes really complex motivations, and we can reflect on those. We can examine them, we can question them, we can even change them. So how do we do this?

Now, we know this is special because some of the things we can do are shared among very few species on Earth. For example, we humans can recognize our own reflections in a mirror, starting at around age two or so. We can see the mirror image and recognize that the mirror image is us. Now, there are only eight species that can do this, that we know about anyway. Those species are the great apes, so you've got humans, gorillas, orangutans, bonobos, and chimpanzees, plus the magpie, the dolphin, and the elephant, and that's it. Oh, and the magpies are birds, right? That's all of them. Recognizing one's own form in a mirror shows a sense of self-awareness, literally an awareness of one's self.
Now, there are a lot of great resources online and offline that go into the topic of consciousness. Heck, there are numerous college-level courses and graduate-level courses dedicated to this topic, so I'm not going to be able to go into all the different hypotheses, arguments, counterarguments, et cetera in this episode, but I can cover some basics. Also, I highly recommend you check out Vsauce's video on YouTube titled "What Is Consciousness?", because it's really good. And no, I don't know Michael. I have no connection to him. I've never met him. This is just an honest recommendation from me, and I have no connection whatsoever to that video series. The video includes a link to what Vsauce dubs a leanback, which is a playlist of related videos on the subject at hand, in this case consciousness. Those are also really fascinating. But I do want to point out that, at least at the time of this recording, a couple of the videos in that playlist have since been delisted from YouTube for whatever reason, so there are a couple of blank spots in there.

But what those videos show, and what countless papers and courses and presentations also show, is that the brain is so incredibly complex and nuanced that we don't know what we don't know. We do know that there are some pretty funky things going on in the gray matter up in our noggins, and we also know that many of the explanations given to describe consciousness rely upon some assumptions that we don't have any substantial evidence for. You can't really assert something to be true if it's based on a premise that you also don't know to be true. That's not how good science works. This is also why I reject the arguments around stuff like ghost hunting equipment. The use of that equipment is predicated on the argument that ghosts exist and that they have certain influences on their environment. But we haven't proven that ghosts exist in the first place, let alone that they can affect the environment.
So selling a meter that supposedly detects a ghostly presence from electromagnetic fluctuations makes no logical sense. For us to know that to be true, we would already have to have established, one, that ghosts are real, and two, that they have these electromagnetic fluctuation effects, and we haven't done that. It's like working science in reverse. That's not how it works.

Anyway, there are a lot of arguments about consciousness that suggest perhaps there's some ineffable force that informs it. You can call it the spirit or the soul or whatever. So that argument suggests that this thing we've never proven to exist is what gives rise to consciousness, and that's a problem. We can't really state that. I mean, you can't say the reason this thing exists is that this other thing, which we've never proven to exist, makes it exist. You've just made it harder to even prove anything. And we have evidence that also shows that that whole idea doesn't hold water. The evidence comes in the form of brain disorders, brain diseases, and brain damage. We have seen that disease and damage to the brain affects consciousness, which suggests that consciousness manifests from the actual form and function of our brains, not from any mysterious force. Our ability to perceive, to process information, to have an understanding of the self, to have an accurate reflection of what's going on around us within our own conceptual reality, all of that appears to be predicated primarily upon the brain.

Now, originally I was planning to give a rundown on some of the prevailing theories about consciousness. In other words, I wanted to summarize the various schools of thought about how consciousness actually arises. But as I dove down into the research, it became apparent really quickly that such a discussion would require so much groundwork and, more importantly, a much deeper understanding on my part than would be practical for this podcast.
So instead of talking about the higher-order theory of consciousness versus the global workspace theory versus integrated information theory, I'll take a step back and say there's a lot of ongoing debate about the subject, and no one has conclusively proven that any particular theory or argument is most likely true. Each theory has its strengths and its weaknesses. And complicating matters further is that we haven't refined our language around these concepts enough to differentiate various ideas. That means you can't talk about an organism being conscious of something and have that degree of consciousness be somehow inherently specific. It's not. That's the issue.

So, for example, I could say a rat is conscious of a rat terrier, a type of dog that hunts down rats, and as a result of this consciousness of the rat terrier, the rat attempts to remain hidden so as not to be killed. But does that mean the rat merely perceives the rat terrier and thus is trying to stay out of its way, and that's as far as the consciousness goes? Or does it mean that the rat actually has a deeper, more meaningful awareness of the rat terrier? The language isn't much help here, and moreover, there's debate about what degrees of consciousness there even are.

Also, while I've been harping on consciousness, that's not the only concept we have to consider. Another is intelligence, which is distinct from consciousness, though there are some similarities. Like consciousness, intelligence is predicated upon brain functions. Again, a long history of investigating brain disorders and brain damage indicates this, as damage can affect not just consciousness but also intelligence. So what is intelligence? Well, get ready for this: like consciousness, there's no single agreed-upon definition or theory of intelligence. In general, we use the word intelligence to describe the ability to think, to learn, to absorb knowledge, and to make use of it to develop skills.
Intelligence is what allowed humans to learn how to make basic tools, to gain an understanding of how to cultivate plants and develop agriculture, to develop architecture, to understand mathematical principles, and all sorts of stuff. So in humans, we tend to lump consciousness and intelligence together. We tend to think in terms of being intelligent and being self-aware, but the two need not necessarily go hand in hand. There are many people who believe that it could be possible to construct an artificial intelligence or an artificial consciousness independently of one another. When we come back, I'll explain more, but first let's take a quick break.

So, in a very general sense, the group of hypotheses that fall under the integrated information theory umbrella state that consciousness emerges through linking elements in our brains, these being neurons processing large amounts of information, and that it's the scale of this endeavor that then leads to consciousness. In other words, if you have enough processors working on enough information, and they're all interconnected with each other, and it's very complicated, bang, you get consciousness.

Now, it is clear our brains process a lot of information. If you do a search in textbooks or online, you'll frequently encounter the stat that our brains have around one hundred billion neurons in them, and ten times as many glial cells. Neurons are like the processors in a computer system, and glial cells would be the support systems and insulators for those processors. Anyway, those numbers have since come under some dispute. An associate professor at Vanderbilt University named Suzana Herculano-Houzel explained that the old way of estimating how many neurons the brain had appeared to be based on taking slices of the brain, estimating the number of neurons in that slice, and then kind of extrapolating that number to apply across the brain in general. But that ignores stuff like the density of cells and the distribution of the cells across the brain.
So what she did, and this also falls into the category of Halloween horror stories, is she took a brain and she freaking dissolved it. She could then get a count of the neuron nuclei that were in the soupy mix. By her accounting, the brain has closer to eighty-six billion neurons, and just about as many glial cells. Still a lot of cells, mind you, but you gotta admit it's a bit of a blow to lose fourteen billion neurons overnight.
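Just to put numbers on that revision, here's a quick back-of-the-envelope sketch in Python. The figures are the rounded estimates quoted above, not precise measurements.

```python
# Rounded estimates quoted above, not precise measurements.
old_neurons = 100e9          # the classic textbook figure
old_glia = 10 * old_neurons  # the old "ten times as many glial cells" claim

new_neurons = 86e9           # Herculano-Houzel's count from the dissolved brain
new_glia = new_neurons       # roughly a one-to-one ratio of glia to neurons

print(f"Neurons 'lost' overnight: {old_neurons - new_neurons:,.0f}")  # 14 billion
print(f"Old glia estimate: {old_glia:.0e}, revised estimate: {new_glia:.0e}")
```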
Still, we're talking about billions of neurons that interconnect through an incredibly complex system in our brains, with different regions of the brain handling different things. And so, yeah, we're processing a lot of information all the time, and we do happen to be conscious. So could it be possible that with a sufficiently powerful computer system, perhaps made up of hundreds or thousands or tens of thousands of individual computers, each with hundreds of processors, you could end up with an emergent consciousness? Or, as some people have proposed, could the Internet itself become conscious, due to the fact that it is an enormous system of interconnected nodes that's pushing around incredible amounts of information? Well, maybe. Maybe it's possible. But here's the kicker: this theory doesn't actually explain the mechanism by which the consciousness emerges. See, it's one thing to process information; it's another thing to be aware of that experience. So when I perceive a color, I'm not just perceiving a color. I'm aware that I'm experiencing that color. Or, to put it another way, I can relate something to how it makes me feel, or to some other subjective experience that's personal to me. So a machine might objectively be able to return data about stuff like, what is the color of a piece of paper? It analyzes the light that's being reflected off that piece of paper; it compares that light to a spectrum of colors. But that's still not the same thing as having the subjective experience of perceiving the color.

And there may well be some connection between the complexity of the interconnected neurons in our brains and the amount of information that we're processing, and our sense of consciousness. But the theory doesn't actually explain what that connection is. It's more like saying, hey, maybe this thing we have, this consciousness experience, is also linked to this other thing, without actually making the link between the two. It appears to be correlative, but not necessarily causal.

To relate that to our personal experience, imagine that you've just poofed into existence. You have no prior knowledge of the world, or the physics of that world, or basic stuff like that, so you're drawing conclusions about the world around you based solely on your observations as you wander around and do stuff. And at one point you see an interesting-looking rock on the path, so you bend over and you pick up the rock, and when you do, it starts to rain. And you think, well, maybe I caused it to rain because I picked up this rock. And maybe it happens a few times where you pick up a rock and it starts to rain, which seems to support your thesis. But does that mean you're actually causing the effects that you are observing? If so, what is it about picking up the rock that's making it rain? Now, even in this absurd case that I'm making, you could argue that if there's never an instance in which picking up the rock wasn't immediately followed by rain, there's a lot of evidence to suggest the two are linked. But you still can't explain why they are linked, why one caused the other. And that's a problem, because without that piece, you're never really totally sure that you're on the right track. That's kind of where we are with consciousness. We've got a lot of ideas about what makes it happen, but those ideas are mostly missing key pieces that explain why it's happening.
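To see how easily correlation can masquerade as causation, here's a little toy simulation in Python of the rock-and-rain story above. The probabilities and the random seed are made up purely for illustration; crucially, the rain is generated completely independently of the rock.

```python
import random

random.seed(2019)

RAIN_PROB = 0.5  # rain falls at random; the rock has no influence at all

streak = 0
for day in range(5):
    picked_up_rock = True                 # our newborn observer grabs a rock
    rained = random.random() < RAIN_PROB  # rain is decided independently
    if picked_up_rock and rained:
        streak += 1
    else:
        break

print(f"Rock-picking was followed by rain {streak} days in a row")
# Even several coincidences in a row establish correlation at best;
# nothing here explains WHY rain would follow the rock, because it doesn't.
```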
Now, it's possible that we cannot reduce consciousness any further than we already have, and maybe that means we never really get a handle on what makes it happen. It's also possible that we could facilitate the emergence of consciousness in machines without knowing how we did it. Essentially, that would be like stumbling upon the phenomenon by luck. We just happened to create the conditions necessary to allow some form of artificial consciousness to emerge. Now, I think this might be possible, but it strikes me as a long shot. I think of it like being locked in a dark warehouse filled with every mechanical part you can imagine, and you start trying to put things together in complete darkness, and then the lights come on and you see that you have created a perfect replica of an F-15 fighter jet. Is that possible? Well, I mean, yeah, I guess, but it seems overwhelmingly unlikely. But again, this is based off ignorance. It's based off the fact that it hasn't happened yet, so I could be totally wrong here.

Now, on the flip side of that, programmers, engineers, and scientists have created computer systems that can process information in intricate ways to come up with solutions to problems that seem, at least at first glance, to be similar to how we humans think. We even have names for systems that reflect biological systems, like artificial neural networks. Now, the name might make it sound like it's a robot brain, but it's not quite that. Instead, it's a model for computing in which components in the system act kind of like neurons. They're interconnected, and each one does a specific process. The nodes in the computer system connect to other nodes. So you feed the system input, whatever it is you want to process, and then the nodes that accept that input perform some form of operation on it and then send the resulting data, the answer after they've processed this information, on to other nodes in the network.
It's a nonlinear approach to computing, and by adjusting the processes each node performs, which is also known as adjusting the weights of the nodes, you can tweak the outcomes. Now, this is incredibly useful. If you already know the outcome you want, you can tweak the system so that it learns, or is trained, to recognize something specific. For example, you could train a computer system to recognize faces. So you would feed it images. Some of the images would have faces in them, some would not have faces in them. Some might have something that could be a face, but it's hard to tell; maybe it's a shape in a picture that looks kind of like a face, but it's not actually someone's face. Anyway, you train the computer model to try and separate the faces from the non-faces, and it might take many iterations to get the model trained up using your starting data, your training data. Now, once you do have your computer model trained up, once you've tweaked all the nodes so that it is reliably producing results that say yes, this is a face, or no, this isn't, you can then feed that same computer model brand-new images that it has never seen before, and it can perform the same functions. You have taught the computer model how to do something. But this isn't spontaneous intelligence, and it's not connected to consciousness. You couldn't really call it thinking so much as being trained to recognize specific patterns pretty well.
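To give a feel for what "adjusting the weights" means in practice, here's a minimal sketch in Python of a single artificial neuron, the smallest possible network, trained by gradient descent. The data, the hidden labeling rule standing in for "face-ness," and the learning rate are all invented for illustration; a real face detector would be vastly larger.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up training data: each row is a 4-number "image summary,"
# and the label says whether it contains a face (1) or not (0).
X = rng.normal(size=(200, 4))
hidden_rule = np.array([1.5, -2.0, 0.5, 1.0])
y = (X @ hidden_rule > 0).astype(float)  # stand-in for human-provided labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One weighted "node": four weights plus a bias term, initially random.
w = rng.normal(size=4)
b = 0.0
lr = 0.5

for epoch in range(500):              # many iterations over the training data
    p = sigmoid(X @ w + b)            # current guesses, between 0 and 1
    grad_w = X.T @ (p - y) / len(y)   # how far off each weight is
    grad_b = np.mean(p - y)
    w -= lr * grad_w                  # "adjusting the weights"
    b -= lr * grad_b

# Brand-new images the model has never seen before:
X_new = rng.normal(size=(5, 4))
print("Face?", sigmoid(X_new @ w + b) > 0.5)  # the trained rule, applied
```

The point the episode makes holds here too: after training, the weights encode a pattern detector, but there's nothing in those five numbers you could call thinking.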
399 00:25:28,200 --> 00:25:31,560 Speaker 1: All the data the computer could access was self contained 400 00:25:31,640 --> 00:25:36,240 Speaker 1: in its undeniably voluminous storage, and the computer had to 401 00:25:36,280 --> 00:25:39,720 Speaker 1: parse what the clues in jeopardy were actually looking for 402 00:25:40,080 --> 00:25:42,600 Speaker 1: then come up with an appropriate response. And to make 403 00:25:42,680 --> 00:25:47,000 Speaker 1: matters more tricky, the computer wasn't returning a guaranteed right answer. 404 00:25:47,359 --> 00:25:49,920 Speaker 1: The computer had to come to a judgment on how 405 00:25:50,000 --> 00:25:52,920 Speaker 1: confident it was that the answer it had arrived at 406 00:25:53,240 --> 00:25:56,920 Speaker 1: was the correct one. If the confidence met a certain threshold, 407 00:25:57,440 --> 00:26:01,000 Speaker 1: then Watson would submit an answer. If it did not 408 00:26:01,200 --> 00:26:05,480 Speaker 1: meet that threshold, Watson would remain silent. It's a remarkable achievement, 409 00:26:05,800 --> 00:26:08,240 Speaker 1: and it has lots of potential applications, many of which 410 00:26:08,240 --> 00:26:12,199 Speaker 1: are actually in action today. But it's still not quite 411 00:26:12,320 --> 00:26:15,280 Speaker 1: at the level of a machine thinking like a human, 412 00:26:15,359 --> 00:26:17,600 Speaker 1: and I don't think anyone at IBM would suggest that 413 00:26:17,680 --> 00:26:21,680 Speaker 1: it possesses any sense of consciousness. When we come back, 414 00:26:21,880 --> 00:26:25,639 Speaker 1: i'll talk about a famous thought experiment that really starts 415 00:26:25,680 --> 00:26:29,560 Speaker 1: to examine whether or not machines could ever attain intelligence 416 00:26:29,600 --> 00:26:40,080 Speaker 1: and consciousness. But first let's take another quick break. And 417 00:26:40,320 --> 00:26:45,080 Speaker 1: now this brings me to a famous thought experiment proposed 418 00:26:45,200 --> 00:26:49,000 Speaker 1: by John Searle, a philosopher who questioned whether we could 419 00:26:49,040 --> 00:26:52,600 Speaker 1: say a machine, even one so proficient that could deliver 420 00:26:52,720 --> 00:26:57,840 Speaker 1: reliable answers on demand, would ever truly be intelligent at 421 00:26:57,880 --> 00:27:01,160 Speaker 1: least on a level similar to what we human identify 422 00:27:01,200 --> 00:27:06,120 Speaker 1: as being intelligent. It's called the Chinese room argument, which 423 00:27:06,200 --> 00:27:11,600 Speaker 1: Searle included in his article titled Minds, Brains, and Programs 424 00:27:11,640 --> 00:27:15,800 Speaker 1: for the Behavioral and Brain Sciences Journal. Here's the premise 425 00:27:16,240 --> 00:27:19,720 Speaker 1: of the thought experiment. Imagine that you are in a 426 00:27:19,800 --> 00:27:23,280 Speaker 1: simple room. The room has a table and a chair. 427 00:27:23,760 --> 00:27:27,720 Speaker 1: There's a ream of blank paper, there's a brush, there's 428 00:27:27,760 --> 00:27:31,320 Speaker 1: some ink, and there's also a large book within the 429 00:27:31,400 --> 00:27:35,040 Speaker 1: room that contains pairs of Chinese symbols. In the book. 430 00:27:35,480 --> 00:27:38,040 Speaker 1: Oh and we also have to imagine that you don't 431 00:27:38,119 --> 00:27:42,560 Speaker 1: understand or recognize these Chinese symbols. They mean nothing to you. 
It's a remarkable achievement, and it has lots of potential applications, many of which are actually in action today. But it's still not quite at the level of a machine thinking like a human, and I don't think anyone at IBM would suggest that it possesses any sense of consciousness. When we come back, I'll talk about a famous thought experiment that really starts to examine whether or not machines could ever attain intelligence and consciousness. But first, let's take another quick break.

And now this brings me to a famous thought experiment proposed by John Searle, a philosopher who questioned whether we could say a machine, even one so proficient that it could deliver reliable answers on demand, would ever truly be intelligent, at least on a level similar to what we humans identify as being intelligent. It's called the Chinese room argument, which Searle included in his article titled "Minds, Brains, and Programs" for the journal Behavioral and Brain Sciences. Here's the premise of the thought experiment. Imagine that you are in a simple room. The room has a table and a chair. There's a ream of blank paper, there's a brush, there's some ink, and there's also a large book within the room that contains pairs of Chinese symbols. Oh, and we also have to imagine that you don't understand or recognize these Chinese symbols. They mean nothing to you.

There's also a door to the room, and the door has a mail slot, and every now and again someone slides a piece of paper through the slot. The piece of paper has one of those Chinese symbols printed on it. And it's your job to go through the book and find the matching symbol, plus the corresponding symbol in the pair, because remember, I said the symbols were paired together. You then take a blank sheet of paper, you draw the corresponding symbol from that pair onto the sheet of paper, and finally you slip that piece of paper back through the mail slot, presumably to the person who gave you the first piece of paper in the first place.

So to an outside observer, let's say it's actually the person who's slipping the piece of paper to you, it would seem that whoever is inside the door actually understands Chinese symbols. They can recognize the significance of whatever symbol was sent in through the mail slot, match it to whatever the corresponding data is for that particular symbol, and then return that to the user. So to the outside observer, it appears as though whatever is inside the room comprehends what it is doing. But, argues Searle, that's only an illusion, because the person inside the room doesn't know what any of those symbols actually means. So, if this is you, you have no context. You don't know what any individual symbol stands for, nor do you understand why any symbol would be paired with any other symbol. You don't know the reasoning behind that. All you have is a book of rules, but the rules only state what your response should be given a specific input. The rules don't tell you why, either on a granular level of what the symbols actually mean, or on a larger scale when it comes to what you're actually accomplishing in this endeavor.
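In code, the person's entire job reduces to a lookup table. Here's a minimal sketch in Python, with arbitrary placeholder characters standing in for the symbols in Searle's book, which makes his point vivid: nothing in the program records what anything means.

```python
# The entire "skill" of the person in the room, as a lookup table.
# These character pairings are arbitrary placeholders, not real vocabulary;
# crucially, nothing anywhere in this program stores what any symbol MEANS.
RULE_BOOK = {
    "符甲": "符乙",
    "符丙": "符丁",
    "符戊": "符己",
}

def person_in_room(slip_of_paper: str) -> str:
    """Match the incoming symbol and draw its paired response. No semantics."""
    return RULE_BOOK[slip_of_paper]

print(person_in_room("符丙"))  # looks perfectly fluent from outside the door
```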
All you are doing is performing a physical action over and over based on a set of rules you don't understand. And Searle then uses this argument to say that, essentially, we have to think the same way about machines. The machines process information based on the input they receive and the program that they are following. That's it. They don't have awareness or understanding of what the information is.

Searle was taking aim at a particular concept in AI, often dubbed strong AI or general AI. It's a sort of general artificial intelligence, so it's something that we could or would compare directly to human intelligence, even if it didn't work the same way our intelligence works. The argument is that the capacity and the outcomes would be similar enough for us to make the comparison. This is the type of intelligence that we see in science fiction doomsday scenarios, where the machines have rebelled against humans, or the machines appear to misinterpret simple requests, or the machines come to conclusions that, while logically sound, spell doom for us all. The classic example of this, by the way, is appealing to a super smart artificial intelligence and saying, could you please bring about world peace, because we're all sorts of messed up? And the intelligence processes this and then concludes that as long as there are at least two humans, there can never be a guarantee of peace, because there's always the opportunity for disagreement and violence between two humans. And so, to achieve true peace, the computer then goes on a killing spree to wipe out all of humanity.

Now, Searle is not necessarily saying that computers won't contribute to a catastrophic outcome for humanity. Instead, he's saying they're not actually thinking or processing information in a truly intelligent way. They are arriving at outcomes through a series of processes that might appear to be intelligent at first glance, but when you break them down, they all reveal themselves to be nothing more than a very complex series of mathematical processes.
You can even break it down further into binary and say that ultimately each apparent decision would just be a particular sequence of switches that are in the on or off position, and the state of each switch would be determined by the input and the program you were running, not by some intelligent artificial creation that is reasoning through a problem. Essentially, Searle's argument boils down to the difference between syntax and semantics. Syntax would be the set of rules that you would follow with those symbols. For example, in English, the letter Q is nearly always followed by the letter U. The few exceptions to this rule mostly involve romanized words from other languages, in which the letter Q represents a sound that's not natively present in English. So you could program a machine to follow the basic rule that the symbol Q should be followed by the symbol U, assuming you're eliminating all those exceptions I just mentioned. But that doesn't lead to a grasp of semantics, which is actual meaning. Moreover, Searle asserts that it's impossible to come to a grasp of semantics merely through a mastery of syntax. You might know those rules flawlessly, but, Searle argues, you still wouldn't understand why there are rules, or what the output of those rules means, or even what the input means.
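As a concrete illustration of a purely syntactic rule, here's a small sketch in Python that checks the Q-followed-by-U convention just described, ignoring the loanword exceptions. It can apply the rule flawlessly while containing nothing at all about what any word means.

```python
def follows_qu_rule(word: str) -> bool:
    """Purely syntactic check: every 'q' must be followed by a 'u'.
    (Ignoring the loanword exceptions mentioned above.)"""
    w = word.lower()
    return all(w[i + 1] == "u" for i, ch in enumerate(w[:-1]) if ch == "q") \
        and not w.endswith("q")

# The checker enforces the rule without knowing what a "queen" or a
# "quark" is: mastery of syntax, zero grasp of semantics.
for word in ["queen", "quark", "qatar", "iraq"]:
    print(word, follows_qu_rule(word))
```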
There are some general counterarguments that philosophers have made to Searle's thought experiment, and according to the Stanford Encyclopedia of Philosophy, which is a phenomenal resource, though it's also incredibly dense, these counterarguments tend to fall into three groups. The first group agrees with Searle that the person inside the room clearly has no understanding of the Chinese symbols, but the group counters the notion that the system as a whole can't understand it. In fact, they say the opposite. They say, yes, the person inside the room doesn't understand, but you're looking at a single component of a larger system. And if we consider the system, or maybe a virtual mind that exists due to the system, that does have an understanding. This is sort of like saying a neuron in the brain doesn't understand anything. It sends along signals that collectively, and through mechanisms we don't fully understand, become thoughts that we can become conscious of. So in this argument, the person in the room is just a component of an overall system, and the system possesses intelligence even if the component does not.

The second group argues that if the computer system could either simulate the operation of a brain, perhaps with billions of nodes approaching the complexity of a human brain with its billions of neurons, or inhabit a robotic body that could have direct interaction with its environment, then the system could manifest intelligence.

The third group rejects Searle's arguments more thoroughly, on various grounds ranging from Searle's experiment being too narrow in scope to an argument about what the word "understand" actually means. This is where things get a bit more loosey-goosey, and sometimes I feel like arguments in this group amount to "oh yeah?" But again, I'm pragmatic, so I tend to have a pretty strong bias against these arguments, and I recognize that this means I'm not giving them fair consideration because of those biases. A few of these arguments take issue with Searle's assertion that one cannot grasp semantics through an understanding of syntax.

And here's something that I find really interesting: Searle originally published this argument way back in nineteen eighty. It's been nearly forty years since he first proposed it, and to this day there is no consensus on whether or not his argument is sound. So why is that? Well, it's because, as I've covered in this episode, the concepts of intelligence and, more to the point, consciousness are wibbly wobbly, though not, as far as I can tell, timey wimey.
564 00:36:07,480 --> 00:36:11,240 Speaker 1: When we can't even nail down specific definitions for words 565 00:36:11,280 --> 00:36:15,000 Speaker 1: like understand, it becomes difficult to even tell when we're 566 00:36:15,040 --> 00:36:18,799 Speaker 1: agreeing or disagreeing on certain topics. It could be that 567 00:36:18,840 --> 00:36:22,200 Speaker 1: two people in a debate are using the same words 568 00:36:22,200 --> 00:36:25,279 Speaker 1: in different ways when they're actually in agreement 569 00:36:25,360 --> 00:36:30,560 Speaker 1: with one another. Such is the messiness that is intelligence. Further, 570 00:36:31,080 --> 00:36:34,280 Speaker 1: we've not yet observed anything in the machine world that seems, 571 00:36:34,400 --> 00:36:39,200 Speaker 1: upon closer examination, to reflect true intelligence and consciousness, at 572 00:36:39,280 --> 00:36:42,560 Speaker 1: least as we experience them. In fact, we 573 00:36:42,600 --> 00:36:45,719 Speaker 1: can't say that we've seen any artificial constructs that have 574 00:36:45,840 --> 00:36:49,399 Speaker 1: experienced anything, because, as far as we know, no such 575 00:36:49,440 --> 00:36:53,600 Speaker 1: device has any awareness of itself. Now, I'm not sure 576 00:36:54,040 --> 00:36:57,080 Speaker 1: if we'll ever create a machine that will have true 577 00:36:57,120 --> 00:37:00,960 Speaker 1: intelligence and consciousness, using the word true here to mean 578 00:37:01,200 --> 00:37:04,960 Speaker 1: humanlike. But I feel pretty confident that if it 579 00:37:05,120 --> 00:37:09,000 Speaker 1: is possible, we will get around to it eventually. It 580 00:37:09,080 --> 00:37:12,799 Speaker 1: might take way more resources than we currently estimate, or 581 00:37:12,840 --> 00:37:16,240 Speaker 1: maybe it will just require a different computational approach. Maybe 582 00:37:16,239 --> 00:37:20,960 Speaker 1: it'll rely on bleeding-edge technologies like quantum computing. I figure, 583 00:37:21,000 --> 00:37:24,239 Speaker 1: if it's something we can do, we will do it. 584 00:37:24,239 --> 00:37:28,160 Speaker 1: It's just a question of time, really. And further, it's 585 00:37:28,160 --> 00:37:31,120 Speaker 1: hard for me to come to a conclusion other than 586 00:37:31,280 --> 00:37:37,200 Speaker 1: that it will ultimately prove possible to make an intelligent, conscious construct. Now, 587 00:37:37,239 --> 00:37:41,040 Speaker 1: I believe that because I believe our own intelligence and 588 00:37:41,120 --> 00:37:46,160 Speaker 1: our own consciousness are firmly rooted in our brains. I 589 00:37:46,360 --> 00:37:49,640 Speaker 1: don't think there's anything mystical involved. And while we don't 590 00:37:49,680 --> 00:37:52,320 Speaker 1: have a full picture of how it happens in our brains, 591 00:37:52,760 --> 00:37:55,839 Speaker 1: we at least know that it does happen, and we 592 00:37:55,920 --> 00:37:58,319 Speaker 1: know some of the questions to ask and have some 593 00:37:58,440 --> 00:38:01,400 Speaker 1: ideas on how to search for answers.
It's not a 594 00:38:01,400 --> 00:38:04,239 Speaker 1: complete picture, and we still have a very long way 595 00:38:04,280 --> 00:38:06,880 Speaker 1: to go, but I think if it's possible to 596 00:38:06,960 --> 00:38:09,640 Speaker 1: build a full understanding of how our brains work with 597 00:38:09,680 --> 00:38:13,640 Speaker 1: regard to intelligence and consciousness, we'll get there too, sooner 598 00:38:13,719 --> 00:38:18,880 Speaker 1: or later. Probably later. I suppose there's still the chance 599 00:38:19,200 --> 00:38:23,720 Speaker 1: that we could create an intelligent and/or conscious machine 600 00:38:23,880 --> 00:38:28,400 Speaker 1: just by luck or accident. And while I intuitively feel 601 00:38:28,560 --> 00:38:31,960 Speaker 1: that this is unlikely, I have to admit that intuition 602 00:38:32,320 --> 00:38:36,319 Speaker 1: isn't really reliable in these matters. It feels to me 603 00:38:36,960 --> 00:38:40,200 Speaker 1: like it is the longest of long shots, but that's 604 00:38:40,400 --> 00:38:42,799 Speaker 1: entirely based on the fact that we haven't managed to 605 00:38:42,800 --> 00:38:46,799 Speaker 1: do it up to and including now. Maybe the 606 00:38:46,920 --> 00:38:49,880 Speaker 1: right sequence of events is right around the corner. Just 607 00:38:49,960 --> 00:38:53,640 Speaker 1: because it hasn't happened yet doesn't mean it can't or 608 00:38:53,800 --> 00:38:57,520 Speaker 1: won't happen at all. And it's good to remember that 609 00:38:57,600 --> 00:39:02,000 Speaker 1: machines don't need to be particularly intelligent or conscious 610 00:39:02,280 --> 00:39:06,880 Speaker 1: to be useful or potentially dangerous. We can see examples 611 00:39:06,880 --> 00:39:09,400 Speaker 1: of that playing out already with devices that have some 612 00:39:09,520 --> 00:39:13,560 Speaker 1: limited or weak AI. And by limited I mean it's 613 00:39:13,600 --> 00:39:17,440 Speaker 1: not general intelligence. I don't mean that the AI itself 614 00:39:17,520 --> 00:39:21,120 Speaker 1: is somehow unsophisticated or primitive. So it may not even 615 00:39:21,239 --> 00:39:24,759 Speaker 1: matter if we never create devices that have true or 616 00:39:24,840 --> 00:39:28,400 Speaker 1: humanlike intelligence. We might be able to accomplish just 617 00:39:28,480 --> 00:39:31,840 Speaker 1: as much with something that does not have those capabilities. 618 00:39:32,719 --> 00:39:35,840 Speaker 1: In other words, this is a very complicated topic, 619 00:39:36,560 --> 00:39:39,560 Speaker 1: one that I think gets oversimplified in a lot of 620 00:39:39,560 --> 00:39:45,120 Speaker 1: fiction and also in a lot of speculative prognostications about 621 00:39:45,120 --> 00:39:47,760 Speaker 1: the future. I mean, you'll see a lot of videos 622 00:39:47,800 --> 00:39:52,040 Speaker 1: about how in the future AI is going to play 623 00:39:52,280 --> 00:39:55,040 Speaker 1: a more integral role, or maybe it will be an 624 00:39:55,040 --> 00:39:58,759 Speaker 1: existential threat to humanity or whatever it may be.
And 625 00:39:58,800 --> 00:40:00,640 Speaker 1: I think a lot of that is predicated upon 626 00:40:02,200 --> 00:40:08,359 Speaker 1: a deep misunderstanding or underestimation of how complicated cognitive neuroscience 627 00:40:08,400 --> 00:40:11,920 Speaker 1: actually is and how little we really understand when it 628 00:40:11,920 --> 00:40:14,960 Speaker 1: comes to our own consciousness, let alone how we would 629 00:40:14,960 --> 00:40:18,960 Speaker 1: bring about such a thing in a different device. What 630 00:40:19,000 --> 00:40:21,880 Speaker 1: do you guys think? Do you think that maybe 631 00:40:21,960 --> 00:40:26,480 Speaker 1: I'm overstating the complexity? Do you think that I'm off base? 632 00:40:26,560 --> 00:40:28,920 Speaker 1: Do you agree with me? And do you have any 633 00:40:29,000 --> 00:40:31,920 Speaker 1: other topics you would like me to cover? I invite 634 00:40:31,920 --> 00:40:34,000 Speaker 1: you to let me know. Send me an email. The 635 00:40:34,000 --> 00:40:37,439 Speaker 1: address is tech stuff at how stuff works dot com, 636 00:40:37,560 --> 00:40:39,360 Speaker 1: or drop me a line on Facebook or Twitter. The 637 00:40:39,400 --> 00:40:41,719 Speaker 1: handle at both of those is Tech Stuff HSW. 638 00:40:41,960 --> 00:40:45,120 Speaker 1: Don't forget to go to our website, that's tech Stuff 639 00:40:45,200 --> 00:40:47,520 Speaker 1: podcast dot com. That's where we have an archive of 640 00:40:47,560 --> 00:40:50,319 Speaker 1: all of our past episodes, as well as a link 641 00:40:50,440 --> 00:40:53,200 Speaker 1: to our online store, where every purchase you make goes 642 00:40:53,239 --> 00:40:55,799 Speaker 1: to help the show. We greatly appreciate it, and I 643 00:40:55,840 --> 00:41:03,239 Speaker 1: will talk to you again really soon. Tech Stuff is 644 00:41:03,239 --> 00:41:05,759 Speaker 1: a production of I Heart Radio's How Stuff Works. For 645 00:41:05,840 --> 00:41:08,799 Speaker 1: more podcasts from I Heart Radio, visit the I Heart 646 00:41:08,880 --> 00:41:12,040 Speaker 1: Radio app, Apple Podcasts, or wherever you listen to your 647 00:41:12,080 --> 00:41:12,800 Speaker 1: favorite shows.