Speaker 1: Welcome to TechStuff, a production from iHeartRadio.

Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeartRadio, and I love all things tech, and I owe you guys an apology: you're having a rerun episode today. So this is a rerun from twenty nineteen about machine consciousness, and it's again one of those tricky concepts. Consciousness is a difficult thing to define, even for humans. So sit back and enjoy this rerun. I'll talk to you again at the end of the episode.

There's a topic I have touched on on several occasions in past episodes, but I really wanted to dig down into it today, because it's one of those that's fascinating and is an underpinning for tons of speculative fiction and horror stories. And since we're now in October, I figured this would be kind of thematically linked to Halloween, tech style. It turns out it's pretty hard to do Halloween technology stories; I've already covered stuff like haunted house technology. So today we're going to talk about consciousness and whether or not it might be possible that machines could one day achieve consciousness.

Now, I could start this off by talking about the Turing test, which many people have used as the launch point for machine intelligence and machine consciousness debates. The way we understand that test today, which by the way is slightly different from the test that Alan Turing first proposed, is that you have a human interviewer who, through a computer interface, asks questions of a subject. The subject might be another human, or it might be a computer program posing as a human, and the interviewer just sees text on a screen. If the interviewer is unable to pass a certain threshold of being able to tell the difference, to determine whether it was a machine or a person, then the program or machine that's being tested is said to have passed the Turing test.
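Just to make that pass-or-fail idea a little more concrete, here's a tiny sketch of the evaluation loop in Python. Everything in it is a made-up stand-in, the subjects, the interviewer, and the threshold, not anyone's actual implementation of the test:

```python
import random

def run_turing_test(interviewer, subjects, trials=30, threshold=0.6):
    """Toy sketch: the interviewer sees only text and guesses 'human' or
    'machine'; the machine 'passes' if the interviewer can't beat the threshold."""
    correct = 0
    for _ in range(trials):
        kind, reply = random.choice(subjects)  # hidden subject sends back text
        if interviewer(reply) == kind:         # interviewer tries to label it
            correct += 1
    return correct / trials < threshold        # below threshold = indistinguishable

# Hypothetical usage: both subjects answer identically, so guessing is chance.
subjects = [("human", "I think, therefore I am."),
            ("machine", "I think, therefore I am.")]
guesser = lambda text: random.choice(["human", "machine"])
print(run_turing_test(guesser, subjects))
```

Passing, in a sketch like that, just means the interviewer's guesses stopped being reliable.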
Passing the test doesn't mean the program or machine is conscious or even intelligent, but rather says that, to outward appearances, it seems to be intelligent and conscious. See, we humans can't be absolutely sure that other humans are conscious and intelligent. We assume that they are because each of us knows of our own consciousness and our own intelligence. We have direct personal experience with that, and other people seem to display behaviors that indicate they too possess those traits and they too have a personal experience. But we cannot be those other people, and so we have to grant them the consideration that they too are conscious and intelligent. And I agree, that is very big of us.

This is actually called the problem of other minds in the field of philosophy, and the problem is this: it is impossible for any one of us to step outside of ourselves and into any other person's consciousness. We cannot feel what other people are feeling or experience their thoughts firsthand. We are aware of our own abilities, but we are only aware of the appearance that other people share those abilities. So assuming that other people also experience consciousness, rather than imitating it really, really well, that's a step we all have to take.

Turing's point is that if we do grant that consideration to other people, why would we not do it for machines as well? I mean, the machine appears to possess the same qualities as a human. This is a hypothetical machine, so we cannot experience what that machine is going through, just as we can't experience what another person is going through, at least not on the intrinsic personal level. So why would we not grant the machine the same consideration that we would grant to people? Turing was being a little cheeky. But while I just gave kind of a super fast, high-level description of the Turing test, that's not actually where I want to start. I want to begin with the concept of consciousness itself.
Now, the reason I want to do this isn't just to make a longer podcast. It's because I think one of the most fundamental problems with the discussion about AI intelligence, self-awareness, and consciousness is that there tends to be a pretty large disconnect between the biologists and the doctors who specialize in neuroscience, particularly cognitive neuroscience, and thus have some understanding about the nature of consciousness in people, and the computer scientists who have a deep understanding of how computers process information. And while we frequently will compare brains to computers, that comparison is not one to one. It is largely a comparison of convenience, and in some cases you could argue it's not terribly useful; it might actually be counterproductive. And so I think at least some of the speculation about machine consciousness is based on a lack of understanding of how complicated and mysterious this topic is in the first place.

And this ends up being really tricky. Consciousness isn't an easily defined quality or quantity. Some people like to say we don't so much define consciousness by what it is, but rather by what it isn't. And this will also kind of bring us into the realm of philosophy. Now, I'm going to be honest with you, guys: the realm of philosophy is not one I'm terribly comfortable in. I'm pretty pragmatic, and philosophy deals with a lot of stuff that is, at least for now, unknowable. Philosophy sometimes asks questions that we do not and cannot have the answer to, and in many cases we may never be able to answer those questions. And the pragmatist in me says, well, why bother asking the question if you can never get the answer? Let's just focus on the stuff we actually can answer. Now, I realize this is a limitation on my part. I'm owning that. I'm not out to upset the philosophical apple cart.
I'm just of a different philosophical bent, and I realize that just because we can't answer some questions right now, that doesn't necessarily mean they will all go unanswered for all time. We might glean a way of answering at least some of them, though I suspect a few will be forever unanswered.

If we go with the basic dictionary definition of consciousness, it's, quote, "the state of being awake and aware of one's surroundings," end quote. But what this doesn't tell us is what's going on that lets us do that. It also doesn't talk about being aware of oneself, which we largely consider consciousness to be part of. It's not just being aware of your surroundings, but being aware that you exist within those surroundings, of your relationship to your surroundings, and of things that are going on within you yourself, your feelings and your thoughts. The fact that you can process all of this, that you can reflect upon yourself, we tend to group that into consciousness as well. So how is it that we can feel things and be aware of those feelings? How is it that we can have intentions and be aware of our intentions? We are more complex than beings that simply react to sensory input. We are more than beings that respond to stuff like hunger, fear, or the desire to procreate. We have motivations, sometimes really complex motivations, and we can reflect on those; we can examine them, we can question them, we can even change them. So how do we do this?

Now, we know this is special because some of the things we can do are shared among a very few species on Earth. For example, we humans can recognize our own reflections in a mirror starting at around age two or so. We can see the mirror image and we recognize that the mirror image is of us. Now, there are only eight species that can do this, that we know about anyway. Those species are the great apes, so you've got humans, gorillas, orangutans, bonobos, and chimpanzees, plus the magpie, the dolphin, and that's it.
Oh, and the magpies are birds, right? That's all of them. Recognizing one's own form in a mirror shows a sense of self-awareness, literally an awareness of one's self.

There are a lot of great resources online and offline that go into the topic of consciousness. Heck, there are numerous college-level courses and graduate-level courses dedicated to this topic, so I'm not going to be able to go into all the different hypotheses, arguments, counterarguments, et cetera in this episode, but I can cover some basics. Also, I highly recommend you check out Vsauce's video on YouTube titled "What Is Consciousness?", because it's really good. And no, I don't know Michael. I have no connection to him; I've never met him. This is just an honest recommendation from me, and I have no connection whatsoever to that video series. The video includes a link to what Vsauce dubs a "lean back," which is a playlist of related videos on the subject at hand, in this case consciousness. Those are also really fascinating. But I do want to point out that, at least at the time of this recording, a couple of the videos in that playlist have since been delisted from YouTube for whatever reason, so there are a couple of blank spots in there.

But what those videos show, and what countless papers and courses and presentations also show, is that the brain is so incredibly complex and nuanced that we don't know what we don't know. We do know that there are some pretty funky things going on in the gray matter up in our noggins, and we also know that many of the explanations given to describe consciousness rely upon some assumptions that we don't have any substantial evidence for. You can't really assert something to be true if it's based on a premise that you also don't know to be true. That's not how good science works. This is also why I reject the arguments around stuff like ghost hunting equipment.
The use of that equipment is predicated on the argument that ghosts exist and that they have certain influences on their environment. But we haven't proven that ghosts exist in the first place, let alone that they can affect the environment. So selling a meter that supposedly detects a ghostly presence from electromagnetic fluctuations makes no logical sense. For us to know that to be true, we would already have to have established, one, that ghosts are real, and two, that they have these electromagnetic effects, and we haven't done that. It's like working science in reverse. That's not how it works.

Anyway, there are a lot of arguments about consciousness that suggest perhaps there's some ineffable force that informs it. You can call it the spirit or the soul or whatever. So that argument suggests that this thing we've never proven to exist is what gives rise to consciousness, and that's a problem. We can't really state that. I mean, you can't say the reason this thing exists is that this other thing, which we've never proven to exist, makes it exist. Well, then you've just made it harder to even prove anything. And we have evidence that also shows that whole idea doesn't hold water. The evidence comes in the form of brain disorders, brain diseases, and brain damage. We have seen that disease and damage to the brain affects consciousness, which suggests that consciousness manifests from the actual form and function of our brains, not from any mysterious force. Our ability to perceive, to process information, to have an understanding of the self, to have an accurate reflection of what's going on around us within our own conceptual reality, all of that appears to be predicated primarily upon the brain.

Now, originally I was planning to give a rundown on some of the prevailing theories about consciousness. In other words, I wanted to summarize the various schools of thought about how consciousness actually arises.
But as I dove down into the research, it became apparent really quickly that such a discussion would require so much groundwork and, more importantly, a much deeper understanding on my part than would be practical for this podcast. So instead of talking about the higher-order theory of consciousness versus the global workspace theory versus integrated information theory, I'll take a step back and I'll say there's a lot of ongoing debate about the subject, and no one has conclusively proven that any particular theory or argument is most likely true. Each theory has its strengths and its weaknesses, and complicating matters further is that we haven't refined our language around the concepts enough to differentiate various ideas. That means you can't talk about an organism being conscious of something and have that degree of consciousness be somehow inherently specific. It's not, and that's the issue.

So, for example, I could say a rat is conscious of a rat terrier, a type of dog that hunts down rats, and that as a result of this consciousness of the rat terrier, the rat attempts to remain hidden so as not to be killed. But does that mean the rat merely perceives the rat terrier and thus is trying to stay out of its way, and that's as far as the consciousness goes? Or does it mean that the rat actually has a deeper, more meaningful awareness of the rat terrier? The language isn't much help here, and moreover, there's debate about what degrees of consciousness there even are.

Also, while I've been harping on consciousness, that's not the only concept we have to consider. Another is intelligence, which is distinct from consciousness, though there are some similarities. Like consciousness, intelligence is predicated upon brain functions. Again, a long history of investigating brain disorders and brain damage indicates this, as such damage can affect not just consciousness but also intelligence. So what is intelligence? Well, get ready for this: like consciousness, there's no single agreed-upon definition or theory of intelligence.
In general, we use the word intelligence to describe the ability to think, to learn, to absorb knowledge, and to make use of it to develop skills. Intelligence is what allowed humans to learn how to make basic tools, to gain an understanding of how to cultivate plants and develop agriculture, to develop architecture, to understand mathematical principles, and all sorts of stuff. So in humans, we tend to lump consciousness and intelligence together. We tend to think in terms of being intelligent and being self-aware, but the two need not necessarily go hand in hand. There are many people who believe it could be possible to construct an artificial intelligence or an artificial consciousness independently of one another. When we come back, I'll explain more, but first let's take a quick break.

So, in a very general sense, the group of hypotheses that fall under the integrated information theory umbrella state that consciousness emerges through linking elements in our brains, these being neurons processing large amounts of information, and that it's the scale of this endeavor that then leads to consciousness. In other words, if you have enough processors working on enough information, and they're all interconnected with each other, and it's very complicated, bang, you get consciousness.

Now, it is clear our brains process a lot of information. If you do a search in textbooks or online, you'll frequently encounter the stat that our brains have around one hundred billion neurons in them and ten times as many glial cells. Neurons are like the processors in a computer system, and glial cells would be the support systems and insulators for those processors. Anyway, those numbers have since come under some dispute, thanks to an associate professor at Vanderbilt University named Suzana Herculano-Houzel.
She explained that the old way of estimating how many neurons the brain had appeared to be based on taking slices of the brain, estimating the number of neurons in each slice, and then kind of extrapolating that number to apply across the brain in general. But that ignores stuff like the density of cells and the distribution of the cells across the brain. So what she did, and this also falls into the category of Halloween horror stories, is she took a brain and she freaking dissolved it. She could then get a count of the neuron nuclei that were in the soupy mix. By her accounting, the brain has closer to eighty-six billion neurons and just as many glial cells. Still a lot of cells, mind you, but you've got to admit it's a bit of a blow to lose fourteen billion neurons overnight.

Still, we're talking about billions of neurons that interconnect through an incredibly complex system in our brains, with different regions of the brain handling different things. So, yeah, we're processing a lot of information all the time, and we do happen to be conscious. So could it be possible that with a sufficiently powerful computer system, perhaps made up of hundreds or thousands or tens of thousands of individual computers, each with hundreds of processors, you could end up with an emergent consciousness? Or, as some people have proposed, could the Internet itself become conscious due to the fact that it is an enormous system of interconnected nodes that is pushing around incredible amounts of information? Well, maybe. Maybe it's possible. But here's the kicker: this theory doesn't actually explain the mechanism by which the consciousness emerges. See, it's one thing to process information; it's another thing to be aware of that experience. So when I perceive a color, I'm not just perceiving a color, I'm aware that I'm experiencing that color. Or, to put it another way, I can relate something to how it makes me feel, or to some other subjective experience that is personal to me.
So a machine might objectively be able to return data about stuff like, what is the color of this piece of paper? It analyzes the light that's being reflected off that piece of paper and compares that light to a spectrum of colors. But that's still not the same thing as having the subjective experience of perceiving the color. And there may well be some connection between the complexity of the interconnected neurons in our brains, the amount of information that we're processing, and our sense of consciousness, but the theory doesn't actually explain what that connection is. It's more like saying, hey, maybe this thing we have, this experience of consciousness, is also linked to this other thing, without actually making the link between the two. It appears to be correlative, but not necessarily causal.

To relate that to our personal experience, imagine that you've just poofed into existence. You have no prior knowledge of the world, or the physics in that world, or basic stuff like that, so you're drawing conclusions about the world around you based solely on your observations as you wander around and do stuff. And at one point you see an interesting-looking rock on the path, so you bend over and you pick up the rock, and when you do, it starts to rain. And you think, well, maybe I caused it to rain because I picked up this rock. And maybe it happens a few times where you pick up a rock and it starts to rain, which seems to support your thesis. But does that mean you're actually causing the effects that you are observing? If so, what is it about picking up the rock that's making it rain? Now, even in this absurd case that I'm making, you could argue that if there's never an instance in which picking up the rock wasn't immediately followed by rain, there's a lot of evidence to suggest the two are linked. But you still can't explain why they are linked, why one causes the other.
And that's a problem, because without that piece, you're never really totally sure that you're on the right track. That's kind of where we are with consciousness. We've got a lot of ideas about what makes it happen, but those ideas are mostly missing key pieces that explain why it's happening. Now, it's possible that we cannot reduce consciousness any further than we already have, and maybe that means we never really get a handle on what makes it happen. It's also possible that we could facilitate the emergence of consciousness in machines without knowing how we did it. Essentially, that would be like stumbling upon the phenomenon by luck; we just happened to create the conditions necessary to allow some form of artificial consciousness to emerge. Now, I think this might be possible, but it strikes me as a long shot. I think of it like being locked in a dark warehouse filled with every mechanical part you can imagine, and you start trying to put things together in complete darkness, and then the lights come on and you see that you have created a perfect replica of an F-15 fighter jet. Is that possible? Well, I mean, yeah, I guess, but it seems overwhelmingly unlikely. But again, this is based off ignorance. It's based off the fact that it hasn't happened yet, so I could be totally wrong here.

Now, on the flip side of that, programmers, engineers, and scientists have created computer systems that can process information in intricate ways to come up with solutions to problems that seem, at least at first glance, to be similar to how we humans think. We even have names for systems that reflect biological systems, like artificial neural networks. Now, the name might make it sound like it's a robot brain, but it's not quite that. Instead, it's a model for computing in which components in the system act kind of like neurons. They're interconnected, and each one does a specific process. The nodes in the computer system connect to other nodes.
So you feed the system input, whatever it is you want to process, and then the nodes that accept that input perform some form of operation on it and send the resulting data, the answer after they've processed this information, on to other nodes in the network. It's a non-linear approach to computing, and by adjusting the processes each node performs, which is also known as adjusting the weights of the nodes, you can tweak the outcomes. Now, this is incredibly useful. If you already know the outcome you want, you can tweak the system so that it learns, or is trained, to recognize something specific.

For example, you could train a computer system to recognize faces. You would feed it images. Some of the images would have faces in them, some would not have faces in them, and some might have something that could be a face but it's hard to tell, maybe a shape in a picture that looks kind of like a face but isn't actually someone's face. Anyway, you train the computer model to try and separate the faces from the non-faces, and it might take many iterations to get the model trained up using your starting data, your training data. Now, once you do have your computer model trained up, once you've tweaked all the nodes so that it is reliably producing results that say yes, this is a face, or no, this isn't, you can feed that same computer model brand new images that it has never seen before, and it can perform the same functions. You have taught the computer model how to do something. But this isn't like spontaneous intelligence, and it's not connected to consciousness. You couldn't really call it thinking so much as being trained to recognize specific patterns pretty well. Now, that's just one example of putting an artificial neural network to use. There are lots of others, and there are also systems like IBM's Watson, which also appears at casual glance to think.
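To make that weight-adjusting idea a little more concrete, here's a deliberately tiny sketch of a single artificial "node" being trained. The three-number "images" and their labels are invented stand-ins for illustration only; a real face detector would use many layers of nodes and actual image data.

```python
import random

# A toy "node": a handful of adjustable weights, nudged over many iterations
# until its output separates two classes. The feature vectors below are made up.
training_data = [
    ([0.9, 0.8, 0.7], 1),  # label 1 = "face"
    ([0.8, 0.9, 0.6], 1),
    ([0.1, 0.2, 0.3], 0),  # label 0 = "not a face"
    ([0.2, 0.1, 0.2], 0),
]

weights = [random.uniform(-1, 1) for _ in range(3)]
bias = 0.0
learning_rate = 0.1

def predict(features):
    # weighted sum of the inputs, thresholded into a yes/no answer
    total = bias + sum(w * x for w, x in zip(weights, features))
    return 1 if total > 0 else 0

# training: adjust the weights whenever the node gets an answer wrong
for _ in range(100):
    for features, label in training_data:
        error = label - predict(features)
        weights = [w + learning_rate * error * x for w, x in zip(weights, features)]
        bias += learning_rate * error

# now feed it inputs it has never seen before
print(predict([0.85, 0.75, 0.8]))  # expected: 1 ("face")
print(predict([0.15, 0.20, 0.1]))  # expected: 0 ("not a face")
```

The "learning" there is just repeated nudging of numbers toward answers we already supplied; nothing in that loop is aware of faces.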
Watson's appearance of thinking was helped in no small part by the very public display of it competing on special episodes of Jeopardy, in which it went up against human opponents who were former Jeopardy champions themselves. Watson famously couldn't call upon the Internet to search for answers; all the data the computer could access was self-contained in its undeniably voluminous storage. The computer had to parse what the Jeopardy clues were actually looking for, then come up with an appropriate response. And to make matters trickier, the computer wasn't returning a guaranteed right answer. The computer had to come to a judgment on how confident it was that the answer it had arrived at was the correct one. If the confidence met a certain threshold, then Watson would submit an answer. If it did not meet that threshold, Watson would remain silent. It's a remarkable achievement, and it has lots of potential applications, many of which are actually in action today, but it's still not quite at the level of a machine thinking like a human, and I don't think anyone at IBM would suggest that it possesses any sense of consciousness.

When we come back, I'll talk about a famous thought experiment that really starts to examine whether or not machines could ever attain intelligence and consciousness. But first, let's take another quick break.

And now this brings me to a famous thought experiment proposed by John Searle, a philosopher who questioned whether we could say a machine, even one so proficient that it could deliver reliable answers on demand, would ever truly be intelligent, at least on a level similar to what we humans identify as being intelligent. It's called the Chinese room argument, which Searle included in his article titled "Minds, Brains, and Programs" for the journal Behavioral and Brain Sciences. Here's the premise of the thought experiment. Imagine that you are in a simple room. The room has a table and a chair.
There's a ream of blank paper, there's a brush, there's some ink, and there's also a large book within the room that contains pairs of Chinese symbols. Oh, and we also have to imagine that you don't understand or recognize these Chinese symbols; they mean nothing to you. There's also a door to the room, and the door has a mail slot, and every now and again someone slides a piece of paper through the slot. The piece of paper has one of those Chinese symbols printed on it. It's your job to go through the book and find the matching symbol, plus the corresponding symbol in the pair, because remember, I said there were symbols that were paired together. You then take a blank sheet of paper, you draw the corresponding symbol from that pair onto the sheet of paper, and finally you slip that piece of paper through the mail slot, presumably to the person who gave you the original piece of paper.

So to an outside observer, let's say it's actually the person who's slipping the pieces of paper to you, it would seem that whoever is inside the room understands Chinese symbols. They can recognize the significance of whatever symbol was sent in through the mail slot, match it to whatever the corresponding data is for that particular symbol, and then return that to the user. So to the outside observer, it appears as though whatever is inside the room comprehends what it is doing. But, argues Searle, that's only an illusion, because the person inside the room doesn't know what any of those symbols actually means. So if this is you, you have no context. You don't know what any individual symbol stands for, nor do you understand why any symbol would be paired with any other symbol. You don't know the reasoning behind that. All you have is a book of rules, but the rules only state what your response should be given a specific input.
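As a minimal sketch of that rule book, and only a sketch, with a few arbitrary symbol pairings standing in for Searle's much larger book, the entire job of the person in the room amounts to a lookup:

```python
# The "book": arbitrary, made-up symbol pairings. Nothing here encodes
# what any symbol means, only which symbol answers which.
rule_book = {
    "你好": "您好",
    "谢谢": "不客气",
    "再见": "再见",
}

def person_in_the_room(slip_of_paper: str) -> str:
    """Find the incoming symbol in the book and copy out its partner."""
    return rule_book[slip_of_paper]

# From outside the mail slot, this looks like comprehension.
print(person_in_the_room("谢谢"))
```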
The rules don't tell you why, either on a granular level of what the symbols actually mean, or on a larger scale when it comes to what you're actually accomplishing in this endeavor. All you are doing is performing a physical action over and over based on a set of rules you don't understand. And Searle then uses this argument to say that, essentially, we have to think the same way about machines. The machines process information based on the input they receive and the program they are following. That's it. They don't have awareness or understanding of what the information is.

Searle was taking aim at a particular concept in AI, often dubbed strong AI or general AI. It's a sort of general artificial intelligence, something that we could or would compare directly to human intelligence, even if it didn't work the same way our intelligence works. The argument is that the capacity and the outcomes would be similar enough for us to make the comparison. This is the type of intelligence that we see in science fiction doomsday scenarios, where the machines have rebelled against humans, or the machines appear to misinterpret simple requests, or the machines come to conclusions that, while logically sound, spell doom for us all. The classic example of this, by the way, is appealing to a super smart artificial intelligence and saying, could you please bring about world peace, because we're all sorts of messed up. The intelligence processes this and then concludes that as long as there are at least two humans, there can never be a guarantee of peace, because there's always the opportunity for disagreement and violence between two humans. And so, to achieve true peace, the computer goes on a killing spree to wipe out all of humanity. Now, Searle is not necessarily saying that computers won't contribute to a catastrophic outcome for humanity. Instead, he's saying they're not actually thinking or processing information in a truly intelligent way.
They are arriving at outcomes through a series of processes that might appear to be intelligent at first glance, but when you break them down, they all reveal themselves to be nothing more than a very complex series of mathematical processes. You could even break it down further into binary and say that, ultimately, each apparent decision would just be a particular sequence of switches that are in the on or off position, and the status of each switch would be determined by the input and the program you were running, not by some intelligent artificial creation that is reasoning through a problem.

Essentially, Searle's argument boils down to the difference between syntax and semantics. Syntax would be the set of rules that you would follow with those symbols. For example, in English, the letter Q is nearly always followed by the letter U. The few exceptions to this rule mostly involve romanizing words from other languages in which the letter Q represents a sound that's not natively present in English. So you could program a machine to follow the basic rule that the symbol Q should be followed by the symbol U, assuming you're eliminating all those exceptions I just mentioned. But that doesn't lead to a grasp of semantics, which is actual meaning. Moreover, Searle asserts that it's impossible to come to a grasp of semantics merely through a mastery of syntax. You might know those rules flawlessly, but, Searle argues, you still wouldn't understand why there are rules, or what the output of those rules means, or even what the input means.

There are some general counterarguments that philosophers have made to Searle's thought experiment, and according to the Stanford Encyclopedia of Philosophy, which is a phenomenal resource (it's also incredibly dense), these counterarguments tend to fall into three groups. The first group agrees with Searle that the person inside the room clearly has no understanding of the Chinese symbols.
But the group counters the notion that the system as a whole can't understand. In fact, they say the opposite. They say, yes, the person inside the room doesn't understand, but you're looking at a specific component of a larger system, and if we consider the system, or maybe a virtual mind that exists due to the system, that does have an understanding. This is sort of like saying a neuron in the brain doesn't understand anything; it sends along signals that, collectively and through mechanisms we don't fully understand, become thoughts that we can become conscious of. So in this argument, the person in the room is just a component of an overall system, and the system possesses intelligence even if the component does not.

The second group argues that if the computer system could either simulate the operation of a brain, perhaps with billions of nodes approaching the complexity of a human brain with billions of neurons, or inhabit a robotic body that could have direct interaction with its environment, then the system could manifest intelligence.

The third group rejects Searle's arguments more thoroughly and on various grounds, ranging from Searle's experiment being too narrow in scope to an argument about what the word understand actually means. This is where things get a bit more loosey-goosey, and sometimes I feel like arguments in this group amount to "oh yeah?" But again, I'm pragmatic, so I tend to have a pretty strong bias against these arguments, and I recognize that this means I'm not giving them fair consideration because of those biases. A few of these arguments take issue with Searle's assertion that one cannot grasp semantics through an understanding of syntax.

And here's something that I find really interesting: Searle originally published this argument way back in nineteen eighty. It's been nearly forty years since he first proposed it, and to this day there is no consensus on whether or not his argument is sound. So why is that?
Well, it's because, 566 00:36:20,760 --> 00:36:24,080 Speaker 1: as I've covered in this episode, the concepts of intelligence 567 00:36:24,160 --> 00:36:28,520 Speaker 1: and, more to the point, consciousness are wibbly wobbly, though 568 00:36:28,880 --> 00:36:31,960 Speaker 1: not, as far as I can tell, timey wimey. When 569 00:36:32,000 --> 00:36:36,800 Speaker 1: we can't even nail down specific definitions for words like "understand," 570 00:36:37,239 --> 00:36:40,160 Speaker 1: it becomes difficult to even tell when we're agreeing or 571 00:36:40,200 --> 00:36:43,760 Speaker 1: disagreeing on certain topics. It could be that while people 572 00:36:43,840 --> 00:36:47,320 Speaker 1: are in a debate and are using words in different ways, 573 00:36:47,560 --> 00:36:50,400 Speaker 1: it turns out they're actually in agreement with one another. 574 00:36:50,920 --> 00:36:55,759 Speaker 1: Such is the messiness that is intelligence. Further, we've not 575 00:36:55,840 --> 00:36:58,919 Speaker 1: yet observed anything in the machine world that seems, upon 576 00:36:59,000 --> 00:37:03,480 Speaker 1: closer examination, to reflect true intelligence and consciousness, at 577 00:37:03,560 --> 00:37:06,840 Speaker 1: least in the way we experience it. In fact, we 578 00:37:06,880 --> 00:37:10,040 Speaker 1: can't say that we've seen any artificial constructs that have 579 00:37:10,160 --> 00:37:13,680 Speaker 1: experienced anything, because, as far as we know, no such 580 00:37:13,719 --> 00:37:17,960 Speaker 1: device has any awareness of itself. Now, I'm not sure 581 00:37:18,320 --> 00:37:21,400 Speaker 1: if we'll ever create a machine that will have true 582 00:37:21,400 --> 00:37:25,279 Speaker 1: intelligence and consciousness, using the word "true" here to mean 583 00:37:25,520 --> 00:37:29,240 Speaker 1: human-like. But I feel pretty confident that if it 584 00:37:29,360 --> 00:37:33,319 Speaker 1: is possible, we will get around to it eventually. It 585 00:37:33,400 --> 00:37:37,120 Speaker 1: might take way more resources than we currently estimate, or 586 00:37:37,120 --> 00:37:40,560 Speaker 1: maybe it will just require a different computational approach, or maybe 587 00:37:40,560 --> 00:37:44,960 Speaker 1: it'll rely on bleeding-edge technologies like quantum computing. I 588 00:37:44,960 --> 00:37:48,520 Speaker 1: figure if it's something we can do, we will do it. 589 00:37:48,520 --> 00:37:52,399 Speaker 1: It's just a question of time, really. And further, it's 590 00:37:52,480 --> 00:37:55,439 Speaker 1: hard for me to come to a conclusion other than 591 00:37:55,560 --> 00:38:00,799 Speaker 1: that it will ultimately prove possible to make an intelligent, conscious construct. 592 00:38:01,280 --> 00:38:05,120 Speaker 1: Now I believe that because I believe our own intelligence 593 00:38:05,239 --> 00:38:09,560 Speaker 1: and our own consciousness are firmly rooted in our brains. 594 00:38:10,280 --> 00:38:13,680 Speaker 1: I don't think there's anything mystical involved. And while we 595 00:38:13,719 --> 00:38:16,000 Speaker 1: don't have a full picture of how it happens in 596 00:38:16,040 --> 00:38:19,440 Speaker 1: our brains, we at least know that it does happen, 597 00:38:19,880 --> 00:38:22,080 Speaker 1: and we know some of the questions to ask and 598 00:38:22,200 --> 00:38:25,439 Speaker 1: have some ideas on how to search for answers.
It's 599 00:38:25,440 --> 00:38:28,080 Speaker 1: not a complete picture, and we still have a very 600 00:38:28,120 --> 00:38:30,359 Speaker 1: long way to go, but I think if it's 601 00:38:30,440 --> 00:38:33,440 Speaker 1: possible to build a full understanding of how our brains 602 00:38:33,520 --> 00:38:37,600 Speaker 1: work with regard to intelligence and consciousness, we'll get there too, 603 00:38:37,640 --> 00:38:42,680 Speaker 1: sooner or later, probably later. I suppose there's still the 604 00:38:42,800 --> 00:38:47,520 Speaker 1: chance that we could create an intelligent and/or conscious 605 00:38:47,640 --> 00:38:52,120 Speaker 1: machine just by luck or accident. And while I intuitively 606 00:38:52,320 --> 00:38:55,600 Speaker 1: feel that this is unlikely, I have to admit that 607 00:38:55,680 --> 00:39:00,480 Speaker 1: intuition isn't really reliable in these matters. It feels to 608 00:39:00,560 --> 00:39:04,200 Speaker 1: me like it is the longest of long shots, but 609 00:39:04,280 --> 00:39:06,960 Speaker 1: that's entirely based on the fact that we haven't managed 610 00:39:07,000 --> 00:39:11,000 Speaker 1: to do it up to and including now. Maybe 611 00:39:11,040 --> 00:39:13,520 Speaker 1: the right sequence of events is right around the corner. 612 00:39:14,120 --> 00:39:17,759 Speaker 1: Just because it hasn't happened yet doesn't mean it can't 613 00:39:17,880 --> 00:39:21,440 Speaker 1: or won't happen at all. And it's good to remember 614 00:39:21,719 --> 00:39:26,280 Speaker 1: that machines don't need to be particularly intelligent or conscious 615 00:39:26,600 --> 00:39:31,160 Speaker 1: to be useful or potentially dangerous. We can see examples 616 00:39:31,200 --> 00:39:33,680 Speaker 1: of that playing out already with devices that have some 617 00:39:33,800 --> 00:39:37,759 Speaker 1: limited or weak AI. And by limited I mean it's 618 00:39:37,920 --> 00:39:41,720 Speaker 1: not general intelligence. I don't mean that the AI itself 619 00:39:41,760 --> 00:39:45,880 Speaker 1: is somehow unsophisticated or primitive. So it may not even matter 620 00:39:46,200 --> 00:39:49,440 Speaker 1: if we never create devices that have true or human-like 621 00:39:49,640 --> 00:39:52,920 Speaker 1: intelligence; we might be able to accomplish just as 622 00:39:52,960 --> 00:39:57,080 Speaker 1: much with something that does not have those capabilities. 623 00:39:57,120 --> 00:40:01,040 Speaker 1: In other words, this is a very complicated topic, one 624 00:40:01,080 --> 00:40:04,160 Speaker 1: that I think gets oversimplified in a lot of fiction 625 00:40:04,600 --> 00:40:09,960 Speaker 1: and also in a lot of speculative prognostications about the future. 626 00:40:10,000 --> 00:40:13,200 Speaker 1: I mean, you'll see a lot of videos about how 627 00:40:13,520 --> 00:40:17,000 Speaker 1: in the future AI is going to perform a more 628 00:40:17,040 --> 00:40:20,400 Speaker 1: intrinsic role, or maybe it'll be an existential threat to 629 00:40:20,480 --> 00:40:23,480 Speaker 1: humanity or whatever it may be.
And I think a 630 00:40:23,520 --> 00:40:28,560 Speaker 1: lot of that is predicated upon a deep misunderstanding or 631 00:40:28,640 --> 00:40:33,680 Speaker 1: underestimation of how complicated cognitive neuroscience actually is and how 632 00:40:33,760 --> 00:40:37,680 Speaker 1: little we really understand when it comes to our own consciousness, 633 00:40:37,800 --> 00:40:40,640 Speaker 1: let alone how we would bring about such a thing 634 00:40:40,840 --> 00:40:46,000 Speaker 1: in a different device. I hope you enjoyed that rerun, 635 00:40:46,239 --> 00:40:49,120 Speaker 1: and as always, I also hope that you are all 636 00:40:49,200 --> 00:40:52,360 Speaker 1: well, and I will talk to you again really soon. 637 00:40:58,440 --> 00:41:02,120 Speaker 1: Tech Stuff is an iHeartRadio production. For more podcasts 638 00:41:02,160 --> 00:41:06,640 Speaker 1: from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever 639 00:41:06,680 --> 00:41:08,240 Speaker 1: you listen to your favorite shows.