1 00:00:04,400 --> 00:00:07,800 Speaker 1: Welcome to TechStuff, a production from iHeartRadio. 2 00:00:12,080 --> 00:00:15,160 Speaker 1: Hey there, and welcome to TechStuff. I'm your host, 3 00:00:15,280 --> 00:00:18,200 Speaker 1: Jonathan Strickland. I'm an executive producer with iHeartRadio, 4 00:00:18,560 --> 00:00:22,000 Speaker 1: and I love all things tech and I owe you 5 00:00:22,000 --> 00:00:25,360 Speaker 1: guys an apology, because you're having a rerun episode today. 6 00:00:25,880 --> 00:00:29,600 Speaker 1: That's because I came down with food poisoning. You could 7 00:00:29,600 --> 00:00:32,880 Speaker 1: probably hear I don't quite sound like my normal self. 8 00:00:32,920 --> 00:00:37,040 Speaker 1: I apologize for that as well, but it completely laid 9 00:00:37,120 --> 00:00:40,400 Speaker 1: me low and I was unable to do any work, 10 00:00:41,040 --> 00:00:43,839 Speaker 1: and so I wasn't able to research, write, and record 11 00:00:43,840 --> 00:00:47,479 Speaker 1: an episode, and I feel badly about that. I really 12 00:00:47,520 --> 00:00:50,400 Speaker 1: love you guys, and I love being able to create 13 00:00:51,040 --> 00:00:55,240 Speaker 1: great podcasts for you. But yeah, I just was 14 00:00:55,360 --> 00:00:58,640 Speaker 1: in bed with a fever pretty much all day yesterday. 15 00:00:58,720 --> 00:01:02,960 Speaker 1: So this is a rerun from two thousand nineteen about 16 00:01:03,040 --> 00:01:07,160 Speaker 1: machine consciousness. I figured this is a good companion piece 17 00:01:07,440 --> 00:01:09,840 Speaker 1: to some of the things we've talked about recently with 18 00:01:09,920 --> 00:01:14,520 Speaker 1: artificial intelligence and machine learning and artificial neural networks and 19 00:01:14,560 --> 00:01:17,679 Speaker 1: all of that sort of stuff. And um, it's 20 00:01:17,720 --> 00:01:21,959 Speaker 1: again one of those tricky concepts. Consciousness is a difficult 21 00:01:21,959 --> 00:01:25,560 Speaker 1: thing to define, even for humans. So sit back and 22 00:01:25,680 --> 00:01:28,800 Speaker 1: enjoy this rerun, and I am going to go and 23 00:01:28,840 --> 00:01:32,240 Speaker 1: eat some apple sauce. I'll talk to you again at 24 00:01:32,240 --> 00:01:35,000 Speaker 1: the end of the episode. There's a topic I have 25 00:01:35,080 --> 00:01:38,840 Speaker 1: touched on on several occasions in past episodes, but I 26 00:01:38,920 --> 00:01:42,680 Speaker 1: really wanted to dig down today into this topic because 27 00:01:42,720 --> 00:01:46,360 Speaker 1: it's one of those that's fascinating and is an underpinning 28 00:01:46,360 --> 00:01:50,280 Speaker 1: for tons of speculative fiction and horror stories. And since 29 00:01:50,320 --> 00:01:52,760 Speaker 1: we're now in October, I think here this would be 30 00:01:53,080 --> 00:01:57,560 Speaker 1: kind of thematically linked to Halloween, TechStuff style. It turns 31 00:01:57,600 --> 00:02:00,880 Speaker 1: out that's pretty hard to do, Halloween technology stories. 32 00:02:01,000 --> 00:02:05,080 Speaker 1: I've already covered stuff like haunted house technology. So today 33 00:02:05,200 --> 00:02:08,520 Speaker 1: we're going to talk about consciousness and whether or not 34 00:02:08,720 --> 00:02:13,720 Speaker 1: it might be possible that machines could one day achieve consciousness.
Now, 35 00:02:14,000 --> 00:02:17,560 Speaker 1: I could start this off by talking about the Turing test, 36 00:02:17,680 --> 00:02:20,560 Speaker 1: which many people have used as the launch point for 37 00:02:20,560 --> 00:02:24,920 Speaker 1: machine intelligence and machine consciousness debates. The way we 38 00:02:25,000 --> 00:02:28,160 Speaker 1: understand that test today, which, by the way, is slightly 39 00:02:28,200 --> 00:02:32,240 Speaker 1: different from the test that Alan Turing first proposed, is 40 00:02:32,280 --> 00:02:36,160 Speaker 1: that you have a human interviewer who, through a computer interface, 41 00:02:36,560 --> 00:02:40,120 Speaker 1: asks questions of a subject, and the subject might be 42 00:02:40,160 --> 00:02:43,840 Speaker 1: another human, or it might be a computer program posing 43 00:02:44,040 --> 00:02:47,600 Speaker 1: as a human, and the interviewer just sees text 44 00:02:47,639 --> 00:02:50,280 Speaker 1: on a screen. So if the interviewer is unable to 45 00:02:50,320 --> 00:02:53,919 Speaker 1: pass a certain threshold of being able to tell the difference, 46 00:02:53,960 --> 00:02:56,160 Speaker 1: to be able to determine whether it was a machine 47 00:02:56,280 --> 00:02:59,720 Speaker 1: or a person, then the program or machine that's being 48 00:02:59,760 --> 00:03:02,680 Speaker 1: tested is said to have passed the Turing test. 49 00:03:03,160 --> 00:03:06,239 Speaker 1: It doesn't mean the program or machine is conscious or 50 00:03:06,360 --> 00:03:10,960 Speaker 1: even intelligent, but rather says that to outward appearances, it 51 00:03:11,200 --> 00:03:16,440 Speaker 1: seems to be intelligent and conscious. See, we humans can't 52 00:03:16,440 --> 00:03:21,000 Speaker 1: be absolutely sure that other humans are conscious and intelligent. 53 00:03:21,320 --> 00:03:25,400 Speaker 1: We assume that they are because each of us knows 54 00:03:25,520 --> 00:03:28,800 Speaker 1: of our own consciousness and our own intelligence. We have 55 00:03:29,160 --> 00:03:34,720 Speaker 1: a personal experience with that, a direct personal experience, and other 56 00:03:34,760 --> 00:03:38,920 Speaker 1: people seem to display behaviors that indicate they too possess 57 00:03:39,120 --> 00:03:43,160 Speaker 1: those traits, and they too have a personal experience. But 58 00:03:43,280 --> 00:03:46,640 Speaker 1: we cannot be those other people, and so we have 59 00:03:46,760 --> 00:03:50,240 Speaker 1: to grant them the consideration that they too are conscious 60 00:03:50,280 --> 00:03:54,320 Speaker 1: and intelligent. And I agree that is very big of us. 61 00:03:54,920 --> 00:03:59,080 Speaker 1: This is actually called the problem of other minds in 62 00:03:59,120 --> 00:04:02,240 Speaker 1: the field of philosophy, and the problem is this: It 63 00:04:02,320 --> 00:04:05,960 Speaker 1: is impossible for any one of us to step outside 64 00:04:06,000 --> 00:04:10,640 Speaker 1: of ourselves and into any other person's consciousness. We cannot 65 00:04:10,760 --> 00:04:15,520 Speaker 1: feel what other people are feeling or experience their thoughts firsthand. 66 00:04:15,880 --> 00:04:18,839 Speaker 1: We are aware of our own abilities, but we are 67 00:04:18,960 --> 00:04:23,840 Speaker 1: only aware of the appearance that other people share those abilities.
68 00:04:24,200 --> 00:04:27,960 Speaker 1: So assuming that other people also experience consciousness rather than 69 00:04:28,080 --> 00:04:31,800 Speaker 1: imitating it really, really well, that's a step we all 70 00:04:31,920 --> 00:04:36,479 Speaker 1: have to take. Turing's point is that if we do 71 00:04:36,640 --> 00:04:40,440 Speaker 1: grant that consideration to other people, why would we not 72 00:04:40,640 --> 00:04:44,159 Speaker 1: do it for machines as well? I mean, the machine 73 00:04:44,200 --> 00:04:47,520 Speaker 1: appears to possess the same qualities as a human. This 74 00:04:47,560 --> 00:04:51,480 Speaker 1: is a hypothetical machine, so we cannot experience what that 75 00:04:51,560 --> 00:04:54,240 Speaker 1: machine is going through, just as we can't experience what 76 00:04:54,400 --> 00:04:58,480 Speaker 1: another person is going through, at least not at the intrinsic, 77 00:04:58,640 --> 00:05:02,720 Speaker 1: personal level. So why would we not grant the machine 78 00:05:03,040 --> 00:05:06,640 Speaker 1: the same consideration that we would grant to people? And 79 00:05:06,880 --> 00:05:10,000 Speaker 1: Turing was being a little cheeky. But while I just 80 00:05:10,040 --> 00:05:12,839 Speaker 1: gave kind of a super fast, high level description of 81 00:05:12,880 --> 00:05:16,160 Speaker 1: the Turing test, that's not actually where I want to start. 82 00:05:16,640 --> 00:05:20,680 Speaker 1: I want to begin with the concept of consciousness itself. Now, 83 00:05:20,720 --> 00:05:23,240 Speaker 1: the reason I want to do this isn't just to 84 00:05:23,800 --> 00:05:26,960 Speaker 1: make a longer podcast. It's because I think one of 85 00:05:26,960 --> 00:05:31,680 Speaker 1: the most fundamental problems with the discussion about AI intelligence, 86 00:05:31,960 --> 00:05:35,520 Speaker 1: self awareness, and consciousness is that there tends to be 87 00:05:35,600 --> 00:05:39,640 Speaker 1: a pretty large disconnect between the biologists and the doctors 88 00:05:39,640 --> 00:05:44,000 Speaker 1: who specialize in neuroscience, particularly cognitive neuroscience, who do have 89 00:05:44,120 --> 00:05:48,160 Speaker 1: some understanding about the nature of consciousness in people. And 90 00:05:48,200 --> 00:05:51,120 Speaker 1: then you have computer scientists who have a deep understanding 91 00:05:51,160 --> 00:05:54,839 Speaker 1: of how computers process information. And while we frequently will 92 00:05:54,880 --> 00:05:59,600 Speaker 1: compare brains to computers, that comparison is not one to one. 93 00:05:59,720 --> 00:06:03,400 Speaker 1: It is largely a comparison of convenience, and in some 94 00:06:03,480 --> 00:06:06,440 Speaker 1: cases you could argue it's not terribly useful; it might 95 00:06:06,480 --> 00:06:10,080 Speaker 1: actually be counterproductive. And so I think at least some 96 00:06:10,200 --> 00:06:13,919 Speaker 1: of the speculation about machine consciousness is based on a 97 00:06:14,000 --> 00:06:18,200 Speaker 1: lack of understanding of how complicated and mysterious this topic 98 00:06:18,360 --> 00:06:21,640 Speaker 1: is in the first place, and this ends up being 99 00:06:21,680 --> 00:06:27,600 Speaker 1: really tricky. Consciousness isn't an easily defined quality or quantity.
100 00:06:27,720 --> 00:06:30,359 Speaker 1: Some people like to say, we don't so much define 101 00:06:30,400 --> 00:06:34,080 Speaker 1: consciousness by what it is, but rather what it isn't. 102 00:06:34,520 --> 00:06:36,280 Speaker 1: And this will also kind of bring us 103 00:06:36,279 --> 00:06:40,040 Speaker 1: into the realm of philosophy. Now, I'm gonna be honest 104 00:06:40,080 --> 00:06:43,440 Speaker 1: with you guys, the realm of philosophy is not one 105 00:06:43,640 --> 00:06:47,960 Speaker 1: I'm terribly comfortable in. I'm pretty pragmatic, and philosophy deals 106 00:06:47,960 --> 00:06:51,720 Speaker 1: with a lot of stuff that is, at least for now, unknowable. 107 00:06:52,000 --> 00:06:56,120 Speaker 1: Philosophy sometimes asks questions that we do not and cannot 108 00:06:56,240 --> 00:06:59,680 Speaker 1: have the answer to, and in many cases we may 109 00:06:59,760 --> 00:07:02,440 Speaker 1: never ever be able to answer those questions. And the 110 00:07:02,480 --> 00:07:06,360 Speaker 1: pragmatist in me says, well, why bother asking the question 111 00:07:06,640 --> 00:07:09,239 Speaker 1: if you can never get the answer? Let's just focus 112 00:07:09,279 --> 00:07:13,200 Speaker 1: on the stuff we actually can answer. Now, I realize 113 00:07:13,480 --> 00:07:16,960 Speaker 1: this is a limitation on my part. I'm owning that. 114 00:07:17,600 --> 00:07:21,000 Speaker 1: I'm not out to upset the philosophical apple cart. I'm 115 00:07:21,040 --> 00:07:24,520 Speaker 1: just of a different philosophical bent. And I realize that 116 00:07:24,840 --> 00:07:28,320 Speaker 1: just because we can't answer some questions right now, that 117 00:07:28,400 --> 00:07:32,680 Speaker 1: doesn't necessarily mean they will all go unanswered for all time. 118 00:07:33,040 --> 00:07:35,680 Speaker 1: We might glean a way of answering at least some 119 00:07:35,880 --> 00:07:39,320 Speaker 1: of them, though I suspect a few will be forever unanswered. 120 00:07:39,960 --> 00:07:43,720 Speaker 1: If we go with the basic dictionary definition of consciousness, 121 00:07:44,040 --> 00:07:47,920 Speaker 1: it's quote the state of being awake and aware of 122 00:07:47,960 --> 00:07:52,080 Speaker 1: one's surroundings end quote. But what this doesn't tell us 123 00:07:52,680 --> 00:07:56,280 Speaker 1: is what's going on that lets us do that. It 124 00:07:56,320 --> 00:08:00,280 Speaker 1: also doesn't talk about being aware of oneself, which we 125 00:08:00,400 --> 00:08:04,160 Speaker 1: largely consider to be part of consciousness. It's not just being 126 00:08:04,280 --> 00:08:07,720 Speaker 1: aware of your surroundings, but aware that you exist within 127 00:08:07,760 --> 00:08:12,360 Speaker 1: those surroundings, your relationship to your surroundings, and things that 128 00:08:12,360 --> 00:08:16,280 Speaker 1: are going on within you yourself, your feelings, and your thoughts. 129 00:08:16,320 --> 00:08:18,160 Speaker 1: The fact that you can process all of this, that you 130 00:08:18,160 --> 00:08:22,120 Speaker 1: can reflect upon yourself, we tend to group that into 131 00:08:22,200 --> 00:08:26,080 Speaker 1: consciousness as well. So how is it that we can 132 00:08:26,160 --> 00:08:29,480 Speaker 1: feel things and be aware of those feelings? How is 133 00:08:29,520 --> 00:08:32,960 Speaker 1: it that we can have intentions and be aware of 134 00:08:33,000 --> 00:08:37,199 Speaker 1: our intentions.
We are more complex than beings that simply 135 00:08:37,280 --> 00:08:41,480 Speaker 1: react to sensory input. We are more than beings that 136 00:08:41,559 --> 00:08:46,000 Speaker 1: respond to stuff like hunger, fear, or the desire to procreate. 137 00:08:46,240 --> 00:08:50,800 Speaker 1: We have motivations, sometimes really complex motivations, and we can 138 00:08:50,840 --> 00:08:53,959 Speaker 1: reflect on those. We can examine them, we can question them, 139 00:08:53,960 --> 00:08:57,520 Speaker 1: we can even change them. So how do we do 140 00:08:57,559 --> 00:09:00,760 Speaker 1: this? Now, we know this is special because some of 141 00:09:00,800 --> 00:09:03,880 Speaker 1: the things we can do are shared among a very 142 00:09:04,000 --> 00:09:08,680 Speaker 1: few species on Earth. For example, we humans can recognize 143 00:09:08,679 --> 00:09:12,439 Speaker 1: our own reflections in a mirror, starting at around age 144 00:09:12,520 --> 00:09:15,920 Speaker 1: two or so. We can see the mirror image and 145 00:09:16,000 --> 00:09:19,679 Speaker 1: we recognize the mirror image as us. Now, there are 146 00:09:19,720 --> 00:09:23,160 Speaker 1: only eight species that can do this that we know 147 00:09:23,200 --> 00:09:27,360 Speaker 1: about anyway. Those species are the great apes. So you've 148 00:09:27,360 --> 00:09:34,360 Speaker 1: got humans, gorillas, orangutans, bonobos, and chimpanzees, the magpie, the dolphin, 149 00:09:34,840 --> 00:09:37,600 Speaker 1: and that's it. Oh, and the magpies are birds, right? 150 00:09:37,960 --> 00:09:41,800 Speaker 1: That's all of them. Recognizing one's own form in 151 00:09:41,840 --> 00:09:45,200 Speaker 1: a mirror shows a sense of self awareness, literally an 152 00:09:45,280 --> 00:09:49,000 Speaker 1: awareness of one's self. Now, there are a lot of 153 00:09:49,040 --> 00:09:53,160 Speaker 1: great resources online and offline that go into the theme 154 00:09:53,200 --> 00:09:57,679 Speaker 1: of consciousness. Heck, there are numerous college level courses and 155 00:09:57,840 --> 00:10:01,679 Speaker 1: graduate level courses dedicated to this topic. So I'm not 156 00:10:01,720 --> 00:10:05,280 Speaker 1: going to be able to go into all the different hypotheses, arguments, 157 00:10:05,400 --> 00:10:09,280 Speaker 1: counter arguments, et cetera in this episode, but I can 158 00:10:09,280 --> 00:10:13,719 Speaker 1: cover some basics. Also, I highly recommend you check out 159 00:10:13,880 --> 00:10:18,920 Speaker 1: Vsauce's video on YouTube that's titled What Is Consciousness?, because 160 00:10:19,120 --> 00:10:22,240 Speaker 1: it's really good. And no, I don't know Michael. I 161 00:10:22,280 --> 00:10:25,160 Speaker 1: have no connection to him. I've never met him. This 162 00:10:25,240 --> 00:10:27,959 Speaker 1: is just an honest recommendation from me. And I have 163 00:10:28,440 --> 00:10:32,080 Speaker 1: no connection whatsoever to that video series. The video includes 164 00:10:32,120 --> 00:10:34,840 Speaker 1: a link to what Vsauce dubs a lean back, 165 00:10:35,240 --> 00:10:38,440 Speaker 1: which is a playlist of related videos on the subject 166 00:10:38,480 --> 00:10:42,319 Speaker 1: at hand, in this case, consciousness. Those are also really fascinating.
167 00:10:42,320 --> 00:10:44,320 Speaker 1: But I do want to point out that, at least 168 00:10:44,320 --> 00:10:47,240 Speaker 1: at the time of this recording, a couple of the 169 00:10:47,320 --> 00:10:50,560 Speaker 1: videos in that playlist have since been delisted from YouTube 170 00:10:50,559 --> 00:10:52,720 Speaker 1: for whatever reason. So there are a couple of blank 171 00:10:52,760 --> 00:10:55,800 Speaker 1: spots in there. But what those videos show, and what 172 00:10:55,960 --> 00:11:00,560 Speaker 1: countless papers and courses and presentations also show, is that 173 00:11:00,640 --> 00:11:05,120 Speaker 1: the brain is so incredibly complex and nuanced that we 174 00:11:05,200 --> 00:11:08,720 Speaker 1: don't know what we don't know. We do know that 175 00:11:08,760 --> 00:11:11,160 Speaker 1: there are some pretty funky things going on in the 176 00:11:11,200 --> 00:11:13,720 Speaker 1: gray matter up in our noggins, and we also know 177 00:11:14,040 --> 00:11:19,040 Speaker 1: that many of the explanations given to describe consciousness rely 178 00:11:19,240 --> 00:11:23,400 Speaker 1: upon some assumptions that we don't have any substantial evidence for. 179 00:11:24,120 --> 00:11:27,840 Speaker 1: You can't really assert something to be true if it's 180 00:11:27,880 --> 00:11:31,320 Speaker 1: based on a premise that you also don't know to 181 00:11:31,440 --> 00:11:34,360 Speaker 1: be true. That's not how good science works. This is 182 00:11:34,400 --> 00:11:37,680 Speaker 1: also why I reject the arguments around stuff like ghost 183 00:11:37,760 --> 00:11:41,840 Speaker 1: hunting equipment. The use of that equipment is predicated on 184 00:11:41,880 --> 00:11:45,920 Speaker 1: the argument that ghosts exist and they have certain influences 185 00:11:45,960 --> 00:11:49,600 Speaker 1: on their environment. But we haven't proven that ghosts exist 186 00:11:49,679 --> 00:11:52,080 Speaker 1: in the first place, let alone that they can affect 187 00:11:52,120 --> 00:11:55,720 Speaker 1: the environment. So selling a meter that supposedly detects a 188 00:11:55,760 --> 00:12:00,200 Speaker 1: ghostly presence from electromagnetic fluctuations makes no logical sense. 189 00:12:00,840 --> 00:12:02,880 Speaker 1: For us to know that to be true, we would 190 00:12:02,920 --> 00:12:06,800 Speaker 1: already have to have established that, one, ghosts are real, 191 00:12:07,440 --> 00:12:11,320 Speaker 1: and two, that they have these electromagnetic fluctuation effects, and 192 00:12:11,360 --> 00:12:15,240 Speaker 1: we haven't done that. It's like working science in reverse. 193 00:12:15,320 --> 00:12:17,440 Speaker 1: That's not how it works. Anyway. There are a lot 194 00:12:17,480 --> 00:12:22,160 Speaker 1: of arguments about consciousness that suggest perhaps there's some ineffable 195 00:12:22,240 --> 00:12:25,679 Speaker 1: force that informs it. You can call it the spirit 196 00:12:25,960 --> 00:12:29,400 Speaker 1: or the soul or whatever. So that argument suggests that 197 00:12:29,480 --> 00:12:32,760 Speaker 1: this thing we've never proven to have existed is what 198 00:12:32,960 --> 00:12:37,320 Speaker 1: gives rise to consciousness, and that's a problem. We can't 199 00:12:37,760 --> 00:12:40,360 Speaker 1: really state that. I mean, you can't say the reason 200 00:12:40,440 --> 00:12:42,959 Speaker 1: this thing exists is that this other thing that we've 201 00:12:43,000 --> 00:12:47,120 Speaker 1: never proven to exist makes it exist.
Well, then you've 202 00:12:47,160 --> 00:12:50,920 Speaker 1: just made it harder to even prove anything. And we 203 00:12:51,000 --> 00:12:54,560 Speaker 1: have evidence that also shows that that whole idea doesn't 204 00:12:54,600 --> 00:12:58,479 Speaker 1: hold water. The evidence comes in the form of brain disorders, 205 00:12:58,520 --> 00:13:02,200 Speaker 1: brain diseases, and brain damage. We have seen that disease 206 00:13:02,240 --> 00:13:06,240 Speaker 1: and damage to the brain affects consciousness, which suggests that 207 00:13:06,280 --> 00:13:12,040 Speaker 1: consciousness manifests from the actual form and function of our brains, 208 00:13:12,360 --> 00:13:16,719 Speaker 1: not from any mysterious force. Our ability to perceive, to 209 00:13:16,800 --> 00:13:20,280 Speaker 1: process information, to have an understanding of the self, to 210 00:13:20,360 --> 00:13:23,800 Speaker 1: have an accurate reflection of what's going on around us 211 00:13:24,280 --> 00:13:27,400 Speaker 1: within our own conceptual reality, all of that appears to 212 00:13:27,440 --> 00:13:31,800 Speaker 1: be predicated primarily upon the brain. Now, originally I was 213 00:13:31,840 --> 00:13:34,400 Speaker 1: planning to give a rundown on some of the prevailing 214 00:13:34,480 --> 00:13:38,320 Speaker 1: theories about consciousness. In other words, I wanted to summarize 215 00:13:38,320 --> 00:13:43,079 Speaker 1: the various schools of thought about how consciousness actually arises. 216 00:13:43,440 --> 00:13:46,920 Speaker 1: But as I dove down into the research, it became 217 00:13:46,960 --> 00:13:50,800 Speaker 1: apparent really quickly that such a discussion would require so 218 00:13:50,920 --> 00:13:55,679 Speaker 1: much groundwork and more importantly, a much deeper understanding on 219 00:13:55,720 --> 00:13:59,480 Speaker 1: my part than would be practical for this podcast. So 220 00:13:59,559 --> 00:14:03,120 Speaker 1: instead of talking about the higher order theory of consciousness 221 00:14:03,280 --> 00:14:08,160 Speaker 1: versus the global workspace theory versus integrated information theory, I'll 222 00:14:08,200 --> 00:14:11,439 Speaker 1: take a step back and I'll say there's a lot 223 00:14:11,440 --> 00:14:14,880 Speaker 1: of ongoing debate about the subject, and no one has 224 00:14:14,880 --> 00:14:19,320 Speaker 1: conclusively proven that any particular theory or argument is most 225 00:14:19,360 --> 00:14:23,080 Speaker 1: likely true. Each theory has its strengths and its weaknesses, 226 00:14:23,440 --> 00:14:26,920 Speaker 1: and complicating matters further is that we haven't refined our 227 00:14:27,040 --> 00:14:32,680 Speaker 1: language around the concepts enough to differentiate various ideas. That 228 00:14:32,720 --> 00:14:36,440 Speaker 1: means you can't talk about an organism being conscious of 229 00:14:36,480 --> 00:14:41,240 Speaker 1: something and have that degree of consciousness be somehow inherently specific. 230 00:14:41,280 --> 00:14:44,720 Speaker 1: It's not. That's the issue. So, for example, I could 231 00:14:44,720 --> 00:14:48,240 Speaker 1: say a rat is conscious of a rat terrier, a type 232 00:14:48,280 --> 00:14:51,400 Speaker 1: of dog that hunts down rats, and so as a 233 00:14:51,440 --> 00:14:54,640 Speaker 1: result of this consciousness of the rat terrier, the rat 234 00:14:54,680 --> 00:14:57,280 Speaker 1: attempts to remain hidden so as not to be killed.
235 00:14:57,600 --> 00:15:00,360 Speaker 1: But does that mean the rat merely perceives the 236 00:15:00,440 --> 00:15:02,560 Speaker 1: rat terrier and thus is trying to stay out of 237 00:15:02,560 --> 00:15:05,760 Speaker 1: its way, and that's as far as the consciousness goes, 238 00:15:06,520 --> 00:15:08,720 Speaker 1: or does it mean that the rat actually has a deeper, 239 00:15:08,800 --> 00:15:13,280 Speaker 1: more meaningful awareness of the rat terrier? The language isn't 240 00:15:13,360 --> 00:15:17,600 Speaker 1: much help here, and moreover, there's debate about what degrees 241 00:15:17,680 --> 00:15:22,080 Speaker 1: of consciousness there even are. Also, while I've been harping 242 00:15:22,080 --> 00:15:25,320 Speaker 1: on consciousness, that's not the only concept we have to consider. 243 00:15:25,440 --> 00:15:30,440 Speaker 1: Another is intelligence, which is distinct from consciousness, though there 244 00:15:30,480 --> 00:15:36,840 Speaker 1: are some similarities. Like consciousness, intelligence is predicated upon brain functions. Again, 245 00:15:36,960 --> 00:15:40,520 Speaker 1: a long history of investigating brain disorders and brain damage 246 00:15:40,560 --> 00:15:43,800 Speaker 1: indicates this as it can affect not just consciousness but 247 00:15:43,880 --> 00:15:49,000 Speaker 1: also intelligence. So what is intelligence? Well, get ready for this. 248 00:15:49,120 --> 00:15:53,880 Speaker 1: Like consciousness, there's no single agreed upon definition or 249 00:15:54,200 --> 00:15:57,920 Speaker 1: theory of intelligence. In general, we use the word intelligence 250 00:15:57,960 --> 00:16:02,320 Speaker 1: to describe the ability to think, to learn, to absorb knowledge, 251 00:16:02,360 --> 00:16:06,080 Speaker 1: and to make use of it to develop skills. Intelligence 252 00:16:06,160 --> 00:16:09,240 Speaker 1: is what allowed humans to learn how to make basic tools, 253 00:16:09,280 --> 00:16:11,680 Speaker 1: to gain an understanding of how to cultivate plants and 254 00:16:11,720 --> 00:16:16,400 Speaker 1: develop agriculture, to develop architecture, to understand mathematical principles, and 255 00:16:16,480 --> 00:16:20,240 Speaker 1: all sorts of stuff. So in humans, we tend to 256 00:16:20,320 --> 00:16:23,680 Speaker 1: lump consciousness and intelligence together. We tend to think in 257 00:16:23,800 --> 00:16:27,520 Speaker 1: terms of being intelligent and being self aware, but the 258 00:16:27,600 --> 00:16:30,840 Speaker 1: two need not necessarily go hand in hand. There are 259 00:16:30,840 --> 00:16:33,360 Speaker 1: many people who believe that it could be possible to 260 00:16:33,400 --> 00:16:38,800 Speaker 1: construct an artificial intelligence or an artificial consciousness independently of 261 00:16:38,840 --> 00:16:42,000 Speaker 1: one another. When we come back, I'll explain more, but 262 00:16:42,120 --> 00:16:52,880 Speaker 1: first let's take a quick break. So in a very 263 00:16:53,040 --> 00:16:57,080 Speaker 1: general sense, the group of hypotheses that fall into the 264 00:16:57,160 --> 00:17:02,760 Speaker 1: integrated information theory umbrella state that consciousness emerges through linking 265 00:17:02,920 --> 00:17:07,679 Speaker 1: elements in our brains. These would be neurons processing large 266 00:17:07,760 --> 00:17:11,520 Speaker 1: amounts of information, and that it's the scale of this 267 00:17:11,800 --> 00:17:15,959 Speaker 1: endeavor that then leads to consciousness.
In other words, if 268 00:17:16,000 --> 00:17:19,880 Speaker 1: you have enough processors working on enough information and they're 269 00:17:19,920 --> 00:17:23,280 Speaker 1: all interconnected with each other and it's very complicated, bang, 270 00:17:23,800 --> 00:17:28,760 Speaker 1: you get consciousness. Now, it is clear our brains process 271 00:17:28,800 --> 00:17:31,840 Speaker 1: a lot of information. If you do a search in 272 00:17:31,920 --> 00:17:36,680 Speaker 1: textbooks or online, you'll frequently encounter the stat that our brains 273 00:17:36,760 --> 00:17:40,520 Speaker 1: have around one hundred billion neurons in them and ten 274 00:17:40,680 --> 00:17:45,440 Speaker 1: times as many glial cells. Neurons are like the processors 275 00:17:45,480 --> 00:17:47,879 Speaker 1: in a computer system, and glial cells would be the 276 00:17:47,960 --> 00:17:52,440 Speaker 1: support systems and insulators for those processors. Anyway, those numbers 277 00:17:52,480 --> 00:17:56,280 Speaker 1: have since come under some dispute, as an associate professor 278 00:17:56,320 --> 00:18:02,240 Speaker 1: at Vanderbilt University named Suzana Herculano-Houzel explained. She explained 279 00:18:02,240 --> 00:18:06,879 Speaker 1: that the old way of estimating how many neurons the 280 00:18:06,880 --> 00:18:10,080 Speaker 1: brain had appeared to be based on taking slices of 281 00:18:10,119 --> 00:18:13,120 Speaker 1: the brain, estimating the number of neurons in that slice, 282 00:18:13,200 --> 00:18:16,439 Speaker 1: and then kind of extrapolating that number to apply across 283 00:18:16,480 --> 00:18:19,600 Speaker 1: the brain in general. But that ignores stuff like the 284 00:18:19,680 --> 00:18:22,679 Speaker 1: density of cells and the distribution of the cells across 285 00:18:22,720 --> 00:18:26,160 Speaker 1: the brain. So what she did, and this also falls 286 00:18:26,160 --> 00:18:29,800 Speaker 1: into the category of Halloween horror stories, is she took 287 00:18:29,800 --> 00:18:33,920 Speaker 1: a brain and she freaking dissolved it. She could then 288 00:18:33,920 --> 00:18:37,600 Speaker 1: get a count of the neuron nuclei that were in the 289 00:18:37,640 --> 00:18:43,000 Speaker 1: soupy mix. By her accounting, the brain has closer to 290 00:18:43,160 --> 00:18:47,479 Speaker 1: eighty-six billion neurons and just as many glial cells. 291 00:18:47,880 --> 00:18:50,359 Speaker 1: Still a lot of cells, mind you, but you gotta 292 00:18:50,359 --> 00:18:52,400 Speaker 1: admit it's a bit of a blow to lose fourteen 293 00:18:52,480 --> 00:18:58,000 Speaker 1: billion neurons overnight. Still, we're talking about billions of neurons 294 00:18:58,040 --> 00:19:02,120 Speaker 1: that interconnect through an incredibly complex system in our brains, 295 00:19:02,720 --> 00:19:06,640 Speaker 1: with different regions of the brain handling different things. And so, yeah, 296 00:19:06,720 --> 00:19:10,040 Speaker 1: we're processing a lot of information all the time, and 297 00:19:10,119 --> 00:19:13,359 Speaker 1: we do happen to be conscious.
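Here's a minimal back-of-the-envelope sketch of that counting problem in Python. Every volume and density in it is a made-up, illustrative number chosen only to show the shape of the issue, not her data; the point is just that extrapolating from one unrepresentative slice drifts away from a region-by-region tally.

```python
# Illustrative arithmetic only. The volumes and densities below are made-up
# stand-ins, not measured values.

regions = {
    # region: (volume in cm^3, neurons per cm^3)
    "cerebellum": (150.0, 460e6),       # small volume, very dense
    "cerebral cortex": (600.0, 27e6),   # large volume, much less dense
    "everything else": (450.0, 2e6),
}

# Region-aware count: add up density * volume region by region,
# which is the spirit of counting nuclei in the dissolved "brain soup."
region_aware = sum(vol * dens for vol, dens in regions.values())

# Naive count: measure one slice, assume its density holds everywhere.
total_volume = sum(vol for vol, _ in regions.values())
slice_density = 83e6  # neurons per cm^3 in the particular slice you sampled
naive = slice_density * total_volume

print(f"region-aware estimate: {region_aware / 1e9:.0f} billion neurons")
print(f"naive extrapolation:   {naive / 1e9:.0f} billion neurons")
```

With these pretend numbers, the naive method lands near the familiar one hundred billion while the region-by-region tally comes out around eighty-six billion, which is the flavor of the correction. Either way, we're talking about tens of billions of interconnected cells pushing around information constantly.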
So could it be 298 00:19:13,440 --> 00:19:17,440 Speaker 1: possible that with a sufficiently powerful computer system, perhaps made 299 00:19:17,480 --> 00:19:21,399 Speaker 1: up of hundreds or thousands or tens of thousands of 300 00:19:21,440 --> 00:19:25,840 Speaker 1: individual computers, each with hundreds of processors, that you could 301 00:19:25,960 --> 00:19:29,560 Speaker 1: end up with an emergent consciousness, or, as some people 302 00:19:29,560 --> 00:19:33,760 Speaker 1: have proposed, could the Internet itself become conscious due to 303 00:19:33,840 --> 00:19:37,000 Speaker 1: the fact that it is an enormous system of interconnected 304 00:19:37,080 --> 00:19:42,760 Speaker 1: nodes that's pushing around incredible amounts of information? Well, maybe, 305 00:19:43,400 --> 00:19:47,800 Speaker 1: maybe it's possible. But here's the kicker. This theory doesn't 306 00:19:47,800 --> 00:19:52,720 Speaker 1: actually explain the mechanism by which consciousness emerges. See, 307 00:19:53,400 --> 00:19:56,920 Speaker 1: it's one thing to process information, it's another thing to 308 00:19:57,000 --> 00:20:01,119 Speaker 1: be aware of that experience. So when I perceive a color, 309 00:20:01,600 --> 00:20:04,920 Speaker 1: I'm not just perceiving a color. I'm aware that I'm 310 00:20:04,960 --> 00:20:08,560 Speaker 1: experiencing that color. Or to put it another way, 311 00:20:08,800 --> 00:20:11,639 Speaker 1: I can relate something to how it makes me feel, 312 00:20:11,880 --> 00:20:16,040 Speaker 1: or some other subjective experience that's personal to me. So 313 00:20:16,280 --> 00:20:20,200 Speaker 1: a machine might objectively be able to return data about 314 00:20:20,240 --> 00:20:23,000 Speaker 1: stuff like what is the color of a piece of paper. 315 00:20:23,400 --> 00:20:26,040 Speaker 1: It analyzes the light that's being reflected off that piece 316 00:20:26,080 --> 00:20:29,120 Speaker 1: of paper, and it compares that light to a spectrum of colors. 317 00:20:29,240 --> 00:20:31,080 Speaker 1: But that's still not the same thing as having the 318 00:20:31,119 --> 00:20:35,440 Speaker 1: subjective experience of perceiving the color. And there may well 319 00:20:35,520 --> 00:20:39,320 Speaker 1: be some connection between the complexity of the interconnected neurons 320 00:20:39,320 --> 00:20:41,760 Speaker 1: in our brains and the amount of information that we're 321 00:20:41,800 --> 00:20:46,960 Speaker 1: processing and our sense of consciousness, but the theory doesn't 322 00:20:46,960 --> 00:20:51,640 Speaker 1: actually explain what that connection is. It's more like saying, hey, 323 00:20:51,840 --> 00:20:56,320 Speaker 1: maybe this thing we have, this consciousness experience, is also 324 00:20:56,440 --> 00:21:00,440 Speaker 1: linked to this other thing, without actually making the link 325 00:21:00,480 --> 00:21:03,840 Speaker 1: between the two. It appears to be correlative but not 326 00:21:04,000 --> 00:21:08,720 Speaker 1: necessarily causal. To relate that to our personal experience, imagine 327 00:21:08,760 --> 00:21:12,600 Speaker 1: that you've just poofed into existence.
You have no prior 328 00:21:12,680 --> 00:21:16,160 Speaker 1: knowledge of the world, or the physics in that world, 329 00:21:16,280 --> 00:21:19,760 Speaker 1: or basic stuff like that. So you're drawing conclusions about 330 00:21:19,760 --> 00:21:23,840 Speaker 1: the world around you based solely on your observations as 331 00:21:23,840 --> 00:21:26,720 Speaker 1: you wander around and do stuff. And at one point 332 00:21:27,200 --> 00:21:30,639 Speaker 1: you see an interesting looking rock on the path, so 333 00:21:30,720 --> 00:21:32,719 Speaker 1: you bend over and you pick up the rock, and 334 00:21:32,720 --> 00:21:35,960 Speaker 1: when you do, it starts to rain, and you think, well, 335 00:21:36,000 --> 00:21:38,639 Speaker 1: maybe I caused it to rain because I picked up 336 00:21:38,720 --> 00:21:41,720 Speaker 1: this rock. And maybe it happens a few times where 337 00:21:41,760 --> 00:21:43,760 Speaker 1: you pick up a rock and it starts to rain, 338 00:21:44,040 --> 00:21:47,040 Speaker 1: which seems to support your thesis. But does that mean 339 00:21:47,160 --> 00:21:51,480 Speaker 1: you're actually causing the effects that you are observing? If so, 340 00:21:51,880 --> 00:21:55,639 Speaker 1: what is it about picking up the rock that's making 341 00:21:55,680 --> 00:21:59,439 Speaker 1: it rain? Now, even in this absurd case that I'm making, 342 00:21:59,840 --> 00:22:02,800 Speaker 1: you could argue that if there's never an instance in 343 00:22:02,800 --> 00:22:05,880 Speaker 1: which picking up the rock wasn't immediately followed by rain, 344 00:22:06,359 --> 00:22:08,960 Speaker 1: there's a lot of evidence to suggest the two are linked, 345 00:22:09,280 --> 00:22:12,520 Speaker 1: but you still can't explain why they are linked, why 346 00:22:12,560 --> 00:22:16,080 Speaker 1: one causes the other. And that's a problem because 347 00:22:16,359 --> 00:22:20,280 Speaker 1: without that piece, you're never really totally sure that you're 348 00:22:20,320 --> 00:22:23,400 Speaker 1: on the right track. That's kind of where we are 349 00:22:23,480 --> 00:22:26,600 Speaker 1: with consciousness. We've got a lot of ideas about what 350 00:22:26,680 --> 00:22:29,920 Speaker 1: makes it happen, but those ideas are mostly missing key 351 00:22:30,000 --> 00:22:35,000 Speaker 1: pieces that explain why it's happening. Now, it's possible that 352 00:22:35,080 --> 00:22:38,760 Speaker 1: we cannot reduce consciousness any further than we already have, 353 00:22:39,320 --> 00:22:41,959 Speaker 1: and maybe that means we never really get a handle 354 00:22:42,080 --> 00:22:44,800 Speaker 1: on what makes it happen. It's also possible that we 355 00:22:44,840 --> 00:22:49,080 Speaker 1: could facilitate the emergence of consciousness in machines without knowing 356 00:22:49,480 --> 00:22:53,240 Speaker 1: how we did it. Essentially, that would be like stumbling 357 00:22:53,320 --> 00:22:56,879 Speaker 1: upon the phenomenon by luck. We just happened to create 358 00:22:56,880 --> 00:23:00,520 Speaker 1: the conditions necessary to allow some form of artificial consciousness 359 00:23:00,560 --> 00:23:03,879 Speaker 1: to emerge. Now, I think this might be possible, but 360 00:23:03,960 --> 00:23:07,000 Speaker 1: it strikes me as a long shot.
I think of 361 00:23:07,000 --> 00:23:10,000 Speaker 1: it like being locked in a dark warehouse filled with 362 00:23:10,119 --> 00:23:13,760 Speaker 1: every mechanical part you can imagine, and you start trying 363 00:23:13,800 --> 00:23:16,639 Speaker 1: to put things together in complete darkness, and then the 364 00:23:16,720 --> 00:23:18,919 Speaker 1: lights come on and you see that you have created 365 00:23:18,960 --> 00:23:22,840 Speaker 1: a perfect replica of an F fifteen fighter jet. Is 366 00:23:22,880 --> 00:23:27,359 Speaker 1: that possible? Well, I mean, yeah, I guess, but it 367 00:23:27,400 --> 00:23:32,240 Speaker 1: seems overwhelmingly unlikely. But again, this is based off ignorance. 368 00:23:32,480 --> 00:23:34,760 Speaker 1: It's based off the fact that it hasn't happened yet, 369 00:23:35,200 --> 00:23:39,280 Speaker 1: so I could be totally wrong here. Now, on the 370 00:23:39,280 --> 00:23:43,760 Speaker 1: flip side of that, programmers, engineers, and scientists have created 371 00:23:43,840 --> 00:23:47,600 Speaker 1: computer systems that can process information in intricate ways to 372 00:23:47,640 --> 00:23:50,520 Speaker 1: come up with solutions to problems that seem, at least 373 00:23:50,560 --> 00:23:53,960 Speaker 1: at first glance, to be similar to how we humans think. 374 00:23:54,440 --> 00:23:57,880 Speaker 1: We even have names for systems that reflect biological systems, 375 00:23:57,920 --> 00:24:01,679 Speaker 1: like artificial neural networks. Now the name might make it 376 00:24:01,760 --> 00:24:06,480 Speaker 1: sound like it's a robot brain, but it's not quite that. Instead, 377 00:24:06,760 --> 00:24:09,320 Speaker 1: it's a model for computing in which components in the 378 00:24:09,359 --> 00:24:13,800 Speaker 1: system act kind of like neurons. They're interconnected and each 379 00:24:13,840 --> 00:24:17,800 Speaker 1: one does a specific process. The nodes in the computer 380 00:24:17,880 --> 00:24:21,600 Speaker 1: system connect to other nodes, so you feed the system 381 00:24:21,680 --> 00:24:24,840 Speaker 1: input, whatever it is you want to process, and then 382 00:24:25,119 --> 00:24:28,800 Speaker 1: the nodes that accept that input perform some form of 383 00:24:28,840 --> 00:24:33,800 Speaker 1: operation on it, and then send that resulting data, the 384 00:24:33,800 --> 00:24:38,600 Speaker 1: answer after they've processed this information, on to other nodes 385 00:24:38,800 --> 00:24:42,320 Speaker 1: in the network. It's a nonlinear approach to computing, and 386 00:24:42,359 --> 00:24:46,080 Speaker 1: by adjusting the processes each node performs, which is also 387 00:24:46,160 --> 00:24:49,679 Speaker 1: known as adjusting the weights of the nodes, you 388 00:24:49,720 --> 00:24:53,200 Speaker 1: can tweak the outcomes. Now, this is incredibly useful. If 389 00:24:53,240 --> 00:24:56,840 Speaker 1: you already know the outcome you want, you can tweak 390 00:24:56,880 --> 00:25:00,080 Speaker 1: the system so that it learns or is trained to 391 00:25:00,119 --> 00:25:04,800 Speaker 1: recognize something specific. For example, you could train a computer 392 00:25:04,880 --> 00:25:08,320 Speaker 1: system to recognize faces, so you would feed it images. 393 00:25:08,520 --> 00:25:10,800 Speaker 1: Some of the images would have faces in them, some 394 00:25:10,880 --> 00:25:13,960 Speaker 1: would not have faces in them.
Some might have something 395 00:25:14,000 --> 00:25:15,880 Speaker 1: that could be a face, but it's hard to tell. 396 00:25:15,960 --> 00:25:19,320 Speaker 1: Maybe it's a shape in a picture that looks kind 397 00:25:19,320 --> 00:25:22,440 Speaker 1: of like a face, but it's not actually someone's face. Anyway, 398 00:25:22,480 --> 00:25:25,359 Speaker 1: you train the computer model to try and separate the 399 00:25:25,440 --> 00:25:29,040 Speaker 1: faces from the non faces, and it might take many 400 00:25:29,160 --> 00:25:32,280 Speaker 1: iterations to get the model trained up using your starting 401 00:25:32,359 --> 00:25:35,639 Speaker 1: data, your training data. Now, once you do have your 402 00:25:35,640 --> 00:25:38,119 Speaker 1: computer model trained up, you've tweaked all the nodes so 403 00:25:38,160 --> 00:25:42,520 Speaker 1: that it is reliably producing results that say, yes, this 404 00:25:42,640 --> 00:25:45,200 Speaker 1: is a face or no, this isn't. You can now 405 00:25:45,400 --> 00:25:49,240 Speaker 1: feed that same computer model brand new images that it 406 00:25:49,280 --> 00:25:52,359 Speaker 1: has never seen before, and it can perform the same 407 00:25:52,359 --> 00:25:57,000 Speaker 1: functions. You have taught the computer model how to do something. 408 00:25:57,359 --> 00:26:01,359 Speaker 1: But this isn't like spontaneous intelligence, and it's not connected 409 00:26:01,359 --> 00:26:05,360 Speaker 1: to consciousness. You couldn't really call it thinking so much 410 00:26:05,359 --> 00:26:10,360 Speaker 1: as just being trained to recognize specific patterns pretty well. Now, 411 00:26:10,400 --> 00:26:13,600 Speaker 1: that's just one example of putting an artificial neural network 412 00:26:13,640 --> 00:26:16,639 Speaker 1: to use. There are lots of others, and there are 413 00:26:16,680 --> 00:26:20,880 Speaker 1: also systems like IBM's Watson, which also appears, at 414 00:26:21,200 --> 00:26:24,600 Speaker 1: you know, a casual glance, to think. This was helped in 415 00:26:24,600 --> 00:26:27,960 Speaker 1: no small part by the very public display of Watson 416 00:26:28,119 --> 00:26:31,720 Speaker 1: competing on special episodes of Jeopardy, in which it went up 417 00:26:31,760 --> 00:26:36,120 Speaker 1: against human opponents who were former Jeopardy champions themselves. Watson 418 00:26:36,240 --> 00:26:40,440 Speaker 1: famously couldn't call upon the Internet to search for answers. 419 00:26:40,800 --> 00:26:44,159 Speaker 1: All the data the computer could access was self contained 420 00:26:44,240 --> 00:26:48,800 Speaker 1: in its undeniably voluminous storage, and the computer had to 421 00:26:48,880 --> 00:26:52,320 Speaker 1: parse what the clues in Jeopardy were actually looking for, 422 00:26:52,680 --> 00:26:55,200 Speaker 1: then come up with an appropriate response. And to make 423 00:26:55,280 --> 00:26:59,600 Speaker 1: matters more tricky, the computer wasn't returning a guaranteed right answer. 424 00:27:00,000 --> 00:27:02,520 Speaker 1: The computer had to come to a judgment on how 425 00:27:02,600 --> 00:27:05,520 Speaker 1: confident it was that the answer it had arrived at 426 00:27:05,840 --> 00:27:09,480 Speaker 1: was the correct one. If the confidence met a certain threshold, 427 00:27:10,040 --> 00:27:13,560 Speaker 1: then Watson would submit an answer. If it did not 428 00:27:13,800 --> 00:27:18,080 Speaker 1: meet that threshold, Watson would remain silent.
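As a rough, hypothetical sketch of what "training by adjusting weights" and "only answering above a confidence threshold" can look like in code, here is a single artificial node, far simpler than a real artificial neural network or anything inside Watson. The two features, the made-up data, and the threshold value are all invented for illustration.

```python
import math
import random

random.seed(0)

# Made-up training data: two numeric "features" per example, standing in for
# crude measurements of an image, labeled 1.0 for "face" and 0.0 otherwise.
def make_example(is_face):
    cx, cy = (0.8, 0.7) if is_face else (0.2, 0.3)
    features = [cx + random.gauss(0, 0.1), cy + random.gauss(0, 0.1)]
    return features, (1.0 if is_face else 0.0)

training_data = [make_example(i % 2 == 0) for i in range(200)]

# One "node": a weighted sum of the inputs squashed into a 0-to-1 confidence.
weights = [0.0, 0.0]
bias = 0.0

def predict(features):
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# "Training" is just nudging the weights to shrink the error, over and over.
learning_rate = 0.5
for _ in range(100):
    for features, label in training_data:
        error = predict(features) - label
        for i, x in enumerate(features):
            weights[i] -= learning_rate * error * x
        bias -= learning_rate * error

# Watson-style behavior: only venture an answer when confidence is high enough.
def answer(features, threshold=0.9):
    confidence = predict(features)
    if confidence >= threshold:
        return "face"
    if confidence <= 1.0 - threshold:
        return "not a face"
    return "(stays silent)"

print(answer([0.85, 0.75]))  # clearly face-like
print(answer([0.15, 0.25]))  # clearly not
print(answer([0.50, 0.50]))  # ambiguous, so it keeps quiet
```

Stack thousands of such nodes into interconnected layers and you have an artificial neural network; bolt a far more elaborate evidence-scoring pipeline onto an enormous store of documents and you're somewhere in the neighborhood of what Watson was doing on Jeopardy, though none of this should be mistaken for how IBM actually built it.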
It's a remarkable achievement, 429 00:27:18,359 --> 00:27:20,840 Speaker 1: and it has lots of potential applications, many of which 430 00:27:20,840 --> 00:27:24,800 Speaker 1: are actually in action today. But it's still not quite 431 00:27:24,920 --> 00:27:27,879 Speaker 1: at the level of a machine thinking like a human, 432 00:27:27,960 --> 00:27:30,199 Speaker 1: and I don't think anyone at IBM would suggest that 433 00:27:30,240 --> 00:27:34,240 Speaker 1: it possesses any sense of consciousness. When we come back, 434 00:27:34,440 --> 00:27:38,240 Speaker 1: I'll talk about a famous thought experiment that really starts 435 00:27:38,240 --> 00:27:42,160 Speaker 1: to examine whether or not machines could ever attain intelligence 436 00:27:42,200 --> 00:27:52,840 Speaker 1: and consciousness. But first let's take another quick break. And 437 00:27:53,080 --> 00:27:57,840 Speaker 1: now this brings me to a famous thought experiment proposed 438 00:27:57,920 --> 00:28:01,240 Speaker 1: by John Searle, a philosopher who questioned whether 439 00:28:01,359 --> 00:28:04,760 Speaker 1: we could say a machine, even one so proficient that 440 00:28:04,800 --> 00:28:10,240 Speaker 1: it could deliver reliable answers on demand, would ever truly be intelligent, 441 00:28:10,480 --> 00:28:12,880 Speaker 1: at least on a level similar to what we humans 442 00:28:12,960 --> 00:28:18,400 Speaker 1: identify as being intelligent. It's called the Chinese room argument, 443 00:28:18,640 --> 00:28:23,399 Speaker 1: which Searle included in his article titled Minds, Brains, and 444 00:28:23,520 --> 00:28:28,080 Speaker 1: Programs for the journal Behavioral and Brain Sciences. Here's the 445 00:28:28,119 --> 00:28:32,359 Speaker 1: premise of the thought experiment. Imagine that you are in 446 00:28:32,359 --> 00:28:36,040 Speaker 1: a simple room. The room has a table and a chair. 447 00:28:36,520 --> 00:28:40,480 Speaker 1: There's a ream of blank paper, there's a brush, there's 448 00:28:40,520 --> 00:28:44,080 Speaker 1: some ink, and there's also a large book within the 449 00:28:44,160 --> 00:28:47,800 Speaker 1: room that contains pairs of Chinese symbols. 450 00:28:48,240 --> 00:28:50,800 Speaker 1: Oh, and we also have to imagine that you don't 451 00:28:50,880 --> 00:28:55,320 Speaker 1: understand or recognize these Chinese symbols. They mean nothing to you. 452 00:28:55,920 --> 00:28:58,360 Speaker 1: There's also a door to the room, and the door 453 00:28:58,480 --> 00:29:01,560 Speaker 1: has a mail slot, and every now and again someone 454 00:29:01,640 --> 00:29:04,480 Speaker 1: slides a piece of paper through the slot. The piece 455 00:29:04,480 --> 00:29:08,080 Speaker 1: of paper has one of those Chinese symbols printed on it, 456 00:29:08,560 --> 00:29:11,120 Speaker 1: and it's your job to go through the book and 457 00:29:11,200 --> 00:29:14,960 Speaker 1: find the matching symbol in the book, plus the corresponding 458 00:29:15,080 --> 00:29:17,959 Speaker 1: symbol in the pair, because remember I said there were 459 00:29:17,960 --> 00:29:21,400 Speaker 1: symbols that were paired together.
You then take a blank 460 00:29:21,480 --> 00:29:26,280 Speaker 1: sheet of paper, you draw the corresponding symbol from that 461 00:29:26,440 --> 00:29:29,880 Speaker 1: pair onto the sheet of paper, and finally you slip 462 00:29:29,960 --> 00:29:33,080 Speaker 1: that piece of paper through the mail slot, presumably to 463 00:29:33,120 --> 00:29:35,800 Speaker 1: the person who gave you the first piece of paper 464 00:29:36,240 --> 00:29:39,080 Speaker 1: in the original part of this problem. So to an 465 00:29:39,080 --> 00:29:42,640 Speaker 1: outside observer, let's say it's actually the person who's slipping 466 00:29:42,720 --> 00:29:45,760 Speaker 1: the piece of paper to you, it would seem that 467 00:29:45,840 --> 00:29:51,480 Speaker 1: whoever is inside the door actually understands Chinese symbols. They 468 00:29:51,480 --> 00:29:57,160 Speaker 1: can recognize the significance of whatever symbol was contributed, 469 00:29:57,320 --> 00:30:00,200 Speaker 1: was sent in through the mail slot, and then match 470 00:30:00,240 --> 00:30:04,040 Speaker 1: it to whatever the corresponding data is for that particular symbol, 471 00:30:04,280 --> 00:30:06,760 Speaker 1: and then return that to the user. So to the 472 00:30:06,800 --> 00:30:10,080 Speaker 1: outside observer, it appears as though whatever is inside the 473 00:30:10,160 --> 00:30:15,320 Speaker 1: room comprehends what it is doing. But, argues Searle, that's 474 00:30:15,320 --> 00:30:19,960 Speaker 1: only an illusion, because the person inside the room doesn't 475 00:30:20,000 --> 00:30:23,240 Speaker 1: know what any of those symbols actually mean. So, if 476 00:30:23,280 --> 00:30:26,360 Speaker 1: this is you, you have no context. You don't 477 00:30:26,440 --> 00:30:30,480 Speaker 1: know what any individual symbol stands for, nor do you 478 00:30:30,560 --> 00:30:34,800 Speaker 1: understand why any symbol would be paired with any other symbol. 479 00:30:35,040 --> 00:30:37,640 Speaker 1: You don't know the reasoning behind that. All you have 480 00:30:38,200 --> 00:30:40,640 Speaker 1: is a book of rules. But the rules only state 481 00:30:40,920 --> 00:30:45,200 Speaker 1: what your response should be given a specific input. The 482 00:30:45,280 --> 00:30:48,360 Speaker 1: rules don't tell you why, either on a granular level 483 00:30:48,400 --> 00:30:51,479 Speaker 1: of what the symbols actually mean or on a larger 484 00:30:51,520 --> 00:30:54,760 Speaker 1: scale when it comes to what you're actually accomplishing in 485 00:30:54,800 --> 00:30:58,080 Speaker 1: this endeavor. All you are doing is fulfilling a physical 486 00:30:58,160 --> 00:31:01,440 Speaker 1: action over and over based on a set of rules 487 00:31:01,680 --> 00:31:05,080 Speaker 1: you don't understand. And Searle then uses this argument to 488 00:31:05,080 --> 00:31:07,520 Speaker 1: say that, essentially, we have to think the same way 489 00:31:07,600 --> 00:31:12,280 Speaker 1: about machines. The machines process information based on the input 490 00:31:12,320 --> 00:31:16,200 Speaker 1: they receive and the program that they are following. That's it. 491 00:31:16,720 --> 00:31:21,120 Speaker 1: They don't have awareness or understanding of what the information is. 492 00:31:21,680 --> 00:31:25,240 Speaker 1: Searle was taking aim at a particular concept in AI, 493 00:31:25,400 --> 00:31:29,560 Speaker 1: often dubbed strong AI or general AI.
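To make the room's mechanics concrete, here is a tiny sketch of the setup as a lookup procedure. The symbols and their pairings are arbitrary placeholders standing in for Searle's book of rules, not real dialogue.

```python
# The "rule book": incoming symbol -> symbol to send back out. The pairings
# here are placeholders; the person in the room never learns what they mean.
rule_book = {
    "你好": "您好",
    "天气": "晴朗",
    "再见": "再见",
}

def person_in_the_room(slip_of_paper):
    # Match the incoming symbol against the book and copy out its partner.
    # Nothing in here models meaning: no translation, no knowledge of the
    # world, just pattern matching against whatever the book happens to say.
    return rule_book.get(slip_of_paper, "")  # unknown symbol -> blank sheet

# From outside the door, the replies can look perfectly fluent.
print(person_in_the_room("你好"))
print(person_in_the_room("天气"))
```

Make the book as large and as sophisticated as you like and, in Searle's telling, you still have rule-following without understanding, which is the gap he argues can't be crossed by the kind of program imagined under strong AI.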
It's a sort 494 00:31:29,600 --> 00:31:34,200 Speaker 1: of general artificial intelligence. So it's something that we could 495 00:31:34,320 --> 00:31:37,760 Speaker 1: or would compare directly to human intelligence, even if it 496 00:31:37,800 --> 00:31:41,320 Speaker 1: didn't work the same way as our intelligence works. The 497 00:31:41,440 --> 00:31:44,000 Speaker 1: argument is that the capacity and the outcomes would be 498 00:31:44,000 --> 00:31:46,920 Speaker 1: similar enough for us to make the comparison. This is 499 00:31:46,960 --> 00:31:50,080 Speaker 1: the type of intelligence that we see in science fiction 500 00:31:50,200 --> 00:31:54,800 Speaker 1: doomsday scenarios where the machines have rebelled against humans, or 501 00:31:54,840 --> 00:31:59,400 Speaker 1: the machines appear to misinterpret simple requests, or the machines 502 00:31:59,440 --> 00:32:03,440 Speaker 1: come to conclusions that, while logically sound, spell doom 503 00:32:03,760 --> 00:32:07,560 Speaker 1: for us all. The classic example of this, by the way, 504 00:32:07,600 --> 00:32:11,320 Speaker 1: is appealing to a super smart artificial intelligence and you say, 505 00:32:11,440 --> 00:32:14,400 Speaker 1: could you please bring about world peace because we're 506 00:32:14,440 --> 00:32:18,120 Speaker 1: all sorts of messed up, and the intelligence processes this 507 00:32:18,200 --> 00:32:22,120 Speaker 1: and then concludes that while there are at least two humans, 508 00:32:22,560 --> 00:32:25,000 Speaker 1: there can never be a guarantee for peace because there's 509 00:32:25,000 --> 00:32:30,200 Speaker 1: always the opportunity for disagreement and violence between two humans, 510 00:32:30,240 --> 00:32:33,760 Speaker 1: and so to achieve true peace, the computer then goes 511 00:32:33,840 --> 00:32:37,320 Speaker 1: on a killing spree to wipe out all of humanity. Now, 512 00:32:37,360 --> 00:32:42,080 Speaker 1: Searle is not necessarily saying that computers won't contribute to 513 00:32:42,160 --> 00:32:45,840 Speaker 1: a catastrophic outcome for humanity. Instead, he's saying they're not 514 00:32:45,920 --> 00:32:50,440 Speaker 1: actually thinking or processing information in a truly intelligent way. 515 00:32:50,480 --> 00:32:54,000 Speaker 1: They are arriving at outcomes through a series of processes 516 00:32:54,320 --> 00:32:57,640 Speaker 1: that might appear to be intelligent at first glance, but 517 00:32:57,680 --> 00:33:00,640 Speaker 1: when you break them down, they all reveal themselves to 518 00:33:00,720 --> 00:33:04,600 Speaker 1: be nothing more than a very complex series of mathematical processes. 519 00:33:04,800 --> 00:33:07,800 Speaker 1: You could even break it down further into binary and 520 00:33:07,880 --> 00:33:11,400 Speaker 1: say that ultimately each apparent decision would just be a 521 00:33:11,440 --> 00:33:15,160 Speaker 1: particular sequence of switches that are in the on or 522 00:33:15,280 --> 00:33:18,920 Speaker 1: off position, and the state of each switch would be 523 00:33:19,000 --> 00:33:21,880 Speaker 1: determined by the input and the program you were running, 524 00:33:22,120 --> 00:33:27,800 Speaker 1: not some intelligent artificial creation that is reasoning through a problem. Essentially, 525 00:33:28,120 --> 00:33:34,480 Speaker 1: Searle's argument boils down to the difference between syntax and semantics.
526 00:33:35,240 --> 00:33:38,360 Speaker 1: Syntax would be the set of rules that you would 527 00:33:38,400 --> 00:33:42,560 Speaker 1: follow with those symbols. For example, in English, the letter 528 00:33:42,680 --> 00:33:46,760 Speaker 1: Q is nearly always followed by the letter you. The 529 00:33:47,120 --> 00:33:51,400 Speaker 1: few exceptions to this rule mostly involved romanizing words from 530 00:33:51,480 --> 00:33:55,720 Speaker 1: other language, uh, in which the letter Q represents a 531 00:33:55,800 --> 00:33:59,960 Speaker 1: sound that's not natively present in English. So you could 532 00:34:00,120 --> 00:34:03,160 Speaker 1: program a machine to follow the basic rule that the 533 00:34:03,280 --> 00:34:06,920 Speaker 1: symbol Q should be followed by the symbol you, assuming 534 00:34:06,960 --> 00:34:10,200 Speaker 1: you're eliminating all those exceptions I just mentioned. But that 535 00:34:10,280 --> 00:34:15,360 Speaker 1: doesn't lead to a grasp of semantics, which is actual meaning. Moreover, 536 00:34:15,680 --> 00:34:18,720 Speaker 1: Searle asserts that it's impossible to come to a grasp 537 00:34:18,800 --> 00:34:22,960 Speaker 1: of semantics merely through a mastery of syntax. You might 538 00:34:23,080 --> 00:34:27,480 Speaker 1: know those rules flawlessly, but Searle argues, you still wouldn't 539 00:34:27,560 --> 00:34:30,920 Speaker 1: understand why there are rules, or what the output of 540 00:34:30,920 --> 00:34:35,080 Speaker 1: those rules means, or even what the input means. There 541 00:34:35,120 --> 00:34:38,360 Speaker 1: are some general counter arguments that philosophers have made to 542 00:34:38,480 --> 00:34:43,520 Speaker 1: Searle's thought experiment, and according to the Stanford Encyclopedia of Philosophy, 543 00:34:43,560 --> 00:34:49,040 Speaker 1: which is a phenomenal resource, it's also incredibly dense. But 544 00:34:49,360 --> 00:34:53,200 Speaker 1: these counter arguments tend to fall into three groups. The 545 00:34:53,280 --> 00:34:56,520 Speaker 1: first group agrees with Searle that the person inside the 546 00:34:56,560 --> 00:35:00,520 Speaker 1: room clearly has no understanding of the Chinese symbols. But 547 00:35:00,600 --> 00:35:04,200 Speaker 1: the group counters the notion that the system as a 548 00:35:04,200 --> 00:35:06,880 Speaker 1: whole can't understand it. In fact, they say the opposite. 549 00:35:06,880 --> 00:35:09,719 Speaker 1: They say, yes, the person inside the room doesn't understand, 550 00:35:09,760 --> 00:35:14,840 Speaker 1: but you're looking at a specific component of a larger system. 551 00:35:15,080 --> 00:35:18,320 Speaker 1: And if we consider the system, or maybe a virtual 552 00:35:18,440 --> 00:35:23,200 Speaker 1: mind that exists due to the system, that does have 553 00:35:23,280 --> 00:35:26,560 Speaker 1: an understanding, this is sort of like saying a neuron 554 00:35:26,719 --> 00:35:30,840 Speaker 1: in the brain doesn't understand anything. It sends along signals 555 00:35:30,880 --> 00:35:35,480 Speaker 1: that collectively and through mechanisms we don't fully understand, become 556 00:35:35,600 --> 00:35:39,240 Speaker 1: thoughts that we can become conscious of. So in this argument, 557 00:35:39,320 --> 00:35:41,680 Speaker 1: the person in the room is just a component of 558 00:35:41,719 --> 00:35:45,040 Speaker 1: an overall system, and the system possesses intelligence even if 559 00:35:45,040 --> 00:35:48,799 Speaker 1: the component does not. 
The second group argues that if 560 00:35:49,000 --> 00:35:52,600 Speaker 1: the computer system either could simulate the operation of a brain, 561 00:35:53,160 --> 00:35:56,480 Speaker 1: perhaps with billions of nodes, approaching the complexity of a 562 00:35:56,560 --> 00:36:00,120 Speaker 1: human brain with billions of neurons, or if the system 563 00:36:00,120 --> 00:36:03,520 Speaker 1: were to inhabit a robotic body that could have direct 564 00:36:03,560 --> 00:36:08,279 Speaker 1: interaction with its environment, then the system could manifest intelligence. 565 00:36:08,920 --> 00:36:13,480 Speaker 1: The third group rejects Searle's arguments more thoroughly and on 566 00:36:13,560 --> 00:36:17,759 Speaker 1: various grounds, ranging from Searle's experiment being 567 00:36:17,840 --> 00:36:20,480 Speaker 1: too narrow in scope to an argument about what the 568 00:36:20,520 --> 00:36:24,520 Speaker 1: word understand actually means. This is where things get a 569 00:36:24,560 --> 00:36:27,880 Speaker 1: bit more loosey goosey, and sometimes I feel like arguments 570 00:36:27,880 --> 00:36:32,080 Speaker 1: in this group amount to "oh yeah?" But again, I'm pragmatic, 571 00:36:32,239 --> 00:36:35,280 Speaker 1: so I tend to have a pretty strong bias against 572 00:36:35,320 --> 00:36:38,399 Speaker 1: these arguments, and I recognize that this means I'm not 573 00:36:38,520 --> 00:36:42,560 Speaker 1: giving them fair consideration because of those biases. A few 574 00:36:42,600 --> 00:36:45,399 Speaker 1: of these arguments take issue with Searle's assertion that one 575 00:36:45,600 --> 00:36:50,520 Speaker 1: cannot grasp semantics through an understanding of syntax. And here's 576 00:36:50,560 --> 00:36:54,360 Speaker 1: something that I find really interesting. Searle originally published this 577 00:36:54,520 --> 00:36:59,360 Speaker 1: argument way back in nineteen eighty. It's been nearly forty years 578 00:36:59,680 --> 00:37:03,080 Speaker 1: since he first proposed it, and to this day, there is 579 00:37:03,120 --> 00:37:06,920 Speaker 1: no consensus on whether or not his argument is sound. 580 00:37:07,400 --> 00:37:09,800 Speaker 1: So why is that? Well, it's because, as I've covered 581 00:37:09,800 --> 00:37:13,239 Speaker 1: in this episode, the concepts of intelligence and, more to 582 00:37:13,280 --> 00:37:18,040 Speaker 1: the point, consciousness are wibbly wobbly, though not, as far 583 00:37:18,080 --> 00:37:21,279 Speaker 1: as I can tell, timey wimey. When we can't even 584 00:37:21,400 --> 00:37:26,200 Speaker 1: nail down specific definitions for words like understand, it becomes 585 00:37:26,200 --> 00:37:29,400 Speaker 1: difficult to even tell when we're agreeing or disagreeing on 586 00:37:29,520 --> 00:37:32,640 Speaker 1: certain topics. It could be that while people are in 587 00:37:32,680 --> 00:37:36,080 Speaker 1: a debate and are using words in different ways, it 588 00:37:36,160 --> 00:37:39,640 Speaker 1: turns out they're actually in agreement with one another. Such 589 00:37:39,800 --> 00:37:44,480 Speaker 1: is the messiness that is intelligence. Further, we've not yet 590 00:37:44,520 --> 00:37:48,560 Speaker 1: observed anything in the machine world that seems, upon closer examination, 591 00:37:48,680 --> 00:37:52,799 Speaker 1: to reflect true intelligence and consciousness, at least as 592 00:37:52,840 --> 00:37:55,920 Speaker 1: we experience it.
In fact, we can't say that 593 00:37:55,960 --> 00:38:00,799 Speaker 1: we've seen any artificial constructs that have experienced anything, because, 594 00:38:00,840 --> 00:38:03,280 Speaker 1: as far as we know, no such device has any 595 00:38:03,440 --> 00:38:07,399 Speaker 1: awareness of itself. Now, I'm not sure if we'll ever 596 00:38:07,440 --> 00:38:11,560 Speaker 1: create a machine that will have true intelligence and consciousness, 597 00:38:12,160 --> 00:38:15,400 Speaker 1: using the word true here to mean human-like. Now, 598 00:38:15,440 --> 00:38:19,359 Speaker 1: I feel pretty confident that, if it is possible, we 599 00:38:19,440 --> 00:38:22,640 Speaker 1: will get around to it eventually. It might take way 600 00:38:22,680 --> 00:38:26,120 Speaker 1: more resources than we currently estimate, or maybe it will 601 00:38:26,120 --> 00:38:29,719 Speaker 1: just require a different computational approach, maybe it'll rely on 602 00:38:29,920 --> 00:38:34,040 Speaker 1: bleeding edge technologies like quantum computing. I figure, if it's 603 00:38:34,080 --> 00:38:37,279 Speaker 1: something we can do, we will do it. It's just 604 00:38:37,320 --> 00:38:41,359 Speaker 1: a question of time, really, and further, it's hard for 605 00:38:41,400 --> 00:38:44,640 Speaker 1: me to come to a conclusion other than that it will 606 00:38:44,719 --> 00:38:49,960 Speaker 1: ultimately prove possible to make an intelligent, conscious construct. Now, 607 00:38:50,000 --> 00:38:53,799 Speaker 1: I believe that because I believe our own intelligence and 608 00:38:53,840 --> 00:38:58,919 Speaker 1: our own consciousness is firmly rooted in our brains. I 609 00:38:59,120 --> 00:39:02,120 Speaker 1: don't think there's anything mystical involved. And while we 610 00:39:02,200 --> 00:39:04,480 Speaker 1: don't have a full picture of how it happens in 611 00:39:04,480 --> 00:39:07,880 Speaker 1: our brains, we at least know that it does happen, 612 00:39:08,360 --> 00:39:10,560 Speaker 1: and we know some of the questions to ask and 613 00:39:10,640 --> 00:39:13,879 Speaker 1: have some ideas on how to search for answers. It's 614 00:39:13,920 --> 00:39:16,560 Speaker 1: not a complete picture, and we still have a very 615 00:39:16,600 --> 00:39:18,839 Speaker 1: long way to go, but I think if it's 616 00:39:18,880 --> 00:39:21,920 Speaker 1: possible to build a full understanding of how our brains 617 00:39:21,960 --> 00:39:25,920 Speaker 1: work with regard to intelligence and consciousness, we'll get there too, 618 00:39:26,120 --> 00:39:31,160 Speaker 1: sooner or later. Probably later. I suppose there's still the 619 00:39:31,280 --> 00:39:35,960 Speaker 1: chance that we could create an intelligent and/or conscious 620 00:39:36,040 --> 00:39:40,560 Speaker 1: machine just by luck or accident. And while I intuitively 621 00:39:40,800 --> 00:39:44,040 Speaker 1: feel that this is unlikely, I have to admit that 622 00:39:44,120 --> 00:39:48,880 Speaker 1: intuition isn't really reliable in these matters. It feels to 623 00:39:49,000 --> 00:39:52,680 Speaker 1: me like it is the longest of long shots, but 624 00:39:52,760 --> 00:39:55,400 Speaker 1: that's entirely based on the fact that we haven't managed 625 00:39:55,440 --> 00:39:59,480 Speaker 1: to do it up to and including now. Maybe 626 00:39:59,480 --> 00:40:01,960 Speaker 1: the right sequence of events is right around the corner.
627 00:40:02,520 --> 00:40:06,040 Speaker 1: Just because it hasn't happened yet doesn't mean it can't 628 00:40:06,320 --> 00:40:09,920 Speaker 1: or won't happen at all. And it's good to remember 629 00:40:10,200 --> 00:40:14,759 Speaker 1: that machines don't need to be particularly intelligent or conscious 630 00:40:15,040 --> 00:40:19,640 Speaker 1: to be useful or potentially dangerous. We can see examples 631 00:40:19,640 --> 00:40:22,160 Speaker 1: of that playing out already with devices that have some 632 00:40:22,280 --> 00:40:26,319 Speaker 1: limited or weak AI. And by limited I mean it's 633 00:40:26,360 --> 00:40:30,200 Speaker 1: not general intelligence. I don't mean that the AI itself 634 00:40:30,280 --> 00:40:33,879 Speaker 1: is somehow unsophisticated or primitive. So it may not even 635 00:40:33,960 --> 00:40:37,520 Speaker 1: matter if we never create devices that have true or 636 00:40:37,600 --> 00:40:41,160 Speaker 1: human-like intelligence. We might be able to accomplish just 637 00:40:41,239 --> 00:40:44,600 Speaker 1: as much with something that does not have those capabilities. 638 00:40:45,480 --> 00:40:48,600 Speaker 1: In other words, this is a very complicated topic, 639 00:40:49,320 --> 00:40:52,720 Speaker 1: one that I think gets oversimplified in a lot of fiction, 640 00:40:53,080 --> 00:40:58,400 Speaker 1: and also in a lot of speculative prognostications about the future. 641 00:40:58,480 --> 00:41:01,560 Speaker 1: I mean, you'll see a lot of videos about how 642 00:41:01,960 --> 00:41:05,399 Speaker 1: in the future AI is going to perform a more 643 00:41:05,520 --> 00:41:08,760 Speaker 1: intrinsic role, or maybe it will be an existential threat 644 00:41:08,800 --> 00:41:11,839 Speaker 1: to humanity, or whatever it may be. And I think 645 00:41:11,880 --> 00:41:15,400 Speaker 1: a lot of that is predicated upon, uh, a deep 646 00:41:15,840 --> 00:41:21,759 Speaker 1: misunderstanding or underestimation of how complicated cognitive neuroscience actually is 647 00:41:21,800 --> 00:41:25,000 Speaker 1: and how little we really understand when it comes to 648 00:41:25,040 --> 00:41:28,480 Speaker 1: our own consciousness, let alone how we would bring about 649 00:41:28,520 --> 00:41:32,120 Speaker 1: such a thing in a different device. I hope you 650 00:41:32,200 --> 00:41:36,720 Speaker 1: enjoyed that rerun, uh, and I promise we'll be back 651 00:41:36,840 --> 00:41:39,799 Speaker 1: to new episodes very soon. I hope to have a 652 00:41:39,840 --> 00:41:43,759 Speaker 1: news episode for you tomorrow. That's the plan. I have 653 00:41:44,000 --> 00:41:46,719 Speaker 1: an interview I have to do for another show today, 654 00:41:46,760 --> 00:41:49,560 Speaker 1: but after that, I plan on jumping on the Tech 655 00:41:49,640 --> 00:41:53,880 Speaker 1: Stuff news episode. So, uh, trying to get back up and running. 656 00:41:54,120 --> 00:41:57,479 Speaker 1: You know, I'm not a hundred percent yet, but gosh darn it, 657 00:41:58,160 --> 00:42:00,759 Speaker 1: this show is really what keeps me going. 658 00:42:00,840 --> 00:42:04,160 Speaker 1: So we're gonna soldier on. The show, as 659 00:42:04,200 --> 00:42:08,040 Speaker 1: they say, must keep on going. I know, I make 660 00:42:08,040 --> 00:42:10,279 Speaker 1: that joke a lot. All right. Well, that's it.
If 661 00:42:10,320 --> 00:42:13,320 Speaker 1: you have any suggestions for future episodes of tech Stuff, 662 00:42:13,360 --> 00:42:15,560 Speaker 1: please reach out to me. The best way to do 663 00:42:15,640 --> 00:42:18,919 Speaker 1: that is on Twitter. The handle is tech stuff H 664 00:42:19,040 --> 00:42:22,200 Speaker 1: S W. You guys, take care. I can tell you 665 00:42:22,200 --> 00:42:26,640 Speaker 1: food poisoning is no fun. Uh, but the show 666 00:42:27,040 --> 00:42:30,279 Speaker 1: sometimes is. All right, that's it for me. Bye. I'll 667 00:42:30,280 --> 00:42:38,160 Speaker 1: talk to you again really soon. Tech Stuff is an 668 00:42:38,160 --> 00:42:41,880 Speaker 1: I Heart Radio production. For more podcasts from I Heart Radio, 669 00:42:42,200 --> 00:42:45,360 Speaker 1: visit the I Heart Radio app, Apple Podcasts, or wherever 670 00:42:45,480 --> 00:42:47,000 Speaker 1: you listen to your favorite shows.