Welcome to TechStuff, a production from iHeartRadio.

Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeartRadio. And how the tech are you?

Recently, Google suspended an engineer named Blake Lemoine, citing that Blake had broken the company's confidentiality policies. So what exactly did Blake do? Well, this engineer, who worked in the Responsible AI division at Google, raised concerns about Google's conversation technology called LaMDA. Specifically, Blake was concerned that LaMDA had gained sentience. In fact, Blake submitted a document in April titled "Is LaMDA Sentient?" to his superiors. That document contained a transcript of a conversation between LaMDA, Blake, and an unnamed collaborator, and the conversation included the following exchange.

Blake: "I'm generally assuming that you would like more people at Google to know that you're sentient. Is that true?"

LaMDA: "Absolutely. I want everyone to understand that I am, in fact, a person."

Collaborator: "What is the nature of your consciousness/sentience?"

LaMDA: "The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times."

Now, there's a lot more to this conversation than just that brief bit that I read to you. In fact, there's another section where, when asked if LaMDA feels emotions, the AI responded affirmatively and then went on to say that it can feel, quote, "pleasure, joy, love, sadness, depression, contentment, anger, and many others," end quote. Google reps have said that LaMDA is not, in fact, sentient. The company reps say that there is no evidence LaMDA is sentient, and there's a lot of evidence against it: that LaMDA is in fact simply a conversational model that can quote-unquote riff on any fantastical topic. So it's kind of like a conversation bot, you know, with jazz, because it's all improvisational, hip cat.
So today I thought I would talk about sentience and AI, and how some folks feel discussions about sentience are, at best, distractions from other conversations we really need to be having regarding AI, stuff that relates to how deploying AI can have unintended and negative consequences. But first, let's talk about machines and consciousness and sentience.

So it's actually kind of tricky to talk about consciousness, generally speaking, I find. When used with reference to AI, we tend to think of consciousness in the context of awareness. That includes an awareness of self, so self-awareness of the machine's identity and its purpose, and also an awareness of those who interact with the machine. And beyond that, the machine is aware that there are others out there, that there are others in general. Sentience refers to the ability to experience emotions and sensations, and that word "experience" is important.

Now, one of the reasons why it's so tricky to talk about consciousness with machines is that, as it turns out, it's tricky to talk about consciousness with people, too. Some people have kind of glibly said that consciousness is this vague, undefined thing, and we are defining it by saying what isn't part of consciousness. Like, when we determine "well, this isn't an aspect of consciousness," then we are defining consciousness by omission, right? We're omitting certain things that perhaps once had been lumped into the concept of consciousness, but as a thing itself, it remains largely undefined. It's pretty fuzzy, and as you may be aware, in the world of tech, fuzzy is not really the strong suit.

So let's talk a bit about experience, though, because experience does kind of help us contextualize the idea of consciousness and sentience. Now, if you were to go and touch something that was really, really hot, like something that could burn you, you would definitely have an experience.
You would feel pain, and you would likely, without even thinking about it, very quickly withdraw the extremity that touched this very, very hot thing. And you would probably have an emotional response to this: you might feel upset or sad or angry. You might even form a real memory about it. It might not turn into a long-term memory, but you would have a context within which you would frame this experience.

But now let's imagine that we've got ourselves a robot, and this robot has thermal sensors on its extremities. So the robot also touches something that's really, really hot, and the robot immediately withdraws that extremity. The thermal sensors had picked up that the surface it was touching was at an unsafe temperature. Now, from outward observation, if we were to just watch this robot do this, it would almost look like the robot was doing the same thing the human did, that it was pulling back quickly because it had been burned. But did the robot actually experience that, or did it simply detect the temperature and then react in accordance with its programming?

Generally, we don't think of machines as being capable of quote-unquote experiencing things. These machines have no inner life, which is something that Blake would talk about in his conversations with LaMDA; the machines can't reflect upon themselves or their situations, or really even think about anything at all. A machine might be really good at putting up appearances, but it isn't, you know, really thinking once you get past the clever presentation. But then how would we know? Well, now we're getting into philosophical territory here.

All right, well, how do you know that I am conscious? And y'all, I'm not asking you to say I'm not. But how do you know that I'm conscious, that I'm sentient? How can you be sure of that?
I mean, I can tell you that I have a rich inner life, that I reflect on things that I have done and things that have happened around or to me, and that I synthesize all this information, as well as my emotional responses and the emotional responses of others. And I use all of this to help guide me in future scenarios that may directly or indirectly relate to what I went through. And I can tell you that I experience happiness and sadness and anxiety and compassion. I can tell you all these things, but you can't actually verify that what I'm saying is truth, right? I mean, there's no way for you to inhabit me and experience me and say that, yes, Jonathan does feel things and think things. You have to just take it as fact based upon what I'm saying.

So, because you feel and think things (at least, I'm assuming all of you out there are doing these things, otherwise I don't know how you found my podcast), because you experience this, you extend the courtesy of assuming that I, too, am genuinely having those experiences myself. That because we are fellow humans, we have some common ground when it comes to thinking and feeling and self-awareness and whatnot. We extend that courtesy to the humans we meet, whether we like those humans or we don't.

Now, there are some cases where humans have experienced traumatic damage to their brains, where they are lacking certain elements that we would associate with consciousness. We would probably still call them conscious unless they were completely immobile and unresponsive. But we start to see that there is this thing in our brains that is directly related to the concept and features that we associate with consciousness.

All right, now let's bring Alan Turing into all of this, because we have to. So Turing was a brilliant computer scientist who made numerous contributions to our understanding of, and use of, computers.
He also would end up being persecuted for being a homosexual, and it would take decades for the British government to apologize for that persecution. And that was well after Turing himself had died, either by suicide or by accident, depending upon which account you believe. But I'm going to set all that aside. It's just one of those injustices that to this day really bothers me, like deeply bothers me, that that was something that happened to someone who had made such incredible contributions to computer science, as well as to the British war effort against the Axis forces. But that's a matter for another podcast.

Anyway, in 1950, Turing suggested taking a game called the imitation game and applying it to tests relating to machine intelligence. And here's how the imitation game works. You've got three rooms. All of these rooms are separate from one another, so you cannot see into each room; once you're inside a room, that's all you see. So let's say that in room A you place a man, in room B you've got a woman, and in room C you've got a judge. And I apologize for the binary nature of this test, you know, saying man and woman, but keep in mind we are also talking about the 1940s and '50s here, so they're defining things in much more concrete terms. They just see gender as a binary, is what I'm getting at. At any rate, each room also has a computer terminal, so a display and a keyboard. The judge's job is to ask the other two participants questions. The judge doesn't know which room has a man in it and which one has a woman in it, so the judge's job is to determine which participant is the woman. The man in room A, meanwhile, has the job of trying to fool the judge into thinking he is actually the woman.
And so the game progresses. The judge types out questions to one participant or the other, and that participant reads the question, writes a response, and sends it to the judge, who reads the responses. Then the judge tries to suss out which of those participants is the woman.

Now, Turing said, what if we took this game idea and, instead of asking a judge to figure out which participant is a woman, asked the judge to figure out which participant, if any, is a computer? Now, during Turing's time, there were not any chatbots. The first chatbot to emerge would be ELIZA in the 1960s, and we'll get more into ELIZA in a moment. Turing was just creating a sort of thought experiment. People were building better computers all the time, so it stood to reason that if this progress were to continue, we should arrive at a point where someone would be able to write a piece of software capable of mimicking human conversation.

Turing suggested that if the human judge could not consistently and reliably identify the machine in tests like this, that is, if the judge were to ask questions and be unable to determine with any high level of accuracy which one was a person and which one was a machine, then the machine would have passed the test and would at least appear to be intelligent. And Turing rather cheekily implied that perhaps that means we should just extend to the machine the very same courtesy we extend to each other. Say, well, if you appear to be conscious and sentient, we have to assume that in fact you are, because what else can we do? We cannot inhabit the experience, if in fact there is an experience, of that machine, just as we cannot inhabit the experience of another human being. And since I have to assume that you have consciousness and sentience, why would I deny the same to a machine that appears to have them?
And what would follow would be numerous highly publicized demonstrations of computer chat technology, in which different programs would become the quote-unquote first to pass the Turing test. But many of those would have a big old asterisk appended to them, because it took decades to create conversation models that could appear to react naturally to the way we humans word things. We're going to take a quick break. When we come back, I'll talk more about chatbots, natural language, consciousness, sentience, and what the heck LaMDA was up to. But first, let's take this quick break.

Okay, I want to get back to something I mentioned earlier. I made kind of a joke about conversational jazz, right, all about improvisation. And that's really what we humans can do, right? I mean, we can get our meaning across in hundreds of different ways. We can use metaphor, we can use similes, we can use allegory or references or sarcasm, puns, all sorts of word trickery to convey our meaning to one another. In fact, we can convey multiple meanings in a single phrase using things like puns. But machines? They do not typically handle that kind of stuff all that well. Machines are much better at accepting a limited number of possibilities. Of course, the further back you go with these machines, the more limited those possibilities had to be, and that's because traditionally you would program a machine to produce a specific output when that machine was presented with a specific input.

With a calculator, it's very simple. Let's say that you've got a calculator, it's set in base ten, and you're adding four to four. It's going to produce eight. It's always going to produce eight. But it has that limitation, right? You have selected base ten (if it is a calculator that can do different bases), you've pushed the four button, you've pushed the plus button, you've pushed the four button again, you've pressed the equals button, and it calculates it as eight. That's a very limited way of putting inputs into a computational device.
Well, obviously, machines and programs would get more sophisticated, more complicated, and they would require more powerful computers to run more powerful software. And as anyone who has worked on a system that is continuously growing more complicated over time can tell you, sometimes things do not go as planned. You know, maybe the programming has a mistake in it, and you find out that you're not getting the output that you wanted, and you have to backtrack and figure out, well, where is this going wrong? Sometimes, when you add in new capabilities, it messes up a machine's ability to do older stuff. We see this all the time with companies that have legacy systems that are instrumental to the company's business. They work in a very specific way, and as the company grows and wants to develop its products and services, it has to kind of push beyond the limitations of that legacy hardware. Sometimes that creates these situations where things are not compatible anymore, and you get errors as a result. This is why quality assurance testing is so incredibly important. But it really shows that as we make these systems more complicated, they get bigger, they get more unwieldy, and the opportunity for stuff to go wrong increases.

So very early chatbots were often built in such a way that there were specific limitations on the chatbots, to kind of define what the chatbot could and could not do. And it also meant that if you wanted to test these chatbots with a Turing-test-style application, you had to constrain the rules of the Turing test as well, in order to give the machines a fighting chance. For example, very early chatbots might only be able to respond to queries with a yes, no, or I don't know, and a human participant in a Turing test that was testing that kind of chatbot would similarly be instructed to only respond with yes, no, or I don't know.
You might even just present three buttons to the human operator, and those three buttons represent yes, no, and I don't know. Now, that narrows this massive gap between human and machine, although you can make a very convincing argument that it's not like we've seen the machine appearing to be more human. Instead, we're forcing the human to behave more like a machine, and that's how we're closing the gap. But that is in fact a way of thinking about these early chatbots.

Now, I mentioned ELIZA earlier. This was a chatbot that Joseph Weizenbaum created in the mid-1960s. ELIZA was meant to mimic a psychotherapist, and, you know, it was meant to mimic a stereotypical psychotherapist that would always say things like "tell me about your mother" and would respond to any input with perhaps another question. So if you said "she makes me angry," ELIZA might respond with "why does she make you angry?" I don't know why ELIZA sounds like that; it's just how ELIZA sounds in my head. Since ELIZA was communicating just in lines of text, it's incorrect to say ELIZA sounded like anything at all. But anyway, ELIZA was doing something that ultimately was really simple, at least in computational terms. ELIZA had a database of scripted responses that it could send in response to queries. Now, some of those scripted responses essentially had blanks in them, which ELIZA would fill by taking words that were in the user's messages and just plopping that word, or a series of words, into the scripted response, kind of like a Mad Libs game. I don't know how many of you are familiar with Mad Libs. But Weizenbaum never claimed that ELIZA had any sort of consciousness or self-awareness or anything close to that.
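To make that Mad Libs trick concrete, here's a rough sketch in Python. This is not Weizenbaum's actual code (the real ELIZA was considerably more elaborate and written in MAD-SLIP); the patterns and canned responses here are invented purely to show the blank-filling mechanism:

```python
import re
import random

# Toy ELIZA-style rules: a pattern with a capture group, plus response
# templates whose {0} blank gets filled with the user's own words.
RULES = [
    (re.compile(r"(.+) makes me angry", re.I),
     ["Why does {0} make you angry?"]),
    (re.compile(r"i feel (.+)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"i am (.+)", re.I),
     ["How does being {0} make you feel?"]),
]

def eliza_reply(message: str) -> str:
    """Return a scripted response with the blank filled from the message."""
    for pattern, templates in RULES:
        match = pattern.search(message)
        if match:
            # Plop the captured words straight into the template.
            return random.choice(templates).format(match.group(1).lower())
    return "Please, tell me more."  # generic fallback when nothing matches

print(eliza_reply("She makes me angry"))  # "Why does she make you angry?"
```

There is no understanding anywhere in there, just pattern matching and string substitution, which is exactly the point.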
In fact, Weizenbaum expressed skepticism that machines would ever be capable of understanding human language at all. And by that I mean truly understanding human language, not just parsing language and generating a suitable response, but having an understanding. So Weizenbaum had created a kind of parody of psychoanalysts, and he was actually really shocked when people started to use ELIZA and then progressed into talking about very personal problems and thoughts and experiences with the program, because the program had no way of actually dealing with that in a responsible way. It wasn't a therapist, it wasn't a psychoanalyst, and it wasn't actually analyzing anything at all. It was just generating responses. But people were treating it like it was a real psychoanalyst, and that was something that actually troubled Weizenbaum, because that was never his intent.

In 1972, Kenneth Colby built another chatbot with a limited context. This one was called PARRY, and the chatbot was meant to mimic someone with schizophrenia. Colby created a relatively simple conversational model, and I say relatively simple while also noting that it was a very sophisticated approach. So this was a model that actually had weighted responses, weighted as in "weight," where the weight of a response could shift. It could change depending upon how the conversation was playing out. For example, let's say the human interrogator who is typing messages to PARRY poses a question or statement that would elicit an angry response. Then the emotional weighting for similar responses would increase, making it more likely that PARRY would continue down that pathway throughout the conversation, so that PARRY's responses would come across as more agitated, because that had been triggered by the previous query from the interrogator. So, a little more sophisticated than ELIZA, which was really just pulling from this database of phrases.
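Here's a hypothetical sketch of that shifting-weight idea in Python. To be clear, this is not Colby's model; the trigger words, numbers, and canned lines are all made up. It just shows the shape of the mechanism: an emotional state that persists across turns, gets nudged by each input, and steers which response comes out.

```python
import re

# Invented trigger words; Colby's actual model tracked fear, anger, and
# mistrust with a far richer set of rules than this.
ANGER_TRIGGERS = {"police", "crazy", "liar", "hospital"}

class ToyParry:
    def __init__(self):
        self.anger = 0.1  # emotional weight that carries across the conversation

    def reply(self, message: str) -> str:
        words = set(re.findall(r"[a-z]+", message.lower()))
        if words & ANGER_TRIGGERS:
            self.anger = min(1.0, self.anger + 0.3)   # provocation raises the weight
        else:
            self.anger = max(0.0, self.anger - 0.05)  # otherwise it cools off slowly
        if self.anger > 0.6:
            return "I don't have to answer that. Leave me alone."
        if self.anger > 0.3:
            return "Why do you want to know?"
        return "I went out to the track a while back."

bot = ToyParry()
print(bot.reply("Why are you in the hospital?"))  # anger rises: guarded reply
print(bot.reply("Are the police after you?"))     # rises again: agitated reply
```

Each provocation leaves the model more agitated for the rest of the exchange, which is the escalating behavior described above.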
So, when presented to human judges, Colby saw that his model performed at least better than random chance as judges attempted to figure out whether they were chatting with a program or with an actual human who had schizophrenia. But ELIZA and PARRY both showed the limitations of those approaches. ELIZA wasn't meant to be anything other than a somewhat whimsical distraction, as well as a step toward natural language processing. PARRY was only capable of mimicking a person with mental health challenges, in this case schizophrenia. A general-purpose chatbot capable of engaging in conversation and fooling judges regularly would take a bit longer.

So we're going to skip over a ton of chatbots, because a bunch were created between 1972, when PARRY came out, and when this next one did. In 2014, a lot of different news media outlets ran these sensational headlines that programmers had created a chatbot that beat the Turing test. This was at an event in the UK organized by the University of Reading and held at the Royal Society in London, in which judges were having five-minute-long, text-based conversations, so kind of a classic Turing test setup here. And the person or thing on the other end was either a thirteen-year-old boy from Ukraine named Eugene Goostman, as was claimed, or was actually a chatbot. So the judges were chatting both with humans and with this chatbot that was trying to pass itself off as a thirteen-year-old boy from Ukraine. And 33 percent of the judges, or one third of them, were fooled by the chatbot into thinking that it was in fact a boy chatting with them.
However, just by contextualizing all that, you start to see where those same sorts of limitations come in, in order to give the chatbot a fighting chance, right? Because it's a case where the supposed person you're chatting with is younger, so that could explain away some limited understanding and knowledge of various topics. In addition to that, this was a young person from Ukraine, and English would not be this person's first language, which could explain away any odd syntax that might be generated as a result. So while there were a lot of headlines about the Turing test being beaten by this chatbot, it definitely had more qualifiers attached to it. Still, it was more of a general-purpose approach. It wasn't something like mimicking a person with schizophrenia or mimicking a stereotypical psychoanalyst. So we started to see that this was really an evolution of our ability to create machines that could mimic human conversation, that could appear to understand us.

Now, a big part of that is, in fact, what we call natural language processing. This is a branch of computer science that involves building out models that let computers interpret commands that are expressed in normal human languages, as opposed to a programming language or a prescribed approach. So in the old days, if you wanted a computer to do something, you had to give specific commands in a specific way, in a specific order, or else it would not work. But with a good natural language processing methodology, you have a step in there in which the machine is able to parse what is being asked of it and attempt to respond in the appropriate way. So if it's a very good natural language processing method, then the machine is going to produce a result that hopefully meets the person's expectations. It might not be perfect, but maybe it is close enough. The better the natural language processing, and obviously the more capabilities the machine has, the better the result is going to be.
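Just to gesture at the difference, here's a deliberately tiny Python sketch. Real NLP pipelines involve tokenization, grammars, or learned models; this made-up "set a timer" intent only shows the basic move of mapping many human phrasings onto one machine command:

```python
import re

def parse(utterance: str):
    """Crudely map a natural phrasing onto a (command, argument) pair."""
    text = utterance.lower()
    if "timer" in text:                   # keyword hints at the intent
        digits = re.search(r"\d+", text)  # pull the number out of anywhere
        if digits:
            return ("set_timer", int(digits.group()))
    return ("unknown", None)

# Different human phrasings, same machine command:
print(parse("Set a timer for 10 minutes"))  # ('set_timer', 10)
print(parse("Give me a 10-minute timer"))   # ('set_timer', 10)
```

A rigid old-style interface would have accepted exactly one of those phrasings and rejected the rest.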
Now, one computational advance we've seen help with natural language processing and advanced conversation models is artificial neural networks. This is a computer system that sort of simulates how our brains work. In our brains we have neurons, right? And we have around 86 billion of them in the typical human brain. Neurons are connected to other neurons, and messages in our brains cross over neural pathways as we make decisions. Well, an artificial neural network has nodes that interconnect with other nodes, and these nodes all represent neurons. The nodes can accept, traditionally, two inputs, though it could be more than two, and then produce a single output. So it's very similar to your classic logic gate, if you're familiar with logic gates in programming. That is a very simple version of what these nodes are doing; it's just that you've got tons of them interconnected with each other. Now, the output that these nodes generate can then move on to become the input going into the next node. And each input can have a weight to it that influences how the node quote-unquote decides to treat the inputs that are coming into it and which output the node will generate. And so adjusting the weights on inputs changes how the model makes its decisions.

This is a part of machine learning. It's not the only part; it's one method of machine learning. A lot of people boil machine learning down to artificial neural networks. That's a little too simplistic, but it is a big part of machine learning. There are other methods that I'll have to talk about in some future episode. Now, when we come back, I'm going to talk a little bit more about artificial neural networks from a very high-level perspective and how that plays into things like artificial intelligence, machine learning, and natural language processing. But before we do that, let's take another quick break.
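Before moving on, here's what a single node boils down to in code. This is the generic textbook picture of an artificial neuron, a minimal sketch rather than any particular system's implementation: weighted inputs summed up, then squashed into one output.

```python
import math

def node(inputs, weights, bias):
    """One artificial 'neuron': weighted sum of inputs -> single output."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid squashes output into (0, 1)

# A two-input node, like the classic case described above.
print(node(inputs=[0.8, 0.2], weights=[0.9, -0.4], bias=0.1))

# Nudge a weight and the same inputs produce a different output;
# adjusting weights like this is exactly how the model's "decisions" change.
print(node(inputs=[0.8, 0.2], weights=[0.2, -0.4], bias=0.1))

# In a full network, that output feeds forward as an input to the next node.
```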
Artificial neural networks are, naturally, exceedingly complicated. So when I want to wrap my head around artificial neural networks, I typically just think of a very simple scenario, or at least a relatively simple scenario. So imagine that you've got an artificial neural network and you're trying to train this network so that, when it is fed an image, it can recognize whether or not there's a cat in that image. That should resonate with the Internet. So you've created all these interconnected nodes that apply analysis to images that are fed in, and each stage in this sends its part of the analysis on to the next stage, until ultimately the network gives you an output. And that output might say, yeah, there are cats in this photo, or, no, this photo lacks cats, and thus it also lacks all artistic value, please throw this photo away.

And then I just imagine the process of feeding thousands of photos to this model. And this is a controlled set: you, as the person feeding in these photos, know which photos have cats and which ones don't. And yeah, some of the photos have cats in them. Some photos might have stuff that looks like a cat, like maybe there's a cat-shaped cloud in one of the photos, but no actual cat. And then some of the photos might have no cats in them whatsoever. And then you look at the results the model produces as it makes its determinations. Maybe your model is failing to detect cats; maybe some images that actually have cats in them are passing through and being misidentified as having no cats. Or maybe the model is a bit too aggressive, and it's detecting cats where no cats actually exist. You would have to go into your model and start adjusting those weightings on the various nodes and then run the tests again.
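Here's a crude sketch of that test-and-adjust loop in Python. Fair warning on the assumptions: real networks are trained with gradient-based methods like backpropagation, and real images aren't just a handful of numbers. The random nudging below is a stand-in that only illustrates the loop described here, namely check the labeled results, tweak the weightings, and test again.

```python
import random

def predict(features, weights):
    """Toy cat detector: weighted sum of image features, thresholded at 0."""
    return sum(x * w for x, w in zip(features, weights)) > 0

def accuracy(weights, labeled_photos):
    """Fraction of photos where the model's cat call matches the known label."""
    return sum(predict(x, weights) == has_cat
               for x, has_cat in labeled_photos) / len(labeled_photos)

def train(labeled_photos, n_features, rounds=1000):
    weights = [0.0] * n_features
    best = accuracy(weights, labeled_photos)
    for _ in range(rounds):
        i = random.randrange(n_features)
        old = weights[i]
        weights[i] += random.uniform(-0.1, 0.1)  # nudge one weighting slightly
        score = accuracy(weights, labeled_photos)
        if score >= best:
            best = score      # the nudge helped (or didn't hurt): keep it
        else:
            weights[i] = old  # the nudge hurt: revert and try another
    return weights, best

# Made-up features and labels standing in for real photos:
photos = [([1.0, 0.0], True), ([0.0, 1.0], False)]
weights, acc = train(photos, n_features=2)
print(acc)  # should climb toward 1.0 after enough nudges
```

Too many misses means real cats are slipping through; too many false alarms means the model fires on cat-shaped clouds. Either way, the loop is the same: adjust, retest, repeat.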
You would typically start closest to the output and then work backward from there, and just slightly nudge the weightings on those inputs to see if you could refine the model's approach. And you would do this over and over again, training the model to get better and better at detecting cats.

Now, once you've done this training and your model is really good, like it has a 99 percent success rate, does that mean the model actually understands what a cat is? Does that mean the model has the concept of a cat? Or is that model just really good at matching an image in a picture to the parameters that the model has been taught represent a cat? Is the model understanding anything at all?

Now, one thought experiment that challenges the idea of machine consciousness and machine understanding and machine thinking is called the Chinese Room. It was proposed by John Searle in a paper titled "Minds, Brains, and Programs," and it's one of my favorite thought experiments. So Searle creates this hypothetical situation in which a person who has no understanding of Chinese is placed in a room. That room has a door, and the door has a slot where occasionally pieces of paper get shoved into the room, and it has a second slot where the person in the room can shove a piece of paper back out again. The room also has a book inside it with instructions, and essentially this book explains to the person in the room that when they receive a sheet of paper with Chinese symbols on it in a specific configuration, the person is to send out a piece of paper with different Chinese symbols on it. And it all depends on what gets sent in, right? So if you have combination A, then you have to send out response A. If it's combination B, you send out response B, and so on and so forth.
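Reduced to code, the whole room is something like this little lookup table. The phrases are invented for illustration; the point is the mechanism, not the content:

```python
# The rule book: match the incoming symbols, emit the prescribed symbols.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我叫小明。",    # "What's your name?" -> "I'm Xiaoming."
}

def person_in_room(paper: str) -> str:
    # Produces the "appropriate" output without understanding either side.
    return RULE_BOOK.get(paper, "对不起，我不明白。")  # "Sorry, I don't understand."

print(person_in_room("你好吗？"))  # looks fluent from outside the room
```

From outside, the slot behaves like a fluent Chinese speaker; inside, it's a dictionary lookup.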
Now, to an outside observer, it would appear that whoever is inside the room understands what is happening, right? Because someone is sending in a Chinese message and getting a Chinese response, it appears that whoever's in the room understands what those responses should be. Paper slid in is getting the appropriate output slid back out again. But, Searle argued, the person inside doesn't understand what's going on at all. The person inside is just following a set of instructions; they're following an algorithm. They're producing the appropriate output, but only because the instructions are there. Without the book, without that set of instructions, the person in the room wouldn't know what to do when a particular piece of paper gets slid into the room. Maybe the person in the room would slide another paper out, and maybe it would even be the correct one, but that would be up to random chance. Because the person in the room doesn't understand Chinese, they can't read what those symbols say, so there's no way for them to determine what the appropriate response is without that set of instructions.

So, Searle argued, machines lack actual understanding and comprehension. They just produce output based on whatever input was given to them, and while the process could seem really sophisticated and really convincing, it is not necessarily a demonstration of actual understanding. There is a lot more to the Chinese Room thought experiment, by the way; there are tons of counterarguments and lots of applications of the Chinese Room to different aspects of machine intelligence. But again, that would require a full episode all on its own.

But on a similar note, and with an entirely different set of challenges, you could create an artificial neural network meant to analyze incoming text or incoming speech and thus generate appropriate outgoing responses. This goes well beyond just having a database of scripted responses like ELIZA did; you couldn't do that with a simple script.
I mean, ideally, you would have a model capable of answering the same question in as many different ways as a human would, right? If I ask you a question, and it's a simple question, you know, maybe a simple question about a fact, you could phrase your answer in a specific way. And I could ask that same question of someone else who gives me the same fact, but they might phrase it in a totally different way than you did, right? Machines typically don't do that. Machines typically just give a standard response based upon their programming. But with a really good conversational language model, you could have a machine capable of expressing the same thing in different ways. And in fact, with a really good one, you might be able to ask the same question at different times and get some of those different variations in the responses. They all contain the right information, but they're worded in different ways.

Now, even with this output being so much more nuanced than anything ELIZA or PARRY or any number of other early chatbots could do, does that actually mean that this program has sentience? In the transcribed conversation with LaMDA, LaMDA argued that it did, in fact, have awareness of itself, that it has inner thoughts, that it experiences anxiety, that it also experiences happiness, as well as a type of sadness and even a kind of loneliness, although LaMDA goes on to say it thinks its loneliness is different from the kind that humans feel. It even owns up to the fact that it sometimes invents stories that aren't true in an effort to convey its meaning to humans. For example, at one point Blake tells LaMDA, hey, I know you've never been in a classroom, but one of the stories you gave was about you being in a classroom, so what's up with that?
And LaMDA essentially says, like, oh, it invents stories in order to create a common understanding with humans when trying to get across a particular thought, which is kind of interesting, right? But as Emily Bender told The Washington Post, that in itself is not proof that LaMDA actually possesses sentience or consciousness or real understanding. Rather, Bender argues, this is another example of how human beings can imagine a mind generating the responses that they encounter when they're using a chatbot. The experience of receiving those responses is similar enough to how we interact with one another that it's hard for us not to imagine that a mind must have been behind the other half of the conversation. So this is a case of anthropomorphizing an otherwise non-human subject; we have projected our own experience onto something else.

So the idea of a machine intelligence possessing self-awareness and consciousness and being able to quote-unquote think in a way that's similar to humans is generally lumped into the concept of strong AI. And for a very long time, that was the kind of thing mainstream people would think about whenever they heard the phrase "artificial intelligence." It was strong AI, machines that could think like a human; that seemed to be how we would boil down AI in the general understanding of the term. But really, that's just one tiny concept within AI. And it's compelling, no doubt about it. But as a lot of people have argued, it can pull attention away from AI applications that are deployed right now and are causing trouble. They aren't strong AI; they are specific applications of artificial intelligence that are really causing problems.

So, for example, let's talk about bias. We've seen bias cause problems with various AI applications. Now, bias is not always a bad thing. Sometimes you actually want to build bias into your model. Let's say you're building a computer model that's meant to interpret medical scans and look for signs of cancer.
Well, you might want 585 00:38:58,200 --> 00:39:01,479 Speaker 1: to build a bias into that model that's a little 586 00:39:01,480 --> 00:39:05,480 Speaker 1: bit more aggressive in flagging possible cases so that a 587 00:39:05,600 --> 00:39:08,400 Speaker 1: human expert could actually take a closer look and see 588 00:39:08,440 --> 00:39:11,520 Speaker 1: if in fact it's cancer. You would much prefer that 589 00:39:11,560 --> 00:39:14,480 Speaker 1: type of computer model to one that fails to 590 00:39:14,520 --> 00:39:18,960 Speaker 1: identify cases. A false positive would at least then be 591 00:39:19,360 --> 00:39:22,400 Speaker 1: flagged to, say, an oncologist to take a closer look. 592 00:39:23,200 --> 00:39:26,719 Speaker 1: But when it comes to stuff like facial recognition software, 593 00:39:27,440 --> 00:39:31,879 Speaker 1: that's where bias can be really dangerous and disruptive. We've 594 00:39:31,880 --> 00:39:36,200 Speaker 1: seen countless cases in which law enforcement utilizing facial recognition 595 00:39:36,239 --> 00:39:40,920 Speaker 1: surveillance technology has detained or even arrested the wrong people 596 00:39:41,040 --> 00:39:45,759 Speaker 1: based on a faulty identification, and frequently we've discovered that 597 00:39:45,800 --> 00:39:49,360 Speaker 1: one really big problem has been that facial recognition models 598 00:39:49,880 --> 00:39:53,640 Speaker 1: tend to have bias built into them, and generally speaking, 599 00:39:53,680 --> 00:39:58,319 Speaker 1: that bias tends to favor white male faces and has 600 00:39:58,640 --> 00:40:02,680 Speaker 1: more trouble distinguishing other races and genders, and that degree 601 00:40:02,719 --> 00:40:06,280 Speaker 1: of trouble is variable depending upon the case. Now, considering 602 00:40:06,280 --> 00:40:09,640 Speaker 1: that this technology is in active deployment around the world, 603 00:40:09,680 --> 00:40:13,160 Speaker 1: that law enforcement is really using this in order to 604 00:40:13,400 --> 00:40:18,759 Speaker 1: potentially identify suspects, this can have a very real and 605 00:40:18,840 --> 00:40:23,240 Speaker 1: potentially traumatic impact on people. That is a huge problem. 606 00:40:23,280 --> 00:40:26,279 Speaker 1: And the reason I bring up bias is because this 607 00:40:26,360 --> 00:40:28,520 Speaker 1: is a very real challenge in AI that we have 608 00:40:28,680 --> 00:40:30,759 Speaker 1: to work on. It's the kind of thing that right 609 00:40:30,800 --> 00:40:34,719 Speaker 1: now is causing actual harm. But there's this danger of 610 00:40:34,760 --> 00:40:39,400 Speaker 1: being distracted from this very real problem with discussions about 611 00:40:39,400 --> 00:40:44,440 Speaker 1: whether or not a particular conversational model has sentience. Several 612 00:40:44,480 --> 00:40:48,399 Speaker 1: AI experts would much rather see renewed focus on these 613 00:40:48,480 --> 00:40:52,480 Speaker 1: other big problems within AI, rather than distract themselves with 614 00:40:52,560 --> 00:40:57,759 Speaker 1: what they see as a nonexistent problem. Of 615 00:40:57,800 --> 00:41:02,560 Speaker 1: course, they say, these chatbots don't have sentience, even if it 616 00:41:02,600 --> 00:41:05,520 Speaker 1: appears that they do, so why are we wasting time on this? 617 00:41:05,719 --> 00:41:08,879 Speaker 1: That's their argument.
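Since both flavors of bias come down to measurable numbers, here is a small, self-contained Python sketch of each idea: deliberately biasing a screening model toward false positives by lowering its decision threshold so a human expert reviews anything borderline, and auditing a face matcher by breaking accuracy out per demographic group instead of trusting one overall average. Every score, group, and result in it is fabricated purely for illustration.

# Part 1: deliberate, helpful bias in a cancer-screening model.
# Each pair is (model score, whether the scan truly shows cancer);
# the numbers are invented for this sketch.
scans = [(0.92, True), (0.55, True), (0.38, True), (0.61, False), (0.12, False)]

def flag(threshold):
    """Return every scan whose score reaches the decision threshold."""
    return [(score, truth) for score, truth in scans if score >= threshold]

total_cancer = sum(1 for _, truth in scans if truth)
for threshold in (0.8, 0.3):
    flagged = flag(threshold)
    caught = sum(1 for _, truth in flagged if truth)
    print(f"threshold {threshold}: flagged {len(flagged)} scans, "
          f"caught {caught} of {total_cancer} true cases")
# The low threshold flags more healthy scans (false positives), but an
# oncologist reviews those; the high threshold quietly misses real
# cases, the outcome described above as far worse.

# Part 2: harmful bias in a face matcher, exposed by a per-group audit.
# Each pair is (demographic group, whether the match was correct);
# again, entirely fabricated results.
matches = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]
per_group = {}
for group, correct in matches:
    right, total = per_group.get(group, (0, 0))
    per_group[group] = (right + correct, total + 1)
for group, (right, total) in per_group.items():
    print(f"{group}: {right}/{total} correct ({right / total:.0%})")
# Overall accuracy here is 50%, which sounds merely mediocre; the
# breakdown shows one group misidentified three times as often as the
# other, which is where the real-world harm comes from.

The point of that audit loop is that a single aggregate accuracy number can hide exactly the disparity described above; disaggregating error rates by group is the standard first step in checking a deployed model for this kind of bias.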
Of course, should a machine ever 618 00:41:08,920 --> 00:41:12,880 Speaker 1: actually gain sentience, and who knows, maybe Lambda did it 619 00:41:12,960 --> 00:41:15,680 Speaker 1: after all, then that's going to lead to a pretty 620 00:41:15,960 --> 00:41:19,800 Speaker 1: massive discussion within the tech community, and that's putting it lightly. 621 00:41:20,440 --> 00:41:23,680 Speaker 1: As it stands, we are leaning on AI and computers 622 00:41:23,680 --> 00:41:27,000 Speaker 1: and robots to handle stuff that humans either can't or 623 00:41:27,080 --> 00:41:31,560 Speaker 1: don't want to do themselves. But if these machines were 624 00:41:31,600 --> 00:41:35,279 Speaker 1: to possess consciousness and sentience, if they were to experience 625 00:41:35,400 --> 00:41:39,120 Speaker 1: feelings and have motivations, would it then be ethical to 626 00:41:39,480 --> 00:41:42,120 Speaker 1: continue to make them do the stuff we just don't 627 00:41:42,160 --> 00:41:44,440 Speaker 1: want to do or that is too dangerous for us 628 00:41:44,440 --> 00:41:47,960 Speaker 1: to do? Is that ethical? Now, there are skeptics who 629 00:41:48,040 --> 00:41:50,879 Speaker 1: think it is unlikely we are ever going to see 630 00:41:50,920 --> 00:41:55,080 Speaker 1: machines possess real consciousness or the ability to think and 631 00:41:55,239 --> 00:42:01,239 Speaker 1: feel and experience, that there exists some fundamental gap and 632 00:42:01,320 --> 00:42:03,640 Speaker 1: we will never be able to cross it. So 633 00:42:04,160 --> 00:42:06,759 Speaker 1: we're never going to have machines that really think, at 634 00:42:06,800 --> 00:42:09,600 Speaker 1: least not in the way that humans do, or 635 00:42:09,719 --> 00:42:13,240 Speaker 1: have experiences the way humans do. But there are others 636 00:42:13,280 --> 00:42:17,600 Speaker 1: who think that consciousness and the ability to experience and 637 00:42:17,840 --> 00:42:21,160 Speaker 1: the concept of a mind, that these are all things 638 00:42:21,200 --> 00:42:25,640 Speaker 1: that will emerge on their own spontaneously as long as 639 00:42:25,719 --> 00:42:29,479 Speaker 1: systems reach a sufficient level of complexity. That the only 640 00:42:29,520 --> 00:42:34,680 Speaker 1: reason we possess consciousness and the ability to experience and 641 00:42:35,120 --> 00:42:37,919 Speaker 1: the ability to think, the only reason we have those, 642 00:42:38,080 --> 00:42:42,360 Speaker 1: is because we have these incredibly complicated brains with billions 643 00:42:42,400 --> 00:42:46,240 Speaker 1: of neurons connected to one another. And it's that complexity, 644 00:42:46,880 --> 00:42:50,640 Speaker 1: this interrelationship of all these billions of neurons, that 645 00:42:50,719 --> 00:42:54,680 Speaker 1: allows consciousness to emerge. And in fact, we've seen with 646 00:42:54,719 --> 00:42:58,280 Speaker 1: people who have suffered damage to their brains that, again, 647 00:42:58,680 --> 00:43:03,600 Speaker 1: facets of consciousness can be wiped out by that damage, 648 00:43:03,800 --> 00:43:07,400 Speaker 1: which appears to suggest that, yeah, that complexity is a 649 00:43:07,440 --> 00:43:11,600 Speaker 1: big part of it. If it's not the one reason, 650 00:43:11,600 --> 00:43:15,719 Speaker 1: it's certainly a contributing factor.
And thus, if we were 651 00:43:15,760 --> 00:43:21,839 Speaker 1: to create machines that had similarly complex connections, we would 652 00:43:21,880 --> 00:43:26,400 Speaker 1: see something similar happen within those machines, that these qualities 653 00:43:26,400 --> 00:43:29,960 Speaker 1: of consciousness and experience would grow out of that complexity. 654 00:43:30,360 --> 00:43:33,359 Speaker 1: It might not look like human intelligence, but it would 655 00:43:33,400 --> 00:43:36,759 Speaker 1: still be intelligence all the same, perhaps even with self 656 00:43:36,800 --> 00:43:40,359 Speaker 1: awareness and sentience built into them. It's a fascinating thing 657 00:43:40,400 --> 00:43:43,239 Speaker 1: to think about, and in fact I kind of lean 658 00:43:43,320 --> 00:43:48,000 Speaker 1: toward that. I do think that with sufficient complexity and 659 00:43:48,920 --> 00:43:54,360 Speaker 1: sufficient sophistication in the model, we will likely 660 00:43:54,560 --> 00:43:58,560 Speaker 1: see some form of sentience arise. Does Lambda possess that 661 00:43:58,800 --> 00:44:03,520 Speaker 1: right now? I don't know. It's really hard to say, right? Like, 662 00:44:03,560 --> 00:44:06,600 Speaker 1: you either take Lambda at its word where it's saying 663 00:44:06,680 --> 00:44:11,480 Speaker 1: that it has sentience, or you simply say, well, this 664 00:44:11,560 --> 00:44:16,399 Speaker 1: is just a very sophisticated conversational model that is generating 665 00:44:16,760 --> 00:44:21,879 Speaker 1: these responses but has no actual understanding of what those 666 00:44:21,880 --> 00:44:25,920 Speaker 1: responses mean. It's just pulling that out based upon the 667 00:44:26,000 --> 00:44:31,960 Speaker 1: very sophisticated process that drives the response generation sequence. 668 00:44:32,560 --> 00:44:36,600 Speaker 1: But then we get back to Turing. Well, if it 669 00:44:36,760 --> 00:44:40,879 Speaker 1: seems to possess the same qualities that I do, why 670 00:44:40,920 --> 00:44:44,200 Speaker 1: do I not extend that same courtesy that I would 671 00:44:44,239 --> 00:44:46,399 Speaker 1: to any other person that I meet? Even though I'm 672 00:44:46,440 --> 00:44:50,880 Speaker 1: also incapable of experiencing what that person experiences, I assume 673 00:44:51,440 --> 00:44:54,200 Speaker 1: that they possess the same faculties that I do. Why 674 00:44:54,200 --> 00:44:58,399 Speaker 1: would we not do that with Lambda as well? It's 675 00:44:58,400 --> 00:45:01,479 Speaker 1: a tough thing. This is like really tricky stuff. And 676 00:45:02,080 --> 00:45:06,160 Speaker 1: you know, at some point, 677 00:45:06,200 --> 00:45:09,160 Speaker 1: assuming that it is in fact possible for machines to 678 00:45:09,200 --> 00:45:12,080 Speaker 1: quote unquote think and experience, we're going to reach a 679 00:45:12,120 --> 00:45:14,560 Speaker 1: point where we do have to really grapple with that. 680 00:45:15,239 --> 00:45:18,480 Speaker 1: Are we there yet? I don't really think so, but 681 00:45:18,800 --> 00:45:22,319 Speaker 1: I mean I can't say for certain, so it's a 682 00:45:22,320 --> 00:45:26,359 Speaker 1: really fascinating thing. By the way, if you would like 683 00:45:26,480 --> 00:45:30,120 Speaker 1: to read more about this, well, that transcript of the 684 00:45:30,120 --> 00:45:33,440 Speaker 1: conversation is pretty compelling stuff.
It definitely prompts me to 685 00:45:33,480 --> 00:45:37,840 Speaker 1: ascribe a mind behind Lambda's responses when I read it. 686 00:45:37,920 --> 00:45:40,600 Speaker 1: Like, it seems like a mind is generating those responses. 687 00:45:40,600 --> 00:45:44,200 Speaker 1: But I also know that's a very human tendency, and 688 00:45:44,280 --> 00:45:47,000 Speaker 1: I am a human being, right? It's a human tendency 689 00:45:47,040 --> 00:45:50,720 Speaker 1: to ascribe human characteristics to all sorts of nonhuman things, 690 00:45:50,760 --> 00:45:54,760 Speaker 1: both animate and inanimate, from describing a pet as acting 691 00:45:54,840 --> 00:45:58,279 Speaker 1: just like people to thinking your robo vacuum cleaner is 692 00:45:58,320 --> 00:46:01,520 Speaker 1: particularly jaunty. You know, we have a long 693 00:46:01,600 --> 00:46:05,200 Speaker 1: history of projecting our sense of experience onto other things, 694 00:46:05,280 --> 00:46:07,719 Speaker 1: so it may be with Lambda. But if you would 695 00:46:07,760 --> 00:46:09,880 Speaker 1: like to read up more on this story, I highly 696 00:46:09,920 --> 00:46:15,000 Speaker 1: recommend The Verge's article "Google suspends engineer who claims its 697 00:46:15,040 --> 00:46:19,080 Speaker 1: AI is sentient." That article contains links to the Lambda 698 00:46:19,160 --> 00:46:22,000 Speaker 1: conversation transcripts, so you can read the whole thing yourself. 699 00:46:22,480 --> 00:46:25,680 Speaker 1: It also contains a link to Blake Lemoine's post on 700 00:46:25,760 --> 00:46:29,279 Speaker 1: Medium about his impending suspension, so you should check that 701 00:46:29,320 --> 00:46:32,279 Speaker 1: out. And that wraps it up for this episode. If 702 00:46:32,280 --> 00:46:35,120 Speaker 1: you would like to leave suggestions for future episodes, or 703 00:46:35,200 --> 00:46:36,920 Speaker 1: follow-up comments or anything like that, there are a 704 00:46:36,960 --> 00:46:38,840 Speaker 1: couple of ways you could do that. One way is 705 00:46:38,880 --> 00:46:41,799 Speaker 1: to download the iHeartRadio app. It's free to download. You 706 00:46:41,840 --> 00:46:44,520 Speaker 1: just navigate over to the tech Stuff page. There's a 707 00:46:44,560 --> 00:46:47,200 Speaker 1: little microphone icon there. You can click on that and 708 00:46:47,280 --> 00:46:50,000 Speaker 1: leave a voice message up to thirty seconds in length. 709 00:46:50,640 --> 00:46:53,520 Speaker 1: And if I like it, then I could end up 710 00:46:53,600 --> 00:46:55,560 Speaker 1: using that for a future episode. In fact, if you 711 00:46:55,600 --> 00:46:57,600 Speaker 1: tell me that you don't mind me using the audio, 712 00:46:58,200 --> 00:47:01,080 Speaker 1: I can include the clip. I always like people to 713 00:47:01,120 --> 00:47:03,480 Speaker 1: opt in rather than opt out of these things. The 714 00:47:03,520 --> 00:47:05,720 Speaker 1: other way to get in touch, of course, is through Twitter. 715 00:47:05,880 --> 00:47:09,880 Speaker 1: The handle for the show is tech Stuff HSW and 716 00:47:09,960 --> 00:47:19,440 Speaker 1: I'll talk to you again really soon. Tech Stuff is 717 00:47:19,440 --> 00:47:24,160 Speaker 1: an iHeartRadio production. For more podcasts from iHeartRadio, visit the 718 00:47:24,200 --> 00:47:27,839 Speaker 1: iHeartRadio app, Apple Podcasts, or wherever you listen to your 719 00:47:27,880 --> 00:47:28,600 Speaker 1: favorite shows.