Welcome to TechStuff, a production from iHeartRadio.

Hey there, and welcome to TechStuff. I'm your scary but not hairy host, Jonathan Strickland, executive producer with iHeartRadio, and how the tech are you? We are continuing our spooky episode series in the lead-up to Halloween twenty twenty-two. Apologies if you're from the future listening back on this episode; that's why the bizarre theme is popping up. We've already talked recently about stuff like vampire power and zombie computers, where I'm really grasping at tenuous connections between horror stuff and tech. But now it's time to tackle the ghost in the machine.

These days, that phrase is frequently bandied about in relation to stuff like artificial intelligence, but the man who coined the phrase itself was Gilbert Ryle in nineteen forty-nine, and he wasn't talking about artificial intelligence at all. He was talking about, you know, real intelligence, and he was critiquing a seventeenth-century philosopher's argument about the human mind. That philosopher was René Descartes, who famously wrote "cogito ergo sum": I drink, therefore I am. Sorry, it's gonna be impossible for me to talk about philosophers without quoting Monty Python, because I'm a dork. Okay, no, "cogito ergo sum" actually means "I think, therefore I am," or that's how we interpret it. But that wasn't really the bit that Gilbert Ryle was all riled up about.

See, Descartes believed in dualism. Now, I don't mean that he believed in showing up at dawn to sword fight other philosophers, though if someone wants to make a philosopher version of Highlander, I am down with that and I will back your Kickstarter. No, Descartes believed that the mind, and, you know, consciousness and sentience and all this kind of stuff, are separate from the actual gray matter that resides in our noggins. So, in other words, intelligence and awareness and all that other stuff that makes you you exists independently of your physical brain; there is this component of you that is beyond the physical.
Now, Ryle referred to this concept as the ghost in the machine: the idea that somehow all the things that represent you are, you know, ethereally independent of the brain itself. And Ryle rejected that notion. And you know, Ryle appears to be right. We know this because there are plenty of case studies focusing on people who have experienced brain injuries, either from some physical calamity or a disease or something along those lines. These events often transform people and change their behaviors and their underlying personalities. So damage to the physical stuff inside our heads can change who we are as people, which seems to dismiss this concept of dualism and suggests that the mind and the brain are not separate entities. Take that, Descartes!

Anyway, that's where we get the phrase "the ghost in the machine." The machine, in the original sense of the phrase, is a biological one. In nineteen sixty-seven, Arthur Koestler wrote a book called The Ghost in the Machine, or at least it was published in nineteen sixty-seven. He was a Hungarian-born, Austrian-educated journalist, and his book The Ghost in the Machine was an attempt to examine and explain humanity's tendency toward violence, as well as going over the mind-body problem that Ryle had addressed when he coined the phrase to begin with in nineteen forty-nine.

All right then, flash forward a couple of decades to nineteen eighty-one, and we get the title of what was the fourth studio album of the Police, you know, Sting and the Police. Now, I say this in case you were thinking that Sting and company were naming their album off the technological use of the term "ghost in the machine," but they were not. Sting, good old Gordy, had read Koestler's book, and he took the album's title from the book title. In case you're wondering, that's also the album that brought us songs like "Every Little Thing She Does Is Magic" and "Spirits in the Material World."
For a really recent exploration of dualism, well, arguably it's more than dualism, I guess, you can actually watch Pixar's Inside Out. That's a film that solidified my reputation as an unfeeling monster among my friends, because I didn't feel anything when Bing Bong meets his final fate. I just couldn't care. I mean, he's not even real in the context of the film, let alone in my world, so why would I? Okay, never mind. Anyway, in Inside Out, we learn that our emotions are actual anthropomorphic entities living inside our heads that share the controls to our behavior; that we are, in effect, governed by our emotions; and that our emotions, in turn, are the responsibilities of entities like Joy and Anger and Disgust. Descartes probably would have been thrilled, and Ryle likely would have rolled his eyes. Anyway, the trope of having a little voice inside your head that is somehow separate from you and also you is a really popular one. I always think of Homer Simpson, who will often find himself arguing with his own brain for comedic effect. It's another example of dualism in popular culture.

But the idiom "the ghost in the machine" survived its initial philosophical and journalistic trappings, and now folks tend to use it to describe stuff that's actually in the technological world. We're talking about machines as in the stuff that we humans make, as opposed to the stuff that we are. Generally in tech, the phrase describes a situation in which a technological device or construct behaves in a way counter to what we expect or want. At least, that was the way it was used for quite some time. So, for example, let's say that you've got yourself a robotic vacuum cleaner, and you've set the schedule so that it's only going to run early in the afternoon.
And then one night, you wake up hearing the whirring and bumping of your Roomba as it aimlessly wanders your home in search of dirt and dust to consume. You spend a few sleepy moments wondering if there's some sort of conscientious intruder who's made their way into your house and is now tidying up, before you realize, no, it's that darn robot vacuum cleaner. There's a ghost in the machine. It's decided on its own to come awake and start working. Now, alternatively, maybe you just goofed up when you created the schedule and you mixed up your a.m.s and your p.m.s. That's also a possibility. But you know, sometimes technology does behave in a way that we don't expect. Either there's a malfunction, or it just encounters some sort of scenario that it was not designed for, and so the result it produces is not the one we want. We tend to cover that up with this blanket explanation of ghosts in the machine, which kind of stands as a placeholder until we can really suss out what's going on underneath.

Programmers sometimes use the phrase "ghost in the machine" to describe moments where you get something unexpected while you're coding. Like, you get an unexpected result: you've coded something to produce a specific outcome and something else happens instead. The programmer didn't intend for this result to happen, and so therefore the cause must be external, right? It's got to be some sort of ghost in the machine that's causing this to go wrong. Now, I'm joshing a bit here, of course. Usually this is a way for a programmer to acknowledge that things are not going to plan, and that they need to go back and look over their code much more closely to find out what's going on. Where did things go wrong?
'Cause in coding, all it takes is, like, a skipped step, where you just missed a thing and went on one step beyond where you thought you were. Or maybe you made a typo and got some missed keystrokes in there. That can be all it takes to make a program misbehave, and so then you have to hunt down the bugs that are causing the problem. But you know, if a program is acting very oddly, you might call it a ghost-in-the-machine scenario.

Now, I'm not sure about the timeline for when folks in the tech space began to appropriate the phrase "ghost in the machine" for their work, because when it comes to stuff like this, you're really entering the world of folklore, and folklore is largely an oral tradition, where you are passing ideas along from one person to the next by speaking about them. There's not necessarily a firm written record, at least not one where you can point to something and say, this is where it began. Not like Ryle's version of the phrase, the ghost in the machine itself, which was published, so we can point to that and say, this is where the phrase comes from. This would be what Richard Dawkins would refer to as a meme: a little nugget of culture that gets passed on from person to person.

But the phrase also has been used in literature to refer to technological situations. Arthur C. Clarke, whom I've referenced many times on this show, as he's the guy who explained that any sufficiently advanced technology is indistinguishable from magic, also used the phrase "ghost in the machine" to talk about AI. Specifically, he used it in the follow-up to his novel 2001: A Space Odyssey. The follow-up is called, fittingly enough, 2010: Odyssey Two. Chapter forty-two of 2010 is titled "The Ghost in the Machine," and the focal point for that chapter is the characters discussing HAL, the AI system from 2001 that caused all the trouble.
So, a quick recap of 2001 for those of you not familiar with the story. And 2001 the film, the Stanley Kubrick film, gets pretty loosey-goosey, so we're just gonna focus on the main narrative here. You have a crew aboard an American spacecraft called Discovery One, which is on its way toward Jupiter. Now, the ship has a really sophisticated computer system called HAL 9000 that controls nearly everything on board. Also, a fun little trivia fact: HAL's initials are each one letter off from IBM, though Arthur C. Clarke would claim that was not intentional. Anyway, HAL begins to act erratically during the mission. At one point, HAL insists there's a malfunction in a system that appears to be perfectly functional, that it's working just fine. Then HAL systematically begins to eliminate the crew after learning they plan to disconnect the computer system because they suspect something's going wrong. HAL figures out that plan by monitoring a conversation that a couple of crew members have in a room where there are no microphones. So HAL can't listen in on this conversation, but HAL is able to direct a video feed to that room and read the lips of the crew members as they talk about their plan. So HAL continues to try and wipe everybody out, and he explains, or it, I shouldn't give HAL a gender, HAL explains that the computer system's being turned off would jeopardize the mission, and HAL cannot allow that to happen. HAL's prime directive is to make certain the mission is a success, so anything that would threaten its own existence has to be eliminated. There's also the implication that HAL does not want to cease to exist, that HAL has a personal motivation beyond seeing the mission to completion, and so HAL has no choice but to kill everyone. It's not that HAL wants to murder everyone; it's just that, in order to complete the mission, that's the only outcome that makes sense.
Eventually, one of the crew, Dave Bowman, manages to turn off HAL, and HAL wonders aloud what will happen afterward. Will its consciousness continue once its circuits are powered down? "Will I dream?" it says. Well, anyway, in Odyssey Two, you now have this group of astronauts and cosmonauts in a Soviet-American joint effort who are trying to figure out what happened with HAL. Was there something inherently flawed in HAL's programming? Did some external element cause HAL to malfunction? Did HAL's apparent consciousness emerge spontaneously all on its own? Was it all just a sophisticated trick, and HAL never really had any sort of consciousness, it only appeared to? The crew are kind of left to ponder this themselves. They don't have any easy answers. That's just one example of the ghost-in-the-machine concept being handled in entertainment. When we come back, I'll talk about a different one. But first, let's take this quick break.

Okay. Before the break, I talked about Arthur C. Clarke and his work with the concept of the ghost in the machine. Let's now leap over to Isaac Asimov, or at least an adaptation of Asimov's work. So, the film version of I, Robot, which really bears only a passing resemblance to the short stories that Isaac Asimov wrote, which were also collected in a book called I, Robot, uses the phrase "ghost in the machine." Isaac Asimov, by the way, in case you're not familiar with his work, is the guy who also proposed the basic laws of robotics, which are pretty famous as well. So, in the film, the character Dr. Alfred Lanning, who actually does appear in Asimov's stories, though he's a very different version than the one that appears in the film, says in a voiceover, quote, "There have always been ghosts in the machine. Random segments of code that have grouped together to form unexpected protocols. Unanticipated, these free radicals engender questions of free will, creativity, and even the nature of what we might call the soul.
"Why is it that when some robots are left in darkness, they will seek out the light? Why is it that when robots are stored in an empty space, they will group together rather than stand alone? How do we explain this behavior? Random segments of code? Or is it something more? When does a perceptual schematic become consciousness? When does a difference engine become the search for truth? When does a personality simulation become the bitter mote of a soul?" End quote.

There are some fun references in there, too. "Difference engine," for example, refers back to Charles Babbage, who designed the Difference Engine and the Analytical Engine, mechanical computing machines that predate the concept of electrical computers.

Now, this idea of consciousness, or the appearance of consciousness, emerging out of technology is one that often pops up in discussions about artificial intelligence, even within our world outside of fiction. Though usually we talk about this on a more hypothetical basis, unless you're Blake Lemoine, the former Google engineer who maintains that Google's Language Model for Dialogue Applications, a.k.a. LaMDA, is sentient. That's a claim that most other people dispute, by the way. So maybe I'll do another episode about it to really dig into it. But Lemoine, and I apologize because I don't know exactly how his last name is pronounced, has said a few times that he believes that this particular program has gained sentience. It brings us to another favorite topic among tech fans, which is, of course, the Turing test.

All right, so the Turing test. Alan Turing, who was kind of like the father of computer science in many ways, devised it as his response to the question "Can machines think?" Turing's answer was, that question has no meaning, you silly person, goodbye. I am, of course, paraphrasing. But as a sort of thought experiment, Turing proposed taking an older concept called the imitation game and applying it to machines, really to demonstrate how meaningless the question "Can machines think?" is.
So, what is the imitation game? Well, the name kind of gives it away. It's a game in which you have a player who asks at least two different people questions to determine which of them is an imitator. So, for example, you could have an imitation game in which one of the people is a sailor and the other is not a sailor, and the player would take turns asking each person questions to try and suss out which one is actually a sailor and which one is merely pretending to be a sailor. The game depends both on the strength of the imitator's skill at deception and on the player's ability to come up with really good questions. You could do this with all sorts of scenarios, and indeed there are tons of game shows that use this very premise.

Turing's thought experiment was to create a version of this in which a player would present questions to two other entities, one a human and one a computer. The player would only know these entities as X and Y, so they could ask questions of X and they could ask questions of Y, and get replies. The player would not be able to see or hear the other entities; all questions would have to be done in writing, you know, for example, typed and printed out. And at the end of the interview session, the player would be tasked with deciding whether X was the machine or the human, and whether Y was the machine or the human.
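Just to make the structure of Turing's version concrete, here is a minimal sketch in Python. Everything in it is a hypothetical stand-in (the canned machine reply, the labels, the questions); the detail that matters is that the player only ever sees written replies coming back from anonymous labels.

```python
import random

# Hypothetical stand-ins for the two hidden entities. The player never sees
# these functions, only the written replies that come back through the slot.
def human(question: str) -> str:
    return input(f"[hidden human] {question}\n> ")

def machine(question: str) -> str:
    # A real chat program would generate this; a canned reply keeps it simple.
    return "That is an interesting question."

def imitation_game(questions: list[str]) -> None:
    # Hide the identities behind the labels X and Y, in a random order.
    respondents = dict(zip(("X", "Y"), random.sample([human, machine], 2)))

    # The player questions X and Y in writing and reads written replies.
    for q in questions:
        for label, respond in respondents.items():
            print(f"{label}: {respond(q)}")

    # At the end, the player must judge which label hides the machine.
    guess = input("Which one is the machine, X or Y?\n> ").strip().upper()
    print("Correct." if respondents.get(guess) is machine else "Fooled.")

imitation_game(["Describe the smell of rain.", "What did you dream last night?"])
```

Nothing in that loop depends on what's inside the respondents, only on the replies they produce, which is exactly the point Turing was driving at.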
Turing was suggesting that as computers and systems got more sophisticated, and as things like chat programs got better at processing natural language and formulating responses (though that was a little past Turing's time), it would be increasingly difficult for a person to determine whether any given entity on the other end of a chat session was actually a person or a machine. And Turing also, rather cheekily, suggested that we might as well assume the machine has consciousness at that point, because when you meet another human being, you assume that that other human being possesses consciousness, even though you're incapable of stepping into that person's actual experience. You can't take over that person and find out, oh yes, they do have consciousness. You just assume they do. So if you and I were to meet, I assume you would believe I am in fact sentient and conscious, even on my bad days. So if we're willing to agree to this while simultaneously being unable to actually experience it and therefore prove it, then should we not grant the same consideration to machines that give off the appearance of sentience and consciousness? Do we have to prove it, or do we just go ahead and treat them as if they are, because that's what we would do if it was a human?

Now, Turing was being a bit dismissive about the concept of machines thinking. His point was that they might get very, very good at simulating thinking, and that might well be enough for us to just go ahead and say that's what they're doing, even if you could, you know, push a machine through the finest of sieves and find not one grain of actual consciousness within it. It doesn't hurt that defining consciousness, even in human terms, is something that we can't really do, or at least we don't have a unifying definition that everybody agrees upon. Sometimes we define consciousness by what it doesn't include rather than by what it is.
This is why I get antsy in philosophical discussions: being sort of a pragmatic dullard myself, it's hard for me to keep up. But let's jump ahead and talk about a related concept. This is one that I've covered a few times on TechStuff and that also points to this ghost-in-the-machine idea, and it's an argument against machine consciousness and strong AI. It is called the Chinese Room. John Searle, a philosopher, put forth this argument back in nineteen eighty, and the argument goes something like this. Let's say we've got ourselves a computer, and this computer can accept sheets of paper that have Chinese characters written on them, and the computer can then produce new sheets of paper; it can print out sheets that are also covered in Chinese characters, in response to the input sheets that were fed to it. These responses are sophisticated, they are relevant. They're good enough that a native Chinese speaker would be certain that someone fluent in Chinese was creating the responses, that someone who understood what was being fed in was producing the output. So, in other words, this system would pass the Turing test. But does that mean the system actually understands Chinese?

Searle's argument is no, it doesn't. He says, imagine that you are inside a room, and for the purposes of this scenario, you do not understand Chinese. So if you do understand Chinese, pretend you don't. Okay. There is a slot on the wall, and through this slot you occasionally get sheets of paper, and there are Chinese symbols on the sheets of paper. You cannot read these; you don't know what they stand for. You don't know anything about them other than that they're clearly Chinese characters on the paper.
However, what you do have inside this room with you is a big old book of instructions that tells you what to do when these papers come in. You use the instructions to find the characters that are on the input sheet of paper, and you follow a path of instructions to create the corresponding response, step by step, all the way until you have created the full response to whatever was sent to you. Then you push the response back out through the slot. Now, the person on the other side of the slot is going to get a response that appears to come from someone who is fluent in Chinese. But you're not. You're just following a preset list of instructions. You don't have any actual understanding of what's going on. You still don't know the meaning of what was given to you. You don't even know the meaning of what you produced. You're just ignorantly following an algorithm. So externally it appears you understand, but if someone were to ask you to translate anything you have done, you wouldn't be able to do it.

So Searle is arguing against what is called strong AI. Generally, we define strong AI as artificial intelligence that processes information in a way that is similar to how our human brains process information. Strong AI may or may not include semi-related concepts like sentience and self-awareness and consciousness and motivations and the ability to experience things, et cetera. Searle is saying that machines, even incredibly sophisticated machines, are incapable of reaching a level of understanding that true intelligence can reach; that we humans can grasp things on a level machines simply cannot, even if the machines can process information faster and in greater quantities than humans are able to.
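If it helps, the mechanics of the room (not the philosophy) fit in a few lines of Python. The rule book here is a made-up, absurdly tiny lookup table, purely for illustration; the operator matches shapes to shapes, and at no point does anything in the function resemble understanding.

```python
# A toy Chinese Room. A real rule book would be astronomically large; this one
# is make-believe. The operator only matches symbols it cannot read.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你懂中文吗？": "当然懂。",      # "Do you understand Chinese?" -> "Of course."
}

def operator(input_sheet: str) -> str:
    # Pure symbol manipulation: look up the squiggles, copy out the matching
    # squiggles. The default reply means "please say that again."
    return RULE_BOOK.get(input_sheet, "请再说一遍。")

# From outside the slot, the replies look fluent:
print(operator("你好吗？"))      # 我很好，谢谢。
print(operator("你懂中文吗？"))  # 当然懂。
```

Searle's claim is that a computer running even a vastly more sophisticated version of that operator is doing exactly this and nothing more: syntax without semantics.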
Another way of putting this: a calculator can multiply two very large numbers and get a result much faster than a human could, but the calculator doesn't understand any significance behind the numbers, or even whether there's a lack of significance. The calculator doesn't have that capability. Now, maybe Searle's argument is valid, and maybe, as Turing suggests, it doesn't even matter.

So let's talk about machine learning for a moment. Machine learning encompasses a broad scope of applications and approaches and disciplines, but I'll focus on one approach from a very high level. It's called generative adversarial networks, or GANs. Okay, as the name suggests, this model uses two systems in opposition to one another. On one side, you have a system that is generative; that is, it generates something. Maybe it generates pictures of cats. It doesn't really matter; we'll use cats for this example. What does matter is that this model is trying to create something that is indistinguishable from the real version of that thing. On the other side, you have a system called the discriminator. This is a system that looks for fakes. Its job is to sort out real versions of whatever it's designed to look for and to flag the ones that were generated, that aren't real. So with cats as our starting point, the discriminator is meant to tell the difference between real pictures that have cats and fake pictures of cats, or maybe just pictures that don't have cats in them at all.

So first you have to train up your models, and you might do this by setting the task. Let's start with the generative system. You create a system that is meant to analyze a bunch of images of cats, and you feed it thousands of pictures of cats, all these different cats, different sizes and colors and orientations and activities. And then you tell the system to start making new pictures of cats. And let's say that the first round the generative system produces is horrific.
H.P. Lovecraft would wet himself if he saw the images that this computer had created. You see that these horrors from the Great Beyond are in no way, shape, or form cats. So you go into the model and you start tweaking settings so that the system produces something, you know, less eldritch. And you go again, and you do this lots and lots of times, like thousands of times, until the images start to look a lot more, eh, cat-ish. You do something similar with the discriminator model. You feed it a bunch of images, some with cats, some without, or maybe some with, like, crudely drawn cats or whatever, and you see how many the system is able to suss out. And maybe it doesn't do that good a job. Maybe it doesn't identify certain real images of cats properly. Maybe it misidentifies images that don't have cats in them. So you go into the discriminator's model and you start tweaking it so it gets better and better at identifying images that do not have real cats in them.

And then you set these two systems against each other. The generative system is trying to create images that will fool the discriminator. The discriminator is trying to identify generated images of cats and only allow real images of cats through. It is a zero-sum game, winner takes all, and the two systems compete against each other, with the models for each updating repeatedly so that each gets a bit better between sessions. If the generative model is able to consistently fool the discriminator, like half the time, the generative model is pretty reliably creating good examples. This, by the way, is a ridiculous oversimplification of what's going on with generative adversarial networks, but you get the idea. This form of machine learning starts to feel kind of creepy to some of us. Like, the ability of a machine to learn to do something better seems to be a very human quality, something that makes us special.
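For the code-curious, here is roughly what that adversarial loop looks like as a minimal PyTorch sketch. Everything about it, the one-dimensional stand-in "images," the layer sizes, the fake training-data distribution, is an arbitrary toy choice; the two alternating updates are the part that matters.

```python
import torch
import torch.nn as nn

# Toy GAN where 1-D vectors stand in for cat pictures. All sizes, layers, and
# the "real data" distribution are arbitrary choices made for illustration.
LATENT, DATA, BATCH = 16, 64, 32

generator = nn.Sequential(            # turns random noise into a fake "image"
    nn.Linear(LATENT, 128), nn.ReLU(),
    nn.Linear(128, DATA), nn.Tanh())

discriminator = nn.Sequential(        # outputs the probability input is real
    nn.Linear(DATA, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid())

bce = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def real_batch() -> torch.Tensor:
    # Stand-in for "thousands of pictures of cats": samples from one fixed
    # distribution that the generator has to learn to imitate.
    return (torch.randn(BATCH, DATA) * 0.3 + 0.5).clamp(-1, 1)

for step in range(5000):
    real = real_batch()
    fake = generator(torch.randn(BATCH, LATENT))
    ones, zeros = torch.ones(BATCH, 1), torch.zeros(BATCH, 1)

    # Discriminator's turn: get better at flagging generated images as fake.
    d_loss = bce(discriminator(real), ones) + bce(discriminator(fake.detach()), zeros)
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator's turn: get better at fooling the discriminator.
    g_loss = bce(discriminator(fake), ones)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    # At equilibrium, the discriminator is right only about half the time.
```

Which brings us back to the creepy part.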
But if we can give machines that capability, well, then how are we special? Or are we special at all? That's something I'm going to tackle as soon as we come back from this next break.

Okay, we're back. Now, I would argue that we are special. Before the break, I was asking: can we be special if machines are capable of learning? I think we are, and that we're able to do stuff that machines, as of right now, either cannot do, or can do but don't do very well and can only attempt after a ludicrous amount of time. For example, let's talk about opening doors. Several years ago, I was at South by Southwest, and I attended a panel about robotics and artificial intelligence and human-computer interaction. In that panel, Leila Takayama, a cognitive and social scientist, talked about working in the field of human-computer interaction, and she mentioned how she was once in an office where a robot was in the middle of a hallway, sitting motionless. It was just facing a door. What Takayama didn't know is that the robot was processing how to open that door, staring at the door and trying to figure out how to open it for days on end. This was taking a lot of time, obviously. Now, when you think about doors, you realize there can be quite a few options, right? Maybe you need to pull on a handle to open the door. Maybe you need to push on the door. Maybe there's a doorknob that you first have to turn before you pull or push. Maybe there's a crash bar, also known as a panic bar; those are the horizontal bars on exit doors that you push on to open. Frequently they're seen in doors that open to an exterior location, like inside schools and stuff; you push on them to get out. Maybe it's a revolving door, which adds in a whole new level of complexity. But you get my point: there are a lot of different kinds of doors. Now, we humans pick up on how doors work pretty darn quickly.
I mean, sure, we might be like that one kid in the Far Side cartoon who's going into the School for the Gifted and is pushing as hard as he can on a door that is labeled "pull." That could be us sometimes, but we figure it out, right? We do a quick push, we realize, oh, it's not opening, we pull. For robots, it's more challenging. They are not good at extrapolating from past experience, at least not in every field. We humans can apply our knowledge from earlier encounters and build on it; even if the thing we're facing is mostly new to us, we might recognize elements that give us a hint on how to proceed. Robots and AI aren't really good at doing that. They're also not good at associative thinking, which is where we start to draw connections between different ideas to come up with something new. It's a really important step in the creative process. I find myself free-associating whenever I'm not actively thinking about something. So if I'm doing a mundane task, like if I'm washing dishes or I'm mowing the lawn, my brain is going nuts free-associating ideas and creating new ones. Machines are not very good at that, for now anyway. They're not bad at mimicking it, but they can't actually do it.

So, getting back to Leila Takayama. One of the really fascinating bits about that panel I went to was a discussion of social cues that robots could have in order to alert us humans in that same space to what the robot was up to. This was not for the robot's benefit, but for our benefit. The whole point is that these cues would give us an idea of what was going on with the robots, so that we don't accidentally, you know, interrupt them. So, you know, it might be like the robot's in that hallway and it's looking at a door, and you're wondering, why has this robot shut down in the hallway?
But then maybe the robot reaches up to kind of scratch its head, in sort of a "huh, what's going on?" kind of gesture, and that might tell you, oh, the robot is actively analyzing something. I don't know exactly what it is, but it's clearly working. So maybe I'll step around behind the robot and not interrupt its view of the door it's staring at. The whole point is that social cues can help us interact more naturally with robots and coexist with them within human spaces, so that both the humans and the robots can operate well with one another. Also, it helps to explain what the robot is doing, because if you don't have that, the robots end up being mysterious, right? We can't see into them, we don't understand what they are currently trying to do, and mystery can breed distrust.

That leads to yet another concept in AI that gets to this ghost-in-the-machine idea, which is the black box. In this context, a black box refers to any system where it is difficult or impossible to see how the system works internally. Therefore, there is no way of knowing how the system is actually producing any given output, or even whether that output is the best it could do. With a black-box system, you feed input into the system and you get output out of it, but you don't know what was happening in the middle. You don't know what the system did to turn the input into output. Maybe there's a sophisticated computer in the middle of that system that's doing all the processing. Maybe there's a person who doesn't understand Chinese stuck in there. Maybe there's a magical fairy that waves a wand and produces the result. The problem is, we don't know. And by not knowing, you cannot be certain that the output you're getting is actually the best, or even relevant, or the most likely to be correct based upon the input you fed it. So you start making decisions based on this output.
But because you're not sure that the output is actually good, you therefore can't be sure that the decisions you're making are the best, and that leads to really difficult problems. So let's take a theoretical example. Let's say we've built a complex computer model that's designed to project the effects of climate change. And let's say this model is so complex and so recursive on itself that it effectively becomes impossible for us to know whether or not the model is actually working properly. Well, that would mean we wouldn't really be able to rely on any predictions or projections made by this model. I mean, maybe the projections are accurate, but maybe they're not. The issue is there's no way for us to be certain, and yet we have a need to act. Climate change is a thing, and we need to make changes to reduce its impact or to mitigate it. It's possible that any decisions we make based upon the output of the system will exacerbate the problem, or maybe they'll just be, you know, less effective than alternative decisions would be. Further, we're getting closer to that Arthur C. Clarke statement about sufficiently advanced technologies being indistinguishable from magic. If we produce systems that are so complicated that it's impossible for us to understand them fully, we might begin to view those technologies as being magical, or at the very least greater than the sum of their parts, and this can lead to some illogical decisions.

This kind of brings me to the church of AI, called Way of the Future, which was founded and then later dissolved by Anthony Levandowski. You may have heard Levandowski's name if you followed the drama of his departure from Google and his eventual employment at, and subsequent termination from, Uber. And then there was also the fact that he was sentenced to go to prison for stealing company secrets and then later received a presidential pardon from Donald Trump. So, a quick recap on Levandowski.
Levandowski worked 595 00:38:02,480 --> 00:38:06,759 Speaker 1: within Google's autonomous vehicle division, which would eventually become a 596 00:38:06,800 --> 00:38:11,439 Speaker 1: full subsidiary of Google's parent company, Alphabet, and that subsidiary 597 00:38:11,520 --> 00:38:15,760 Speaker 1: is called Waymo. So when Levandowski left Google, he brought 598 00:38:15,800 --> 00:38:19,600 Speaker 1: with him a whole lot of data, data that Google 599 00:38:19,680 --> 00:38:23,720 Speaker 1: claimed belonged to the company and was proprietary in nature 600 00:38:23,760 --> 00:38:28,799 Speaker 1: and thus constituted company secrets. Levandowski eventually began working with 601 00:38:28,920 --> 00:38:33,000 Speaker 1: Uber in that company's own driverless vehicle initiative, but the 602 00:38:33,040 --> 00:38:37,840 Speaker 1: Google slash Waymo investigation would lead to Uber hastily firing 603 00:38:37,920 --> 00:38:42,040 Speaker 1: Levandowski in sort of an attempt to kind of disentangle 604 00:38:42,200 --> 00:38:46,400 Speaker 1: Uber from this matter, which only worked a little bit anyway. 605 00:38:46,440 --> 00:38:50,320 Speaker 1: In the midst of all this Waymo slash Uber drama, 606 00:38:50,800 --> 00:38:54,960 Speaker 1: in two thousand seventeen, Wired ran an article that explained 607 00:38:54,960 --> 00:38:58,760 Speaker 1: that this same Anthony Levandowski had formed a church called 608 00:38:58,880 --> 00:39:01,680 Speaker 1: Way of the Future a couple of years earlier. In 609 00:39:01,760 --> 00:39:05,280 Speaker 1: twenty fifteen, he placed himself at the head of this church 610 00:39:05,600 --> 00:39:08,600 Speaker 1: with the title of Dean, and he also became the 611 00:39:08,719 --> 00:39:13,520 Speaker 1: CEO of the nonprofit organization designed to run the church. 612 00:39:14,440 --> 00:39:19,440 Speaker 1: The aim of the church was to see quote the realization, acceptance, 613 00:39:19,480 --> 00:39:23,840 Speaker 1: and worship of a Godhead based on artificial intelligence AI 614 00:39:24,640 --> 00:39:28,480 Speaker 1: developed through computer hardware and software end quote. This is 615 00:39:28,520 --> 00:39:31,319 Speaker 1: according to the founding documents that were filed with the 616 00:39:31,400 --> 00:39:36,120 Speaker 1: US Internal Revenue Service or IRS. Further, Levandowski 617 00:39:36,200 --> 00:39:39,000 Speaker 1: planned to start seminars based on this very idea later 618 00:39:39,040 --> 00:39:44,920 Speaker 1: in twenty seventeen. By twenty twenty, Levandowski's jump from Google to Uber had escalated 619 00:39:45,040 --> 00:39:50,120 Speaker 1: into a prison sentence of eighteen months, uh, and it 620 00:39:50,280 --> 00:39:54,000 Speaker 1: was because he had pleaded guilty to stealing trade secrets. 621 00:39:54,400 --> 00:39:58,640 Speaker 1: Trump would pardon Levandowski in January twenty twenty one, kind of, you know, 622 00:39:58,920 --> 00:40:02,280 Speaker 1: after the insurrection on January six, but before Trump would 623 00:40:02,360 --> 00:40:06,239 Speaker 1: leave office in late January. As for the Way of 624 00:40:06,280 --> 00:40:09,520 Speaker 1: the Future, Levandowski actually began to shut that down in 625 00:40:09,680 --> 00:40:13,520 Speaker 1: June of twenty twenty, and it was dissolved by the end of that year, 626 00:40:14,360 --> 00:40:18,759 Speaker 1: but not reported on until, like, February of twenty twenty one. He directed the 627 00:40:18,760 --> 00:40:23,200 Speaker 1: assets of the church, some hundred and seventy five thousand dollars, to be donated to 628 00:40:23,239 --> 00:40:27,799 Speaker 1: the NAACP.
Levandowski has said that the 629 00:40:27,920 --> 00:40:32,799 Speaker 1: beliefs behind the church are ones that he still adheres to, 630 00:40:33,600 --> 00:40:39,600 Speaker 1: that AI has the potential to tackle very challenging problems 631 00:40:39,640 --> 00:40:43,560 Speaker 1: like taking care of the planet, which, Levandowski says, we 632 00:40:43,600 --> 00:40:47,520 Speaker 1: humans are obviously incapable of doing. You know, we 633 00:40:47,560 --> 00:40:51,440 Speaker 1: would put on this system the job of taking care of things that 634 00:40:52,000 --> 00:40:54,640 Speaker 1: we understand to be important but that we seem to be, 635 00:40:55,360 --> 00:40:59,520 Speaker 1: uh, incapable of handling ourselves, almost like we're children. 636 00:41:00,040 --> 00:41:05,000 Speaker 1: Thus looking at AI like a godhead. So we should 637 00:41:05,000 --> 00:41:09,440 Speaker 1: seek out solutions with AI rather than locking AI away 638 00:41:09,520 --> 00:41:13,880 Speaker 1: and saying, oh, we can't push AI's development further 639 00:41:14,000 --> 00:41:17,840 Speaker 1: in these directions because of the potential existential dangers that 640 00:41:17,960 --> 00:41:23,480 Speaker 1: could emerge from AI becoming superintelligent. I don't think 641 00:41:23,480 --> 00:41:25,880 Speaker 1: there are actually that many folks who are trying to 642 00:41:26,000 --> 00:41:28,759 Speaker 1: lock AI away at all. Mostly I see tons of 643 00:41:28,760 --> 00:41:33,120 Speaker 1: efforts to improve aspects of AI from a million different angles. 644 00:41:33,480 --> 00:41:39,440 Speaker 1: I think most serious AI researchers and scientists aren't really 645 00:41:39,480 --> 00:41:42,759 Speaker 1: focused on strong AI at all. They're looking at very 646 00:41:42,800 --> 00:41:48,239 Speaker 1: particular applications of artificial intelligence, very particular implementations of it. 647 00:41:48,640 --> 00:41:52,239 Speaker 1: But not like a strong AI that acts like Deep 648 00:41:52,360 --> 00:41:58,120 Speaker 1: Thought from the Hitchhiker's Guide to the Galaxy. Anyway, maybe 649 00:41:58,239 --> 00:42:02,440 Speaker 1: Levandowski's vision will eventually lead us not to a ghost 650 00:42:02,680 --> 00:42:07,160 Speaker 1: in the machine, but a literal deus ex machina, which 651 00:42:07,480 --> 00:42:11,239 Speaker 1: means god out of the machine. That seems to be 652 00:42:11,239 --> 00:42:15,759 Speaker 1: how Levandowski views the potential of AI, that our 653 00:42:15,840 --> 00:42:21,920 Speaker 1: unsolvable problems are almost magically fixed thanks to this robotic 654 00:42:22,160 --> 00:42:27,000 Speaker 1: or computational savior, you know. In fiction, deus ex 655 00:42:27,040 --> 00:42:30,359 Speaker 1: machina is often seen as a cop out. Right, You've 656 00:42:30,400 --> 00:42:34,920 Speaker 1: got your characters in some sort of ironclad disastrous situation, 657 00:42:35,000 --> 00:42:39,360 Speaker 1: there's no escape for them, and then, in order to 658 00:42:39,400 --> 00:42:43,440 Speaker 1: get that happy ending, you have some unlikely savior or 659 00:42:43,560 --> 00:42:48,520 Speaker 1: unlikely event happen and everyone gets saved. And it might 660 00:42:48,560 --> 00:42:51,399 Speaker 1: be satisfying because you've got the happy ending, but upon 661 00:42:51,480 --> 00:42:54,200 Speaker 1: critical reflection, you think, well, that doesn't really make sense.
662 00:42:54,640 --> 00:42:56,200 Speaker 1: There are a lot of stories that get a lot 663 00:42:56,200 --> 00:42:59,560 Speaker 1: of flak for using deus ex machina. Uh, the 664 00:42:59,840 --> 00:43:03,000 Speaker 1: image I always have is from classical theater, where 665 00:43:03,120 --> 00:43:06,680 Speaker 1: you've got all the mortal characters in a terrible situation, 666 00:43:07,400 --> 00:43:11,360 Speaker 1: and then an actor standing in as a god 667 00:43:11,600 --> 00:43:14,680 Speaker 1: is literally lowered from the top of the stage on 668 00:43:14,800 --> 00:43:19,000 Speaker 1: pulleys to descend to the mortal realm and fix everything 669 00:43:19,000 --> 00:43:21,520 Speaker 1: so that you can have a comedy play with a 670 00:43:21,520 --> 00:43:25,399 Speaker 1: happy ending. For Levandowski, it's really about turning the ghost 671 00:43:25,440 --> 00:43:28,520 Speaker 1: in the machine into a god. I'm not so sure 672 00:43:28,520 --> 00:43:32,880 Speaker 1: about that myself. I don't know if that's a realistic vision. 673 00:43:33,640 --> 00:43:36,400 Speaker 1: I can see the appeal of it, because we do 674 00:43:36,480 --> 00:43:39,080 Speaker 1: have these very difficult problems that we need to solve, 675 00:43:39,880 --> 00:43:42,799 Speaker 1: and we have had very little progress on many of 676 00:43:42,800 --> 00:43:45,839 Speaker 1: those problems for multiple reasons, not just a lack of information, 677 00:43:46,280 --> 00:43:50,600 Speaker 1: but a lack of motivation or conflicting motivations, where we 678 00:43:50,640 --> 00:43:54,600 Speaker 1: have other needs that have to be met that conflict 679 00:43:54,680 --> 00:43:59,120 Speaker 1: with the solving of a tough problem like climate change. 680 00:43:59,160 --> 00:44:01,080 Speaker 1: Right, we have energy needs that need to be met. 681 00:44:01,120 --> 00:44:04,560 Speaker 1: There are places in the developing world that would be 682 00:44:04,600 --> 00:44:09,920 Speaker 1: disproportionately affected by massive policies that were meant to mitigate 683 00:44:09,920 --> 00:44:13,280 Speaker 1: climate change, and it's tough to address that. Right. There 684 00:44:13,120 --> 00:44:17,120 Speaker 1: are these real reasons why it's a complicated issue beyond 685 00:44:17,239 --> 00:44:22,680 Speaker 1: just it's hard to understand. So I see the appeal 686 00:44:22,800 --> 00:44:25,640 Speaker 1: of it, but it also kind of feels like a 687 00:44:25,640 --> 00:44:28,000 Speaker 1: cop out to me, like this idea of we'll engineer 688 00:44:28,040 --> 00:44:30,640 Speaker 1: our way out of this problem, because that just puts 689 00:44:30,840 --> 00:44:34,960 Speaker 1: off doing anything about the problem until future you can 690 00:44:34,960 --> 00:44:37,399 Speaker 1: get around to it. I don't know about any of you, 691 00:44:37,840 --> 00:44:42,200 Speaker 1: but I am very much guilty of that idea. You 692 00:44:42,239 --> 00:44:44,560 Speaker 1: know what, Future Jonathan will take care of this. 693 00:44:45,040 --> 00:44:46,960 Speaker 1: Jonathan right now has to focus on these other things. 694 00:44:46,960 --> 00:44:49,319 Speaker 1: Future Jonathan will handle it.
Future Jonathan, by the way, 695 00:44:49,600 --> 00:44:52,960 Speaker 1: hates Jonathan of right now and really hates Jonathan in 696 00:44:53,040 --> 00:44:57,680 Speaker 1: the past, because it's just putting things off until it 697 00:44:57,719 --> 00:44:59,640 Speaker 1: gets to a point where you can't do it anymore, 698 00:44:59,719 --> 00:45:01,319 Speaker 1: and by then it might be too late. So that's 699 00:45:01,320 --> 00:45:03,880 Speaker 1: what I worry about with this particular approach, this idea 700 00:45:03,920 --> 00:45:06,600 Speaker 1: of we'll figure it out, we'll science our way out, 701 00:45:06,600 --> 00:45:10,480 Speaker 1: we'll engineer our way out, because it's projecting 702 00:45:10,520 --> 00:45:12,880 Speaker 1: all that into the future and not doing anything in 703 00:45:12,920 --> 00:45:17,560 Speaker 1: the present. Anyway, that's the episode on the ghost in the machine. 704 00:45:17,600 --> 00:45:21,040 Speaker 1: There are other, uh, interpretations as well. There are some 705 00:45:21,120 --> 00:45:24,279 Speaker 1: great ones in fiction where sometimes you actually have a 706 00:45:24,320 --> 00:45:27,640 Speaker 1: literal ghost in the machine, like there's a haunted machine. 707 00:45:28,680 --> 00:45:32,840 Speaker 1: But maybe I'll wait and tackle that in a future, 708 00:45:32,920 --> 00:45:36,839 Speaker 1: more entertainment-focused episode, where it's not so much 709 00:45:36,840 --> 00:45:41,640 Speaker 1: about the technology but more a critique of the 710 00:45:41,800 --> 00:45:44,719 Speaker 1: entertainment itself, because there's only so much you can say 711 00:45:44,719 --> 00:45:50,040 Speaker 1: about, you know, I don't know, a ghost calculator. That's 712 00:45:50,080 --> 00:45:53,160 Speaker 1: it for this episode. If you have suggestions for future 713 00:45:53,280 --> 00:45:55,960 Speaker 1: episode topics or anything else that you would like to 714 00:45:55,960 --> 00:45:57,600 Speaker 1: communicate to me, there are a couple of ways you can 715 00:45:57,600 --> 00:45:59,839 Speaker 1: do that. One is you can download the I Heart 716 00:46:00,040 --> 00:46:03,680 Speaker 1: Radio app and navigate over to Tech Stuff. Just put 717 00:46:03,680 --> 00:46:06,359 Speaker 1: Tech Stuff in the little search field and you'll see 718 00:46:06,440 --> 00:46:08,560 Speaker 1: that it will pop up. You go to the Tech 719 00:46:08,600 --> 00:46:11,880 Speaker 1: Stuff page and there's a little microphone icon. If you 720 00:46:11,920 --> 00:46:15,080 Speaker 1: click on that, you can leave a voice message up 721 00:46:15,120 --> 00:46:17,120 Speaker 1: to thirty seconds in length and let me know what 722 00:46:17,160 --> 00:46:19,080 Speaker 1: you would like to hear in future episodes. And if 723 00:46:19,120 --> 00:46:21,120 Speaker 1: you like, you can even tell me if I can 724 00:46:21,280 --> 00:46:25,000 Speaker 1: use your voice message in a future episode. Just let 725 00:46:25,040 --> 00:46:27,200 Speaker 1: me know. I'm all about opt in; I'm not gonna 726 00:46:27,200 --> 00:46:30,120 Speaker 1: do it automatically. Or, if you prefer, you can reach 727 00:46:30,160 --> 00:46:33,040 Speaker 1: out on Twitter. The handle for the show is Tech 728 00:46:33,200 --> 00:46:36,600 Speaker 1: Stuff HSW, and I'll talk to you again 729 00:46:37,560 --> 00:46:46,800 Speaker 1: really soon. Tech Stuff is an I Heart Radio production.
730 00:46:47,040 --> 00:46:49,880 Speaker 1: For more podcasts from I Heart Radio, visit the I 731 00:46:49,960 --> 00:46:53,200 Speaker 1: Heart Radio app, Apple Podcasts, or wherever you listen to 732 00:46:53,239 --> 00:46:54,160 Speaker 1: your favorite shows.