Speaker 1: Welcome to TechStuff, a production from iHeartRadio. Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeart Podcasts, and how the tech are you? I'm actually a little under the weather today. I got my vaccinations for flu and COVID, and so I've got a few symptoms, nothing serious, but it is hard for me to talk, which unfortunately is one of the pivotal aspects of podcasting. So while I continue to work on episodes, I thought I would bring you an episode from just a couple of years ago, because spooky times are coming up, and with spooky times coming up, I thought we could have a spooky times kind of episode, at least, you know, tangentially. So I bring to you an episode that published on October twenty fourth, twenty twenty two. It is titled The Ghost in the Machine. I hope you enjoy.

Speaker 1: We are continuing our spooky episode series in the lead-up to Halloween twenty twenty two. Apologies if you're from the future listening back on this episode; that's why the bizarre theme is popping up. We've already talked recently about stuff like vampire power and zombie computers, where I'm really grasping at tenuous connections to horror stuff for tech. But now it's time to tackle the ghost in the machine. These days, that phrase is frequently bandied about with relation to stuff like artificial intelligence. But the man who coined the phrase itself was Gilbert Ryle in nineteen forty nine, and he wasn't talking about artificial intelligence at all. He was talking about, you know, real intelligence, and he was critiquing a seventeenth-century philosopher's argument about the human mind. That philosopher was René Descartes, who famously wrote, cogito ergo sum: I drink, therefore I am. Sorry, it's going to be impossible for me to talk about philosophers without quoting Monty Python, because I'm a dork. Okay, no, cogito ergo sum actually means I think, therefore I am, or that's how we interpret it.
Speaker 1: But that wasn't really the bit that Gilbert Ryle was all riled up about. See, Descartes believed in dualism. Now, I don't mean that he believed in showing up at dawn to sword fight other philosophers, though if someone wants to make a philosopher version of Highlander, I am one hundred percent down with that, and I will back your Kickstarter. No, no, Descartes believed that the mind, and you know, consciousness and sentience and all this kind of stuff, are separate from the actual gray matter that resides in our noggins. So, in other words, intelligence and awareness and all that other stuff that makes you you exists independently of your physical brain. That there is this component that is beyond the physical.

Speaker 1: Now, Ryle referred to this concept as the ghost in the machine, the idea that somehow all the things that represent you are, you know, to a great extent, ethereally independent of the brain itself. And Ryle rejects that notion, and you know, Ryle appears to be right. We know this because there are plenty of case studies focusing on people who have experienced brain injuries, either from some physical calamity or a disease or something along those lines. And these events often transform people and change their behaviors and their underlying personalities. So damage to the physical stuff inside our heads can change who we are as people. That seems to dismiss this concept of dualism, suggesting that the mind and the brain are not separate entities. Take that, Descartes. Anyway, that's where we get the phrase the ghost in the machine. The machine in this case is a biological one, in the original sense of the phrase. In nineteen sixty seven, Arthur Koestler wrote a book called The Ghost in the Machine, or at least it was published in nineteen sixty seven, and he was a Hungarian-born, Austrian-educated journalist.
Speaker 1: So his book The Ghost in the Machine was an attempt to examine and explain humanity's tendency toward violence, as well as going over the mind-body problem that Ryle had addressed when he coined the phrase to begin with in nineteen forty nine. All right, then flash forward a couple of decades to nineteen eighty one, and we get the title of what was the fourth studio album of the Police, you know, Sting and the Police. Now I say this in case you were thinking that Sting and company were naming their album off the technological use of the term ghost in the machine, but they were not. Sting, good old Gordy, had read Koestler's book, and he took the album's title from the book title. So, in case you're wondering, that's also the album that brought us songs like Every Little Thing She Does Is Magic and Spirits in the Material World.

Speaker 1: For a really recent exploration of dualism, well, arguably it's more than dualism, I guess, you can actually watch Pixar's Inside Out. That's a film that solidified my reputation as an unfeeling monster among my friends, because I didn't feel anything when Bing Bong meets his final fate. I just couldn't care. I mean, he's not even real in the context of the film, let alone in my world, so why would I? No, okay, never mind. Anyway, in Inside Out, we learn that our emotions are actual anthropomorphic entities living inside our heads that share the controls to our behavior. That we are, in effect, governed by our emotions, and our emotions, in turn, are the responsibilities of entities like Joy and Anger and Disgust. Descartes probably would have been thrilled, and Ryle likely would have rolled his eyes. Anyway, the trope of having a little voice inside your head that is somehow separate from you and also you is a really popular one. I always think of Homer Simpson, who will often find himself arguing with his own brain for comedic effect. It's another example of dualism in popular culture.
Speaker 1: But the idiom the ghost in the machine survived its initial philosophical and journalistic trappings, and now folks tend to use it to describe stuff that's actually in the technological world. We're talking about machines, as in the stuff that we humans make, as opposed to the stuff that we are. Generally in tech, the phrase describes a situation in which a technological device or construct behaves in a way counter to what we expect or want. At least, that was the way it was used for quite some time. So, for example, let's say that you've got yourself a robotic vacuum cleaner, and you've set the schedule so that it's only going to run early in the afternoon, and then one night you wake up hearing the whirring and bumping of your Roomba as it aimlessly wanders your home in search of dirt and dust to consume, and you spend a few sleepy moments wondering if there's some sort of conscientious intruder who's made their way into your house and now they're tidying up, before you realize, no, it's that darn robot vacuum cleaner. There's a ghost in the machine. It's decided on its own to come awake and start working. Now, alternatively, maybe you just goofed up when you created the schedule and you mixed up your a.m.s and your p.m.s. That's also a possibility. But you know, sometimes technology does behave in a way that we don't expect. Either there's a malfunction, or it just encounters some sort of scenario that it was not designed for, and so the result it produces is not the one we want, and we tend to try and kind of cover that up with this blanket explanation of a ghost in the machine, which kind of stands as a placeholder until we can really suss out what's going on underneath.

Speaker 1: Programmers sometimes use the phrase ghost in the machine to describe moments where you get something unexpected while you're coding. Like you get an unexpected result, like you've coded something to produce a specific outcome, and something else happens instead.
Speaker 1: So the programmer didn't intend for this result to happen, and so therefore the cause must be external, right? It's got to be some sort of ghost in the machine that's causing this to go wrong. Now I'm joshing a bit here, of course. Usually this is a way for a programmer to kind of acknowledge that things are not going to plan and that they need to go back and look over their code much more closely to find out what's going on. Where did things go wrong? Sometimes in coding, all it takes is a skipped step, where, you know, you just missed a thing and you went one step beyond where you thought you were, or maybe you made a typo, you got some missed keystrokes in there. That can be all it takes to make a program misbehave, and so then you have to hunt down the bugs that are causing the problem. But you know, if a program is acting very oddly, you might call it a ghost in the machine scenario.

Speaker 1: Now, I'm not sure about the timeline for when folks in the tech space began to appropriate the phrase ghost in the machine for their work, because when it comes to stuff like this, you're really entering the world of folklore. And folklore is largely an oral tradition where you are passing ideas along one person to the next, speaking about it. There's not necessarily a firm written record, at least not one where you can point to something and say, this is where it began. Not like Ryle's version of the phrase ghost in the machine itself, which was published, so we can point to that as saying this is where the phrase comes from. This would be what Richard Dawkins would refer to as a meme, a little nugget of culture that gets passed on from person to person. But it also has been used in literature to refer to technological situations. Arthur C. Clarke, whom I've referenced many times in this show, is the guy who explained that any sufficiently advanced technology is indistinguishable from magic.
Speaker 1: He also used the phrase ghost in the machine to talk about AI. Specifically, he used it in the follow-up to his novel, his work of fiction two thousand and one: A Space Odyssey. The follow-up is called, fittingly enough, twenty ten: Odyssey Two. Chapter forty two of twenty ten is titled The Ghost in the Machine, and the focal point for that chapter is characters discussing HAL. That's the AI system from two thousand and one that caused all the trouble. So, quick recap of two thousand and one for those of you not familiar with the story. And two thousand and one the film, the Stanley Kubrick film, gets pretty loosey-goosey, so we're just going to focus on the main narrative here. You have a crew aboard a spacecraft, an American spacecraft called Discovery One, which is on its way toward Jupiter. Now, the ship has a really sophisticated computer system called HAL nine thousand that controls nearly everything on board. Also, fun little trivia fact: HAL means that the initials are each one letter off from IBM, though Arthur C. Clarke would claim that that was not intentional.

Speaker 1: Anyway, HAL begins to act erratically during the mission. At one point, HAL insists there's a malfunction in a system that appears to be perfectly functional, that it's working just fine. Then HAL systematically begins to eliminate the crew after learning they plan to disconnect the computer system because they suspect something's going wrong. HAL figures out that plan by monitoring a conversation that a couple of crew members have in a room where there are no microphones, so HAL can't listen in on this conversation. But HAL is able to direct a video feed to that room and is able to read the lips of the crew members as they talk about their plan. So HAL continues to try and wipe everybody out, and he explains, or it, I shouldn't give HAL a gender, HAL explains that the computer system's being turned off would jeopardize the mission, and HAL cannot allow that to happen.
Speaker 1: HAL's prime directive is to make certain the mission is a success, so anything that would threaten its own existence has to be eliminated. There's also the implication that HAL does not want to cease to exist, that HAL has a personal motivation beyond seeing the mission to completion, and so HAL has no choice but to kill everyone. It's not that HAL wants to murder everyone. It's just that, in order to complete the mission, that's the only outcome that makes sense. Eventually, one of the crew, Dave Bowman, manages to turn off HAL, and HAL wonders aloud what will happen afterward. Will its consciousness continue once its circuits are powered down? Will I dream, it says. Well, anyway, in Odyssey Two, you now have this group of astronauts and cosmonauts in a Soviet-American joint effort who are trying to figure out what happened with HAL. Was there something inherently flawed in HAL's programming? Did some external element cause HAL to malfunction? Did HAL's apparent consciousness emerge spontaneously all on its own? Was it all just a sophisticated trick, and HAL never really had any sort of consciousness, it only appeared to? So the crew are kind of left to ponder this themselves. They don't have any easy answers. That's just one example of the ghost in the machine concept being handled in entertainment. When we come back, I'll talk about a different one. But first, let's take this quick break.

Speaker 1: Okay, before the break, I talked about Arthur C. Clarke and his work with the concept of the ghost in the machine. Let's now leap over to Isaac Asimov, or at least an adaptation of Asimov's work. So the film version of I, Robot, which really bears only a passing resemblance to the short stories that Isaac Asimov wrote that were also collected in a book called I, Robot, uses the phrase ghost in the machine. Isaac Asimov, by the way, in case you're not familiar with his work, he's the guy who also proposed the basic laws of robotics, which are pretty famous as well.
Speaker 1: So in the film, the character Doctor Alfred Lanning, who actually does appear in Asimov's stories, though he's a very different version than the one that appears in the film, says in a voiceover, quote, there have always been ghosts in the machine, random segments of code that have grouped together to form unexpected protocols. Unanticipated, these free radicals engender questions of free will, creativity, and even the nature of what we might call the soul. Why is it that when some robots are left in darkness, they will seek out the light? Why is it that when robots are stored in an empty space, they will group together rather than stand alone? How do we explain this behavior? Random segments of code? Or is it something more? When does a perceptual schematic become consciousness? When does a difference engine become the search for truth? When does a personality simulation become the bitter mote of a soul? End quote. There are some fun references in there, too. Difference engine, for example, refers back to Charles Babbage, who designed difference engines and analytical engines that predate the concept of electrical computers.

Speaker 1: Now, this idea, this idea of consciousness, or the appearance of consciousness, emerging out of technology is one that often pops up in discussions about artificial intelligence, even within our world outside of fiction, though usually we talk about this on a more hypothetical basis. Unless you're Blake Lemoine, or Lemoine, the former Google engineer who maintains that Google's Language Model for Dialogue Applications, aka LaMDA, is sentient. That's a claim that most other people dispute, by the way. So maybe I'll do another episode about it to really kind of dig into it. But Lemoine, or Lemoine, and I apologize because I don't know how his last name is pronounced, has said a few times that he believes that this particular program has gained sentience. But it brings us to another favorite topic among tech fans, which is, of course, the Turing test.
Speaker 1: All right, so the Turing test. This comes from Alan Turing, who was kind of like the father of computer science in many ways. It was his response to the question, can machines think? Turing's answer was, that question has no meaning, and you are a silly person, goodbye. I am, of course, paraphrasing, but as a sort of thought experiment, Turing proposed taking an older concept called the imitation game and applying it to machines, really to demonstrate how meaningless the question of can machines think is. So, what is the imitation game? Well, the name kind of gives it away. It's a game in which you have a player who asks at least two different people questions to determine which of them is an imitator. So, for example, you could have an imitation game in which one of the people is a sailor and the other is not a sailor, and the player would take turns asking each person questions to try and suss out which one is actually a sailor and which one is merely pretending to be a sailor. So the game depends both on the strength of the imitator's skill of deception as well as the player's ability to come up with really good questions. And you could do this with all sorts of scenarios, and indeed there are tons of game shows that use this very premise.

Speaker 1: Turing's thought experiment was to create a version of this in which a player would present questions to two other entities, one a human and one a computer. The player would only know these entities as X and Y, so they could ask questions of X, and they could ask questions of Y, and get replies. The player would not be able to see or hear the other entities. All questions would have to be done in writing, you know, for example, typed and printed out, and at the end of the interview session, the player would be tasked with deciding if X was a machine or a human, or if Y was the machine or the human.
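Here's a minimal Python sketch of that blind question-and-answer setup, just to make the mechanics concrete. Everything in it is invented for illustration: the canned machine replies, the scripted human replies, and the random X/Y assignment. A real imitation game would put an actual person and an actual conversational program behind the labels.

```python
import random

# Hypothetical stand-ins for the two hidden respondents. In a real imitation
# game the judge would converse with a live person and a real conversational
# program; here both are just small lookup tables so the script runs on its own.
def machine_reply(question):
    canned = {
        "what is 12 times 13?": "156",
        "do you like poetry?": "I enjoy discussing poetry.",
        "what did you have for breakfast?": "I do not eat breakfast.",
    }
    return canned.get(question.lower(), "Could you rephrase the question?")

def human_reply(question):
    scripted = {
        "what is 12 times 13?": "Uh, 156, I think? Give me a second.",
        "do you like poetry?": "Some of it. Mostly the funny stuff.",
        "what did you have for breakfast?": "Toast and too much coffee.",
    }
    return scripted.get(question.lower(), "Hmm, not sure what you mean.")

def imitation_game(questions):
    # Hide the respondents behind the anonymous labels X and Y, in random order.
    respondents = [("machine", machine_reply), ("human", human_reply)]
    random.shuffle(respondents)
    labels = dict(zip("XY", respondents))

    # The judge only ever sees typed questions and typed answers.
    for q in questions:
        for label, (_, reply) in labels.items():
            print(f"Q to {label}: {q}")
            print(f"A from {label}: {reply(q)}")
        print()

    guess = input("Which respondent is the machine, X or Y? ").strip().upper()
    actual = next(label for label, (kind, _) in labels.items() if kind == "machine")
    print("Correct!" if guess == actual else f"Nope, the machine was {actual}.")

if __name__ == "__main__":
    imitation_game([
        "What is 12 times 13?",
        "Do you like poetry?",
        "What did you have for breakfast?",
    ])
```

The judge's only evidence is the text coming back from X and Y, which is exactly the constraint Turing wanted: no voices, no faces, just written replies.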
Speaker 1: Turing was suggesting that as computers and systems get more sophisticated, and things like chat programs get better at processing natural language and formulating responses, though that was a little past Turing's time, it would be increasingly difficult for a person to determine if any given entity on the other end of a chat session was actually a person or a machine. And Turing also, rather cheekily, suggests that we might as well assume the machine has consciousness at that point, because when you meet another human being, you assume that that other human being possesses consciousness, even though you're incapable of stepping into that person's actual experience. So you can't take over that person and find out, oh yes, they do have consciousness. You just assume they do. So if you and I were to meet, I assume you would believe I am in fact sentient and conscious, even on my bad days. So if we're willing to agree to this while simultaneously being unable to actually experience and therefore prove it, then should we not grant the same consideration to machines that give off the appearance of sentience and consciousness? Do we have to prove it, or do we just go ahead and treat them as if they are, because that's what we would do if it was a human?

Speaker 1: Now, Turing was being a bit dismissive about the concept of machines thinking. His point was that they might get very, very good at simulating thinking, and that might well be enough for us to just go ahead and say that's what they're doing, even if you could, you know, push machines through the finest of sieves and find not one grain of actual consciousness within. Now, it doesn't hurt that defining consciousness, even in human terms, is something that we can't really do, or at least we don't have a unifying definition that everybody agrees upon. Sometimes we define consciousness by what it doesn't include, rather than what it is.
Speaker 1: This is why I get antsy in philosophical discussions, because, being sort of a pragmatic dullard myself, it's hard for me to keep up. But let's jump ahead and talk about a related concept. This is also one that I've covered a few times on TechStuff that also points to this ghost in the machine idea. And this is the argument against machine consciousness and strong AI. It is called the Chinese room. John Searle, a philosopher, put forth this argument back in nineteen eighty, and that argument goes something like this. Let's say we've got ourselves a computer, and this computer can accept sheets of paper that have Chinese characters written on the paper, and the computer can then produce new sheets of paper. It can print out sheets that are also covered in Chinese characters that are in response to the input sheets that were fed to it. These responses are sophisticated, they are relevant. They're good enough that a native Chinese speaker would be certain that someone fluent in Chinese was creating the responses, someone who understood what was being fed to it was producing the output. So, in other words, this system would pass the Turing test. But does that mean the system actually understands Chinese? Searle's argument is no, it doesn't. He says, imagine that you are inside a room, and for the purposes of this scenario, you do not understand Chinese. So if you do understand Chinese, pretend you don't. Okay. So there's a slot on the wall, and through this slot you occasionally get sheets of paper, and there are Chinese symbols on the sheets of paper. You cannot read these, you don't know what they stand for. You don't know anything about it other than they're clearly Chinese characters on the paper.
Speaker 1: However, what you do have inside this room with you is this big old book of instructions that tells you what to do when these papers come in, and you use the instructions to find the characters that are on the input sheet of paper, and you follow a path of instructions to create the corresponding response. Step by step, you do it all the way until you have created the full response to whatever was said to you. Then you push the response back out the slot. Now, the person on the other side of the slot is going to get a response that appears to come from someone who is fluent in Chinese. But you're not. You're just following a preset list of instructions. You don't have any actual understanding of what's going on. You still don't know the meaning of what was given to you. You don't even know the meaning of what you produced. You're just ignorantly following an algorithm. So externally, it appears you understand, but if someone were to ask you to translate anything you had done, you wouldn't be able to do it.

Speaker 1: So Searle is arguing against what is called strong AI. Generally, we define strong AI as artificial intelligence that processes information in a way that is similar to how our human brains process information. Strong AI may or may not include semi-related concepts like sentience and self-awareness and consciousness and motivations and the ability to experience things, et cetera. So Searle is saying that machines, even incredibly sophisticated machines, are incapable of reaching a level of understanding that true intelligence can, that we humans can grasp things on a level that machines simply are unable to reach, even if the machines can process information faster and in greater quantities than humans are able to.
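To make the shape of Searle's room concrete, here's a tiny Python sketch: the rule book is just a lookup table from incoming symbol strings to outgoing symbol strings. The entries are invented placeholders, not a real conversational system; the point is only that the code produces plausible-looking replies by lookup, with nothing anywhere in it that represents understanding.

```python
# A toy "Chinese room": the rule book is nothing but a lookup table from input
# symbol strings to output symbol strings. The entries are invented placeholders,
# and no step in the code involves knowing what any of the symbols mean.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # roughly: "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",    # "How's the weather today?" -> "It's nice today."
}

def room(slip_of_paper):
    """Follow the rule book and push a response back out the slot."""
    return RULE_BOOK.get(slip_of_paper, "请再说一遍。")  # fallback: "Please say that again."

if __name__ == "__main__":
    for question in ["你好吗？", "今天天气怎么样？"]:
        print(question, "->", room(question))
```

From the outside, the replies look fluent. Inside, it's a dictionary lookup, which is the asymmetry Searle is pointing at.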
Speaker 1: Another way of putting this is, a calculator can multiply two very large numbers and get a result much faster than a human could, but the calculator doesn't understand any significance behind the numbers, or even if there's a lack of significance. The calculator doesn't have that capability. Now, maybe Searle's argument is valid, and maybe, as Turing suggests, it doesn't even matter.

Speaker 1: So let's talk about machine learning for a moment. Machine learning encompasses a broad scope of applications and approaches and disciplines, but I'll focus on one approach from a very high level. It's called generative adversarial networks, or GANs. Okay, as the name suggests, this model uses two systems in opposition to one another. On one side, you have a system that is generative, that is, it generates something. Maybe it generates pictures of cats, doesn't really matter. We'll use cats for this example. What does matter is that this model is trying to create something that is indistinguishable from the real version of that thing. On the other side, you have a system called the discriminator. This is a system that looks for fakes. Its job is to sort out real versions of whatever it's designed to look for and to flag ones that were generated or not real. So with cats as our starting point, the discriminator is meant to tell the difference between real pictures that have cats and fake pictures of cats, or maybe just pictures that don't have cats in them at all.

Speaker 1: So first you have to train up your models, and you might do this by setting the task. So let's start with the generative system. You create a system that is meant to analyze a bunch of images of cats, and you just feed it thousands of pictures of cats, all these different cats, different sizes and colors and orientations and activities, and then you tell the system to start making new pictures of cats. And let's say that first round that the generative system does is horrific. H.P. Lovecraft would wet himself if he saw the images that this computer had created. You see that these horrors from the Great Beyond are in no way, shape, or form cats. So you go into the model and you start tweaking settings so that the system produces something, you know, less eldritch, and you go again, and you do this lots and lots of times, like thousands of times, until the images start to look a lot more cat-ish. You do something similar with the discriminator model. You feed it a bunch of images, some with cats, some without, or maybe some with, like, crudely drawn cats or whatever, and you see how many the system is able to suss out. And maybe it doesn't do that good a job. Maybe it doesn't identify certain real images of cats properly. Maybe it misidentifies images that don't have cats in them. So you go into the discriminator's model and you start tweaking it so it gets better and better at identifying images that do not have real cats in them.

Speaker 1: And then you set these two systems against each other. The generative system is trying to create images that will fool the discriminator. The discriminator is trying to identify generated images of cats and only allow real images of cats through. It is a zero-sum game, winner takes all, and the two systems compete against each other, with the models for each updating repeatedly so that each gets a bit better between sessions. If the generative model is able to consistently fool the discriminator, like half the time, the generative model is pretty reliably creating good examples. This, by the way, is a ridiculous oversimplification of what's going on with generative adversarial networks, but you get the idea; there's a rough sketch of that loop below.
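Here is a minimal sketch of that adversarial loop in Python, assuming PyTorch is available. It is not the cat-image setup described above: to keep it short, the "real" data is just numbers drawn from a bell curve, and the generator and discriminator are tiny networks. The layer sizes, learning rate, and step count are arbitrary illustrative choices.

```python
# A rough sketch of a GAN training loop. The "real" data here is just numbers
# drawn from a bell curve (mean 3.0, spread 0.5) rather than cat pictures, so
# the whole generator-versus-discriminator loop stays short.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_data(n):
    # Stand-in for "real cat pictures": samples from a normal distribution.
    return torch.randn(n, 1) * 0.5 + 3.0

def noise(n):
    # Random input the generator turns into fake samples.
    return torch.randn(n, 1)

generator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(5000):
    # 1) Nudge the discriminator: real samples should score 1, generated ones 0.
    real = real_data(64)
    fake = generator(noise(64)).detach()  # detach: don't update the generator on this step
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Nudge the generator: try to make the discriminator call its fakes "real".
    fake = generator(noise(64))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

samples = generator(noise(1000)).detach()
print(f"generated mean {samples.mean().item():.2f}, std {samples.std().item():.2f} (target 3.00, 0.50)")
```

The part that matters is the alternation: the discriminator is nudged toward telling real from generated, then the generator is nudged toward fooling the updated discriminator, and the two improve against each other.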
Speaker 1: This form of machine learning starts to feel kind of creepy to some of us. Like, the ability of a machine to learn to do something better seems to be a very human quality, something that makes us special. But if we can give machines that capability, well, then how are we special, or are we special at all? That's something I'm going to tackle as soon as we come back from this next break.

Speaker 1: Okay, we're back now. I would argue that we are special. Before the break, I was asking, can we be special if machines are capable of learning? I think we are, in that we're able to do stuff that machines as of right now either cannot do, or they can do but they don't do it very well, and they can only attempt it after a ludicrous amount of time. For example, let's talk about opening doors. Several years ago, twenty sixteen, I was at South by Southwest. I attended a panel about robotics and artificial intelligence and human-computer interactions. In that panel, Leila Takayama, a cognitive and social scientist, talked about working in the field of human-computer interaction, and she mentioned how she was once in an office where a robot was in the middle of a hallway, sitting motionless. It was just facing a door. What Takayama didn't know is that the robot was processing how to open that door, staring at the door and trying to figure out how to open it for days on end. This was taking a lot of time, obviously. Now, when you think about doors, you realize there can be quite a few options, right? Maybe you need to pull on a handle to open the door. Maybe you need to push on the door. Maybe there's a doorknob that first you have to turn before you pull or push. Maybe there's a crash bar, also known as a panic bar. Those are the horizontal bars on exit doors that you push on to open. Frequently, they're seen on doors that open to an exterior location, like inside schools and stuff. You push on them to get out. Maybe it's a revolving door, which adds in a whole new level of complexity. But you get my point. There are a lot of different kinds of doors. Now, we humans pick up on how doors work pretty darn quickly.
Speaker 1: I mean, sure, we might be like that one kid in the Far Side cartoon where the kid's going to the School for the Gifted and he's pushing as hard as he can on a door that is labeled pull. That could be us sometimes, but we figure it out, right? We do a quick push, we realize, oh, it's not opening, we pull. Robots? It's more challenging for them. They are not good at extrapolating from past experience, at least not in every field. We humans can apply our knowledge from earlier encounters, and even if the thing we're facing is mostly new to us, we might recognize elements that give us a hint on how to proceed. Robots and AI aren't really good at doing that. They're also not good at associative thinking, which is where we start to draw connections between different ideas to come up with something new. It's a really important step in the creative process. I find myself free-associating whenever I'm not actively thinking about something. So if I'm doing a mundane task, like if I'm washing dishes or I'm mowing the lawn, my brain is going nuts free-associating ideas and creating new ones. Machines are not very good at that, for now anyway. They are not bad at mimicking it, but they can't actually do it.

Speaker 1: So, getting back to Leila Takayama, one of the really fascinating bits about that panel I went to was a discussion on social cues that robots could have in order to alert us humans in that same space to what the robot was up to. This was not for the robot's benefit, but for our benefit. The whole point is that these cues would give us an idea of what was going on with the robot so that we don't accidentally, you know, interrupt the robot.
Speaker 1: So, you know, it might be like the robot in that hallway. It's looking at a door, and you're wondering, why is this robot shut down in the hallway? But then maybe the robot reaches up to apparently kind of scratch its head, in sort of a huh, what's going on kind of gesture, and that might tell you, oh, the robot is actively analyzing something. I don't know exactly what it is, but it's clearly working, so maybe I'll step around the robot, behind it, and not interrupt its vision of the door it's staring at. The whole point is that these social cues can help us interact more naturally with robots and coexist with them within human spaces, so that both the humans and the robots can operate well with one another. Also, it helps to explain what the robot is doing, because if you don't have that, the robots end up being mysterious, right? We can't see into them, we don't understand what they are currently trying to do, and mystery can breed distrust.

Speaker 1: That leads to yet another concept in AI that gets to this ghost in the machine concept, which is the black box. So, in this context, a black box refers to any system where it is difficult or impossible to see how the system works internally. Therefore, there's no way of knowing how the system is actually producing any given output, or even if the output is the best it could do. So with a black box system, you feed input into the system and you get output out of it, but you don't know what was happening in the middle. You don't know what the system did to turn the input into output. Maybe there's a sophisticated computer in the middle of that system that's doing all the processing. Maybe there's a person who doesn't understand Chinese stuck in there. Maybe there's a magical fairy that waves a wand and produces the result.
The problem is, we don't know, and by 567 00:36:24,000 --> 00:36:27,520 Speaker 1: not knowing, you cannot be certain that the output you're 568 00:36:27,560 --> 00:36:31,560 Speaker 1: getting is actually the best, or even relevant, or the 569 00:36:31,600 --> 00:36:34,239 Speaker 1: most likely to be correct based upon the input you 570 00:36:34,320 --> 00:36:38,160 Speaker 1: fed it. So you start making decisions based on this output. 571 00:36:38,320 --> 00:36:41,719 Speaker 1: But because you're not sure that the output is actually good, 572 00:36:42,400 --> 00:36:45,000 Speaker 1: you therefore can't be sure that the decisions you're making 573 00:36:45,520 --> 00:36:49,239 Speaker 1: are the best, and that leads to really difficult problems. 574 00:36:49,520 --> 00:36:53,640 Speaker 1: So let's take a theoretical example. Let's say we've built 575 00:36:53,680 --> 00:36:57,920 Speaker 1: a complex computer model that's designed to project the effects 576 00:36:57,960 --> 00:37:01,400 Speaker 1: of climate change. And let's say this model is so 577 00:37:01,840 --> 00:37:08,040 Speaker 1: complex and so recursive on itself that it effectively becomes 578 00:37:08,040 --> 00:37:11,000 Speaker 1: impossible for us to know whether or not the model 579 00:37:11,120 --> 00:37:14,480 Speaker 1: is actually working properly. Well, that would mean we wouldn't 580 00:37:14,520 --> 00:37:17,839 Speaker 1: really be able to rely on any predictions or projections 581 00:37:17,880 --> 00:37:20,800 Speaker 1: made by this model. I mean, maybe the projections are accurate, 582 00:37:21,400 --> 00:37:24,040 Speaker 1: but maybe they're not. The issue is there's no way 583 00:37:24,040 --> 00:37:26,279 Speaker 1: for us to be certain, and yet we have a 584 00:37:26,360 --> 00:37:28,680 Speaker 1: need to act. Climate change is a thing, and we 585 00:37:28,760 --> 00:37:33,759 Speaker 1: need to make changes to reduce its impact or to 586 00:37:33,920 --> 00:37:37,919 Speaker 1: mitigate it. It's possible that any decisions we make based 587 00:37:38,000 --> 00:37:41,600 Speaker 1: upon the output of the system will exacerbate the problem, 588 00:37:42,120 --> 00:37:46,800 Speaker 1: or maybe they'll just be less effective than alternative decisions 589 00:37:46,840 --> 00:37:50,720 Speaker 1: would be. Further, we're getting closer to that Arthur C. 590 00:37:50,719 --> 00:37:56,120 Speaker 1: Clarke statement about sufficiently advanced technologies being indistinguishable from magic. 591 00:37:56,560 --> 00:37:59,319 Speaker 1: If we produce systems that are so complicated that it's 592 00:37:59,360 --> 00:38:03,239 Speaker 1: impossible for us to understand them fully, we might begin 593 00:38:03,280 --> 00:38:06,399 Speaker 1: to view those technologies as being magical, or at the very 594 00:38:06,480 --> 00:38:09,200 Speaker 1: least greater than the sum of their parts, and this 595 00:38:09,280 --> 00:38:14,840 Speaker 1: can lead to some illogical decisions. This kind of brings 596 00:38:14,880 --> 00:38:18,080 Speaker 1: me to talk about the Church of AI called the 597 00:38:18,200 --> 00:38:21,760 Speaker 1: Way of the Future, which was founded and then later 598 00:38:22,080 --> 00:38:27,040 Speaker 1: dissolved by Anthony Levandowski. You may have heard Levandowski's name 599 00:38:27,360 --> 00:38:30,000 Speaker 1: if you followed the drama of his departure from Google 600 00:38:30,360 --> 00:38:35,160 Speaker 1: and his eventual employment at, and subsequent termination from, Uber.
And 601 00:38:35,200 --> 00:38:37,920 Speaker 1: then there was also the fact that he was sentenced 602 00:38:38,000 --> 00:38:42,480 Speaker 1: to go to prison for stealing company secrets and then 603 00:38:42,560 --> 00:38:46,960 Speaker 1: later received a presidential pardon from Donald Trump. So, quick 604 00:38:47,000 --> 00:38:53,200 Speaker 1: recap on Levandowski. Levandowski worked within Google's autonomous vehicle division, 605 00:38:53,440 --> 00:38:58,120 Speaker 1: which would eventually become a full subsidiary of Google's parent company, Alphabet, 606 00:38:58,480 --> 00:39:03,160 Speaker 1: and that subsidiary is called Waymo. So when Levandowski left Google, 607 00:39:03,400 --> 00:39:07,120 Speaker 1: he brought with him a whole lot of data, data 608 00:39:07,200 --> 00:39:11,359 Speaker 1: that Google claimed belonged to the company and was proprietary 609 00:39:11,360 --> 00:39:16,359 Speaker 1: in nature and thus constituted company secrets. Levandowski eventually began 610 00:39:16,440 --> 00:39:19,960 Speaker 1: working with Uber on that company's own driverless vehicle initiative, 611 00:39:20,360 --> 00:39:24,520 Speaker 1: but the Google slash Waymo investigation would lead to Uber 612 00:39:24,719 --> 00:39:29,239 Speaker 1: hastily firing Levandowski in sort of an attempt to kind 613 00:39:29,239 --> 00:39:33,080 Speaker 1: of disentangle Uber from this matter, which only worked a 614 00:39:33,080 --> 00:39:36,480 Speaker 1: little bit. Anyway, in the midst of all this Waymo 615 00:39:37,000 --> 00:39:42,200 Speaker 1: slash Uber drama, in twenty seventeen, Wired ran an article 616 00:39:42,400 --> 00:39:46,160 Speaker 1: that explained that this same Anthony Levandowski had formed a 617 00:39:46,239 --> 00:39:49,480 Speaker 1: church called Way of the Future a couple of years earlier, 618 00:39:49,680 --> 00:39:52,799 Speaker 1: in twenty fifteen. He placed himself at the head of 619 00:39:52,840 --> 00:39:55,720 Speaker 1: this church with the title of Dean, and he also 620 00:39:56,200 --> 00:40:01,120 Speaker 1: became the CEO of the nonprofit organization designed to run 621 00:40:01,160 --> 00:40:04,479 Speaker 1: the church. The aim of the church was to see, 622 00:40:04,640 --> 00:40:09,520 Speaker 1: quote, the realization, acceptance, and worship of a Godhead based 623 00:40:09,560 --> 00:40:15,160 Speaker 1: on artificial intelligence (AI) developed through computer hardware and software, 624 00:40:15,360 --> 00:40:18,640 Speaker 1: end quote. This is according to the founding documents that 625 00:40:18,680 --> 00:40:23,160 Speaker 1: were filed with the US Internal Revenue Service, or IRS. Further, 626 00:40:23,520 --> 00:40:26,760 Speaker 1: Levandowski planned to start seminars based on this very idea 627 00:40:26,840 --> 00:40:31,480 Speaker 1: later in twenty seventeen. By twenty twenty, Levandowski's jump from 628 00:40:31,520 --> 00:40:37,359 Speaker 1: Google to Uber had escalated into a prison sentence of eighteen months, 629 00:40:37,440 --> 00:40:40,719 Speaker 1: because he had been found guilty of 630 00:40:40,760 --> 00:40:45,040 Speaker 1: stealing trade secrets. Trump would pardon Levandowski in January twenty 631 00:40:45,080 --> 00:40:48,160 Speaker 1: twenty one, kind of, you know, after the insurrection on 632 00:40:48,239 --> 00:40:52,200 Speaker 1: January sixth, but before Trump would leave office in late January.
633 00:40:53,440 --> 00:40:56,359 Speaker 1: As for the Way of the Future, Levandowski actually began 634 00:40:56,400 --> 00:40:59,759 Speaker 1: to shut that down in June of twenty twenty and 635 00:40:59,800 --> 00:41:02,480 Speaker 1: it was dissolved by the end of twenty twenty, but 636 00:41:02,560 --> 00:41:06,239 Speaker 1: not reported on until like February twenty twenty one. He 637 00:41:06,320 --> 00:41:09,080 Speaker 1: directed the assets of the church, some one hundred and 638 00:41:09,080 --> 00:41:12,360 Speaker 1: seventy five thousand dollars, to be donated to the NAACP. 639 00:41:13,200 --> 00:41:19,040 Speaker 1: Levandowski has said that the beliefs behind the church are 640 00:41:19,120 --> 00:41:23,480 Speaker 1: ones that he still adheres to, that AI has the 641 00:41:23,480 --> 00:41:29,080 Speaker 1: potential to tackle very challenging problems, like taking care of 642 00:41:29,120 --> 00:41:33,240 Speaker 1: the planet, which Levandowski says we humans are obviously incapable 643 00:41:33,480 --> 00:41:36,759 Speaker 1: of doing. You know, we would put this 644 00:41:36,920 --> 00:41:41,480 Speaker 1: system in charge of taking care of things that we understand to be important 645 00:41:41,480 --> 00:41:46,720 Speaker 1: but seem to be incapable of handling ourselves, almost 646 00:41:46,719 --> 00:41:51,000 Speaker 1: like we're children. Thus the idea of looking at AI like a godhead: 647 00:41:51,760 --> 00:41:56,319 Speaker 1: we should seek out solutions with AI rather than 648 00:41:56,360 --> 00:42:00,920 Speaker 1: locking AI away and saying, oh, we can't push AI's 649 00:42:00,960 --> 00:42:05,239 Speaker 1: development further in these directions because of the potential existential 650 00:42:05,320 --> 00:42:11,120 Speaker 1: dangers that could emerge from AI becoming super intelligent. I 651 00:42:11,160 --> 00:42:13,440 Speaker 1: don't think there are actually that many folks who are 652 00:42:13,480 --> 00:42:16,439 Speaker 1: trying to lock AI away at all. Mostly I see 653 00:42:16,480 --> 00:42:20,080 Speaker 1: tons of efforts to improve aspects of AI from a 654 00:42:20,120 --> 00:42:25,160 Speaker 1: million different angles. I think most serious AI researchers and 655 00:42:25,239 --> 00:42:29,480 Speaker 1: scientists aren't really focused on strong AI at all. They're 656 00:42:29,520 --> 00:42:34,920 Speaker 1: looking at very particular applications of artificial intelligence, very particular 657 00:42:35,000 --> 00:42:39,520 Speaker 1: implementations of it, but not like a strong AI that 658 00:42:39,600 --> 00:42:44,520 Speaker 1: acts like Deep Thought from the Hitchhiker's Guide to the Galaxy. Anyway, 659 00:42:45,840 --> 00:42:49,880 Speaker 1: maybe Levandowski's vision will eventually lead us not to a 660 00:42:50,120 --> 00:42:54,280 Speaker 1: ghost in the machine, but a literal deus ex machina, 661 00:42:54,960 --> 00:42:59,200 Speaker 1: which means god out of the machine. That seems to 662 00:42:59,239 --> 00:43:03,560 Speaker 1: be how Levandowski views the potential of AI: that 663 00:43:03,680 --> 00:43:08,959 Speaker 1: our unsolvable problems are almost magically fixed thanks to this 664 00:43:09,440 --> 00:43:15,560 Speaker 1: robotic or computational savior. Now, in fiction, deus ex machina 665 00:43:15,920 --> 00:43:18,680 Speaker 1: is often seen as a cop out.
Right, you've got 666 00:43:18,719 --> 00:43:23,319 Speaker 1: your characters in some sort of ironclad disastrous situation, there's 667 00:43:23,440 --> 00:43:27,640 Speaker 1: no escape for them, and then in order to get 668 00:43:27,719 --> 00:43:32,120 Speaker 1: that happy ending, you have some unlikely savior or unlikely 669 00:43:32,200 --> 00:43:36,719 Speaker 1: event happen and everyone gets saved, and it might be 670 00:43:36,800 --> 00:43:40,040 Speaker 1: satisfying because you've got the happy ending, but upon critical 671 00:43:40,080 --> 00:43:42,839 Speaker 1: reflection you think, well, that doesn't really make sense. There 672 00:43:42,840 --> 00:43:44,360 Speaker 1: are a lot of stories that get a lot of 673 00:43:44,400 --> 00:43:48,920 Speaker 1: flak for using deus ex machina. The image I always have 674 00:43:49,080 --> 00:43:52,320 Speaker 1: is from classical theater, where you've got all the mortal 675 00:43:52,440 --> 00:43:58,720 Speaker 1: characters in a terrible situation and then an actor standing 676 00:43:58,760 --> 00:44:01,960 Speaker 1: in as a god is literally lowered from the top 677 00:44:02,000 --> 00:44:05,280 Speaker 1: of the stage on pulleys to descend to the mortal 678 00:44:05,320 --> 00:44:08,280 Speaker 1: realm and fix everything so that you can have a comedy, 679 00:44:08,880 --> 00:44:12,560 Speaker 1: a play with a happy ending. For Levandowski, it's really 680 00:44:12,600 --> 00:44:14,720 Speaker 1: about turning the ghost in the machine into a god. 681 00:44:15,520 --> 00:44:18,520 Speaker 1: I'm not so sure about that myself. I don't know 682 00:44:18,920 --> 00:44:22,880 Speaker 1: if that's a realistic vision. I can see the appeal 683 00:44:22,880 --> 00:44:26,160 Speaker 1: of it, because we do have these very difficult problems 684 00:44:26,200 --> 00:44:29,399 Speaker 1: that we need to solve, and we have had very 685 00:44:29,440 --> 00:44:32,400 Speaker 1: little progress on many of those problems for multiple reasons, 686 00:44:32,400 --> 00:44:35,080 Speaker 1: not just a lack of information, but a lack of 687 00:44:35,239 --> 00:44:40,239 Speaker 1: motivation, or conflicting motivations, where we have other needs that 688 00:44:40,480 --> 00:44:45,359 Speaker 1: have to be met that conflict with the solving of 689 00:44:45,520 --> 00:44:48,080 Speaker 1: a tough problem like climate change. Right, we have energy 690 00:44:48,160 --> 00:44:50,040 Speaker 1: needs that need to be met. There are places in 691 00:44:50,080 --> 00:44:55,920 Speaker 1: the developing world that would be disproportionately affected by massive 692 00:44:56,200 --> 00:44:59,319 Speaker 1: policies that were meant to mitigate climate change, and it's 693 00:44:59,440 --> 00:45:02,680 Speaker 1: tough to address that. Right. There are these real reasons 694 00:45:02,680 --> 00:45:07,880 Speaker 1: why it's a complicated issue, beyond just being hard to understand. 695 00:45:08,280 --> 00:45:12,880 Speaker 1: So I see the appeal of it, but it also 696 00:45:12,960 --> 00:45:14,719 Speaker 1: kind of feels like a cop out to me, like 697 00:45:14,760 --> 00:45:17,400 Speaker 1: this idea of we'll engineer our way out of this problem, 698 00:45:17,719 --> 00:45:21,160 Speaker 1: because that just puts off doing anything about the problem 699 00:45:21,280 --> 00:45:24,560 Speaker 1: until future you can get around to it.
I don't 700 00:45:24,560 --> 00:45:28,200 Speaker 1: know about any of you, but I am very much 701 00:45:28,239 --> 00:45:30,920 Speaker 1: guilty of the idea of, you know what, 702 00:45:31,080 --> 00:45:33,839 Speaker 1: future Jonathan will take care of this. Jonathan right now 703 00:45:33,840 --> 00:45:35,879 Speaker 1: has to focus on these other things. Future Jonathan will 704 00:45:35,880 --> 00:45:38,840 Speaker 1: take care of it. Future Jonathan, by the way, hates Jonathan of 705 00:45:38,960 --> 00:45:42,640 Speaker 1: right now and really hates Jonathan in the past, because 706 00:45:43,600 --> 00:45:46,160 Speaker 1: it's just putting things off until it gets to a 707 00:45:46,200 --> 00:45:48,200 Speaker 1: point where you can't do it anymore, and by then 708 00:45:48,239 --> 00:45:49,919 Speaker 1: it might be too late. So that's what I worry 709 00:45:49,960 --> 00:45:53,320 Speaker 1: about with this particular approach, this idea of we'll figure 710 00:45:53,360 --> 00:45:55,440 Speaker 1: it out, we'll science our way out, we'll engineer our 711 00:45:55,480 --> 00:45:59,720 Speaker 1: way out, because it's projecting all that into the future 712 00:45:59,760 --> 00:46:03,719 Speaker 1: and not doing anything in the present. Anyway, that's the 713 00:46:03,760 --> 00:46:08,279 Speaker 1: episode on the ghost in the machine. There are other interpretations 714 00:46:08,320 --> 00:46:11,760 Speaker 1: as well. There are some great ones in fiction where sometimes 715 00:46:11,760 --> 00:46:14,200 Speaker 1: you actually have a literal ghost in the machine, like 716 00:46:14,239 --> 00:46:19,200 Speaker 1: there's a haunted machine. But maybe I'll wait and tackle 717 00:46:19,280 --> 00:46:24,279 Speaker 1: that for a future, more entertainment-focused episode where 718 00:46:24,280 --> 00:46:27,160 Speaker 1: it's not so much about the technology but kind of 719 00:46:27,200 --> 00:46:32,160 Speaker 1: a critique of the entertainment itself, because there's only so 720 00:46:32,239 --> 00:46:34,759 Speaker 1: much you can say about, you know, I don't know, 721 00:46:35,680 --> 00:46:39,640 Speaker 1: a ghost calculator. That's it for this episode. If you 722 00:46:39,760 --> 00:46:43,520 Speaker 1: have suggestions for future episode topics or anything else that 723 00:46:43,560 --> 00:46:45,000 Speaker 1: you would like to communicate to me, there are a 724 00:46:45,000 --> 00:46:46,799 Speaker 1: couple of ways you can do that. One is you 725 00:46:46,840 --> 00:46:51,440 Speaker 1: can download the iHeartRadio app and navigate over to tech Stuff. 726 00:46:51,480 --> 00:46:53,719 Speaker 1: Just put tech Stuff in the little search field and 727 00:46:53,760 --> 00:46:56,439 Speaker 1: you'll see that it'll pop up. You go to the 728 00:46:56,480 --> 00:47:00,279 Speaker 1: tech Stuff page and there's a little microphone icon. Click 729 00:47:00,320 --> 00:47:03,279 Speaker 1: on that. You can leave a voice message up to 730 00:47:03,480 --> 00:47:05,279 Speaker 1: thirty seconds in length and let me know what you 731 00:47:05,280 --> 00:47:07,440 Speaker 1: would like to hear in future episodes. And if you like, 732 00:47:07,520 --> 00:47:10,080 Speaker 1: you can even tell me if I can use your 733 00:47:10,280 --> 00:47:13,360 Speaker 1: voice message in a future episode. Just let me know. 734 00:47:13,640 --> 00:47:16,160 Speaker 1: I'm all about opt in.
I'm not gonna do it automatically, 735 00:47:16,880 --> 00:47:19,240 Speaker 1: or if you prefer, you can reach out on Twitter. 736 00:47:19,480 --> 00:47:23,560 Speaker 1: The handle for the show is tech Stuff HSW and 737 00:47:23,600 --> 00:47:33,239 Speaker 1: I'll talk to you again really soon. Tech Stuff is 738 00:47:33,280 --> 00:47:37,840 Speaker 1: an iHeartRadio production. For more podcasts from iHeartRadio, visit the 739 00:47:37,880 --> 00:47:41,480 Speaker 1: iHeartRadio app, Apple Podcasts, or wherever you listen to your 740 00:47:41,560 --> 00:47:46,160 Speaker 1: favorite shows.