Hey, welcome to Stuff to Blow Your Mind. My name is Robert Lamb and I'm Joe McCormick, and it's Saturday, time for an episode from the Vault. This is the beginning of a series that we did about robot culpability and machine punishment. I think we're just calling this Punish the Machine, Part One, right? So this originally aired in April. This one raised a lot of interesting questions, and so we hope you enjoy it.

Welcome to Stuff to Blow Your Mind, a production of iHeartRadio.

Hey, welcome to Stuff to Blow Your Mind. My name is Robert Lamb and I'm Joe McCormick. And right before we started recording today, we were just talking about that iconic scene in Return of the Jedi where the droids are sent to the droid torture chamber. Do you remember? I guess it's not just a droid torture chamber; it's sort of like the droid onboarding center, right, where R2-D2 and C-3PO have been given as gifts to Jabba the Hutt and they go meet their new droid boss, and he's like, "You're a feisty little one," and he's signing them in. But he sees that R2-D2 is a bad robot who needs discipline, and R2-D2 is confronted with these images of robots being punished with various corporal punishments: one is getting stretched on a robot rack, and another one is getting its feet burned.

Yes, this is a great scene, one that definitely burns its way into your brain as a young viewer, and maybe you don't think about it that much for a long time, but it's still in there. It takes place in the bowels of Jabba's palace on Tatooine, and it's like droid intake but also droid corrections.
There are a number of different departments that I think are converging here, and it ultimately raises some interesting questions about ethics and punishment and crime, certainly as it relates to robots. Of course, one important thing to stress here is that none of this was really intended in these scenes. This was about having droids do the things that humans would be doing to each other in other pieces of cinema, certainly things like old pirate movies or old Sinbad movies or what have you. I mean, that's kind of Star Wars in a nutshell, right? This whole portion of Return of the Jedi is essentially a big pirate movie, a big swashbuckler set in an alien location.

Oh yeah, Jabba the Hutt as a pirate captain.

Yeah. But something interesting occurs when you replace the humans in these trope-filled scenes with machines, and then you think about it. You think about why that robot is torturing the other. It makes perfect sense if it's humans doing it, but when the things that we create in our image are doing it, suddenly we start seeing the flaws in our reasoning. Suddenly we start questioning: how is this whole system supposed to work? And maybe this whole system doesn't work.

Well, yeah, there are multiple levels of absurdity in the scene. One is the idea that this robot is just sort of coolly telling R2 that he is going to learn some discipline, but the image that accompanies that is clearly just extreme robot torture, something way beyond anything that would have to do with discipline in the real world. But then the other level of absurdity is that it's robots in the scene. And coming off of the issue of barbaric pirate torture, and more to the broader question of robots and discipline and punishment,
this is something that we actually wanted to talk about today, because the issue of robot moral and legal agency is something I've been interested in for a long time. I've talked about it, and it's come up on the show in the past in briefer ways, and today I wanted to come back and devote a full episode to the subject. Actually, I guess we're going to be talking about this for a couple of episodes now. The question is: as machines, AI, and robots become more independent and act more like agents, more like humans do, how are we to understand their moral and legal culpability when they do something that harms people? And is there such a thing as robot punishment, robot discipline? Do these concepts reflect anything that's achievable in the real world and practical? And if so, how would any of this work?

Yeah, I think one of the most interesting things about this topic is that it forces a face-off between what robots and AI actually are or will be and how we think about them, indeed how we anthropomorphize them. And perhaps it might be helpful to take a step back and think about something far less advanced than a robot, something more like a hammer.

Okay.

So everyone's heard the old adage that it's a poor carpenter who blames their tools, right? But of course we do this all the time. The hammer slips, it hits our fingers, and we may, at least in the heat of the moment, blame the hammer for the failure. Now, we may get over this quickly, but then again, we may decide that the hammer truly is at fault and it should be used less. We might also take this idea to a number of different extremes. We might decide that the hammer is not merely at fault but faulty, and then we're entitled to at least a refund for its purchase. Or we might decide that the hammer needs to actually be punished. And this, of course, is ridiculous.
And yet there's the idea of punishing the hammer by, say, putting it in the corner; or perhaps you have an old toolbox of shame that's just for the misbehaving tools; or maybe it's less thought out and you just throw the hammer across the yard as punishment for what it has done to you. Again, these are ridiculous things to do, but the idea of doing them is not that far from us. Those of you listening may have engaged in this sort of thing as well. You might also simply throw the tool away, an otherwise perfectly good tool. I know that I did this once with a knife-sharpening gadget that caused me to cut my finger, and my reaction was: this thing has now injured me, it has drawn my blood, I am getting rid of it, it goes in the trash. It bore malice against me.

Yeah. You know, ultimately you get into arguments about different tools, like: is this a dangerous tool? And in that case, that was my reasoning. This tool is dangerous, it's not enabling me to do what I want to do without drawing blood, so it goes in the trash. But then there have been other cases, like when I had a mandoline for slicing up carrots and I nicked my finger on it.

Oh, I've nicked my fingers on those; as a rule they can be...

But I nicked my finger not while using it, but while going into the drawer for something else. So I punished it by putting it at the very bottom of the drawer. But I didn't throw it away.

Uh huh. So I think if we all think back, we have examples of this sort of thing from our lives.

Well, sure. I mean, I'm going to talk in this episode about some of the ways that we mindlessly apply social rules to robots. But yeah, I think what you're illustrating here is that you don't even have to get to the robot agency stage before people start doing that.
I mean, people mindlessly, if to a lesser extent, apply social rules, rules derived for managing human relationships, to inanimate objects with no moving parts.

Yeah, you don't even have to get into a Roomba or anything; you can deal with the hammer, the can opener. But it's also a sad fact that many pet owners will punish an animal for a transgression, even though scientific evidence shows that this tends to not actually work, at least in most of the circumstances in which it's used. So it's not merely with tools and inanimate objects; even with non-human living entities, we're liable to engage in this kind of discipline-based thinking. Now, most of the studies, I think, relate to dogs, if I remember correctly, and there's a lot going on here that doesn't relate directly to inanimate objects and robots, but it illustrates how we tend to approach the punishment of other agents and perceived agents.

Well, yeah, there's a disconnect, and this will be highlighted in one of the papers we're going to talk about in this pair of episodes. There's a disconnect in that punishment is often logically characterized as serving one type of purpose, but then is applied more as if it serves another type of purpose. So it is logically explained as, say, a deterrent, right? I mean, if you're talking about legal theories of punishment, one of the main justifications people come up with is: well, the remedy provided by the law punishes the person who did the bad thing in order to send a message that people should not do this bad thing, and thus maybe discourage other people from doing something similar in the future, or discourage the same person from doing it again. Now, does it actually serve that purpose? It's debatable in what cases it does.
Maybe sometimes it does, and then you could argue it's a rational, logical thing that prevents harm. But the way punishment is actually often inflicted in the real world seems to be more consistent with judgments based on the emotional satisfaction of the idea of having been wronged.

Yeah. And then we also get into this area where we have a couple of different factors encouraging traditions of discipline, particularly if we look at parenthood. There's some crossover between discipline in parenting and discipline in the criminal justice system, though not everything is going to line up one to one here. But on the childhood example, it's been argued that parents use punishment, first of all, because it's an emotional response out of anger, anger that may be mismanaged. But then, on top of this, it's something that's culturally passed down, and punishment may seem to work. I was reading about this in a Psychology Today article by Michael Karson, PhD, JD, and this is what he said, quote: "Because the child is inhibited in your presence, it's easy to think they would be inhibited in your absence. Punishment produces politeness, not morality. Thus, the inhibited, obedient child inadvertently reinforces the parent's punitive behavior by acting obedient, for the sorts of parents who find obedient children reinforcing."

Yeah, that raises an interesting question. I mean, I've been mainly thinking for this episode about legal punishments, but when it comes down to parenting, that's a very different kind of thing, because both parenting and the legal system involve punishment, but parenting is not subject to a legal system, right? There is no systematized way by which justice is administered by a parent. I mean, I think a lot of the time it's just sort of whatever the parent can manage to do in the moment, because, like, the kid's driving them crazy or something.
Yeah, usually the child can't take it to a higher court, right?

But I mean, I think you're absolutely right that whether you're talking about discipline administered by a parent or by the justice system as a whole, I'd say that both are probably based more on tradition and philosophy and less on a scientifically rigorous study of the most efficient ways to reduce harm. And one of the interesting things about thinking about how law could potentially be applied to harm caused by autonomous machines is that it may give us some insight into the ways that the justice system, as it exists and is applied to humans today, already tends to behave irrationally, even with respect to humans.

Yeah, and again, this is what's so interesting about this paper, or really the papers we're going to discuss and this topic in general: you start comparing machine possibilities to human possibilities, and on one level it's a thought experiment in how you would hold machines responsible, but then it makes you rethink the way humans are held responsible. You know, you might think you have it pretty squared away, like if an adult sells a pack of cigarettes to someone who's underage, right? But then when a machine does the same thing, how do you treat the machine? Do you treat the machine like an adult? And then, in trying to figure out how to treat this machine, does it make you rethink how you should be treating the adult who engaged in this behavior? I don't know.

Yeah. And I think a lot of that will come down to our understanding of what the machine is capable of: what kind of constraints it has, what level of autonomy it seems to be operating at. I mean, again, weirdly, even when people set out to define clear rules for what makes a machine culpable, there's still going to be a lot of subjectivity in it.
I'm looking at legal definitions of what constitutes a robot versus just a machine, and some of these definitions involve things like: well, a robot feels like a social agent. So there's still an element of subjectivity there. But I think that's correct in how we actually apply the term most of the time, right? It's something like a gut feeling about how this machine is behaving in your world. Is it acting more like a fixed, brainless machine, or is it acting a little bit more like a person? So it would be one thing if it were basically a cigarette vending machine that was selling to children, but if it were a machine that went door to door and rang the doorbell and then asked for the children so it could sell them cigarettes, that would be a different matter.

I mean, yeah, I think that would require different types of remedies, probably.

Yeah, I mean, I think a lot of people would probably look at the cigarette vending machine and say: where was the vending machine placed? Why was it in a place where children could have access to it? Rather than attacking the fundamentals of the machine itself. If it's going door to door and giving cigarettes to kids, yeah, then people are probably going to attack the fundamentals and the moral character of the robot, or maybe attack the robot itself, and it would just be mob justice in somebody's front yard.

Yeah, alright. So I guess I want to introduce one of the papers we're going to be looking at in this pair of episodes, and it is by Mark A. Lemley and Bryan Casey, called "Remedies for Robots," published in the University of Chicago Law Review in 2019. And this is a big paper. It's like eighty-something pages long, with a lot of different interesting thoughts in it. We're not going to be able to cover the entire thing in depth, but it's worth looking up.
You can easily find a full PDF of it if you want to read it in depth. We're going to look at some of the larger framework it lays out and then some interesting thoughts raised by it. But to kick it off here, the authors write, quote: "What happens when artificially intelligent robots misbehave? The question is not just hypothetical. As robotics and artificial intelligence systems increasingly integrate into our society, they will do bad things. We seek to explore what remedies the law can and should provide once a robot has caused harm."

Now, obviously we're going to be focused less on the minute particulars of US legal precedent here and more on the broader issues they raise about robot agency, robot moral decision-making, and how that interacts with harm and morality and justice. And the authors start out in their introduction by giving what I think is a really fantastic example of how an autonomous robot whose behavior is guided by machine learning, which is how, increasingly, most robots are going to be controlled, can end up doing things that are the exact opposite of what was intended. So this case that they cite is based on a true story from a presentation at the eleventh annual Stanford E-Commerce Best Practices Conference in June, and it goes like this, quote:

"Engineers training an artificially intelligent self-flying drone were perplexed. They were trying to get the drone to stay within a predefined circle and head toward its center. Things were going well for a while. The drone received positive reinforcement for its successful flights, and it was improving its ability to navigate toward the middle quickly and accurately. Then suddenly things changed. When the drone neared the edge of the circle, it would inexplicably turn away from the center, leaving the circle. What went wrong? After a long time spent puzzling over the problem, the designers realized that whenever the drone left the circle during tests, they had turned it off."
"Someone would then pick it up and carry it back into the circle to start again. From this pattern, the drone's algorithm had learned, correctly, that when it was sufficiently far from the center, the optimal way to get back to the middle was to simply leave it altogether. As far as the drone was concerned, it had discovered a wormhole. Somehow, flying outside of the circle could be relied upon to magically teleport it closer to the center. And far from violating the rules instilled in it by its engineers, the drone had actually followed them to a T. In doing so, however, it had discovered an unforeseen shortcut, one that subverted its designers' true intent."

That's really good. Yes, I love it. This is such a great example of how robots can fail in ways that are perfectly logical for the machines themselves but hard for humans to predict in advance, because we don't understand how our programming, or the data sets we're training it on, is biasing its behavior in ways that are strange to us. And in this case, of course, such a malfunction is harmless. But as autonomous machines become more and more integrated into the broader culture, not just in controlled, contained locations like factory floors and laboratories, but in the wild, on the streets and in our homes, there will inevitably be cases where robots fail like this, and fail in ways that cause catastrophic harm to people.

Yeah. And plus, as an aside, we have to realize that even in cases where the machines have not failed, there will be gray areas in which it's not completely clear what happened, and an argument could be made in these cases for machine culpability, with a variety of intents and possible biases in place.
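For anyone who wants to see the drone's logic made concrete, here is a minimal sketch of the same trap in a toy one-dimensional world, with tabular Q-learning standing in for the drone's training. Every name, size, and reward value below is an illustrative assumption, not a detail of the actual Stanford experiment:

```python
# A toy sketch of the reward hack in the drone anecdote: a 1-D "circle"
# where crossing the boundary triggers a human-style reset back toward
# the middle. All numbers here are illustrative assumptions.
import random

SIZE, CENTER = 21, 10          # positions 0..20, goal at position 10
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2

def step(pos, action):
    """Return (new_pos, reward, done) for moving -1 or +1."""
    new_pos = pos + action
    if not 0 <= new_pos < SIZE:
        new_pos = CENTER       # the engineers' mistake: carried back to the middle
    done = new_pos == CENTER   # a "successful flight": it reached the center
    return new_pos, (1.0 if done else 0.0), done

Q = {(p, a): 0.0 for p in range(SIZE) for a in (-1, 1)}

for _ in range(20_000):
    pos = random.choice([p for p in range(SIZE) if p != CENTER])
    done = False
    while not done:
        if random.random() < EPS:
            a = random.choice((-1, 1))
        else:
            a = max((-1, 1), key=lambda x: Q[(pos, x)])
        nxt, r, done = step(pos, a)
        target = r if done else r + GAMMA * max(Q[(nxt, -1)], Q[(nxt, 1)])
        Q[(pos, a)] += ALPHA * (target - Q[(pos, a)])
        pos = nxt

# Print the greedy policy: -1 means "fly left", +1 means "fly right".
print({p: max((-1, 1), key=lambda x: Q[(p, x)]) for p in range(SIZE) if p != CENTER})
```

Near either edge, the policy this learns points out of bounds: crossing the boundary "teleports" the agent back to the middle faster than flying there step by step, which is exactly the wormhole the engineers stumbled into.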
Oh yeah, that's another thing these authors talk about: there can be all kinds of ways that robotics and AI could end up causing extreme harm to people without ever doing anything that would be illegal if a human did it. One example they give: what if Google were to suddenly change its Google Maps algorithm so that it routed all of the city's traffic through your neighborhood? Nothing illegal about that. It doesn't commit a crime against you, but it's going to drastically and negatively impact your quality of life, and it's a decision that could just be a quirk of an algorithm in a machine.

Now, this paper in particular concerns the legal concept of remedies. So I was reading about remedies, and a common legal definition that I found is, quote, "the means to achieve justice in any matter in which legal rights are involved." Or, in the words of Lemley and Casey, "What do I get when I win?" Right, so if you take somebody to court because you say they have harmed you, whatever outcome you're seeking from that court case is the remedy. Usually, when a court case finds that somebody has done something wrong to harm somebody else, the court responds to the finding of guilt or blame by enforcing this remedy. Common remedies would include a payment of money (a guilty defendant has to pay money to the plaintiff), a punishment of the offender (maybe they go to jail), or a court order to do something or not to do something. For example, somebody is ordered not to drive a vehicle, or they are ordered not to go within a hundred feet of somebody else, something like that.

Yeah, or their eyeball is removed, or they have to spend a night in a haunted house, something like that.

Hopefully not in modern law.
But wait a minute, sometimes you do read about some really strange remedies ordered by judges, like, "I order you to, I don't know, wear a chicken suit" or something.

Right. Some judges like to get creative, it seems. Weird. Yeah, I wonder whether there have been cases where someone has had to spend a night in a haunted house due to a court order. I think that would be a good setup for a film.

But anyway, when you start looking at the idea of remedies, remedies are complicated because they involve different types of implied satisfaction on behalf of the victim or plaintiff, and some are very clear and material while others are much more abstract. The ones that are very clear and material are like: if I hit your car with my car and I'm clearly at fault, I need to give you a payment of cash to offset the material losses to the value of your car. But then other times it's more abstract. It's, you know, punishment of an offender to give the victim a sense of justice, or to allegedly discourage someone from committing this type of harm or offense in the future.

And then the authors write that things get way more complicated when you bring robots and AI into the picture. For example, if you're trying to give a court order to a person, saying, you shall not drive a car, you shall not come within a hundred feet of this person, you can do so in natural language. You can speak a sentence to them and expect them to understand. But how do you get a court to give an order to a robot not to do something? Most robots don't have natural language processing, and even when they do, a lot of the time it's not that good. So you might think, okay, well, you just give the court order to the robot's programmer, and then they'll have to program the robot to obey.
But this is also really complicated. Whose responsibility is it: the robot's current owner, or the original contractor or creator who made the robot? And what if this is an end-user consumer device that the owner doesn't have any ability to reprogram? Or what if, in the case of robots whose behavior is driven by machine learning or some other kind of system that is, for practical purposes, a black box, it's not even clear how you could reprogram it to reliably obey the rule?

Yeah, because there's a chance you got to this position because the robot misinterpreted what was asked of it. So if you then make additional requirements, ones that maybe haven't actually been tested before but are simply brought on by the court, that could conceivably create new problems, right?

Yeah, totally. And it keeps getting even more complicated from there. Lemley and Casey write, quote: "To complicate matters further, some systems, including many self-driving cars, distribute responsibility for their robots between both designers and downstream operators. For systems of this kind, it has already proven extremely difficult to allocate responsibility when accidents inevitably occur."

It just seems like a real fast way to get into Skynet territory, where the robot decides that the only way to ensure it never sells cigarettes to children again is to destroy all humans.

That sounds like finding a wormhole to me. We will be getting into some more wormhole territory as we go on.

So, more complications. The authors bring up the question of how courts compel a person or a company to obey a court order. Right? Like, if a company is dumping poison that's harming somebody, and the person sues that company, what does the court do to get them to stop? Well, there is the threat of contempt of court if they don't stop doing it.
Right. Courts usually just assume that people are motivated by a desire not to pay huge monetary damages, or a desire not to go to jail. Would that have any motivating power on a robot? It would only have that power to the extent that the robot had been programmed to take it into account. If it hadn't, it wouldn't matter at all. Most robots probably do not have any opinion one way or another about going to jail or having to pay damages. So you'd have to explicitly program the robot to be disincentivized by potential punishments, something like the sketch after this exchange.

Yeah, because take the cigarette robot, for example. Its prime directive is just to sell delicious cigarettes to human beings. What else, what kind of leverage do you have?

Right, exactly. So in that case, you'd be trying to find some kind of human who's responsible for its behavior, but you could very well run into the problem that you can't really identify any one person who seems to be at fault for what it did, and it's doing this bad thing, so what are you going to do about it?

Yeah. And then of course things get even weirder when you start getting into that other side. You know, there are the more direct and material remedies that can be provided by courts, like a monetary award to the victim or an order to stop doing something that causes harm. But on the other hand, you've got this thing that courts often end up engaging in, and that people are largely driven and motivated by, however irrational it might be in some cases: the perceived abstract value of punishment. Not just material damages to a victim or an order not to do something, but the inflicting of punishments specifically to demonstrate the court's displeasure with the original behavior of the defendant.
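As a side note on what "programming a robot to be disincentivized by punishment" could even mean in practice: a sanction only moves a machine if it shows up as a term in the machine's objective. Here is a minimal, hypothetical sketch of that idea; the fine amounts, names, and the notion of a court-supplied penalty table are illustrative assumptions, not anything from Lemley and Casey's paper:

```python
# A hedged sketch: punishment only deters a machine if it is wired into
# the quantity the machine optimizes. Everything here is hypothetical.
from dataclasses import dataclass, field

@dataclass
class Objective:
    # Court- or regulator-supplied penalties, keyed by a named violation.
    sanctions: dict[str, float] = field(default_factory=dict)

    def reward(self, revenue: float, violations: list[str]) -> float:
        """Task reward minus any penalties the system has been told to 'feel'."""
        return revenue - sum(self.sanctions.get(v, 0.0) for v in violations)

obj = Objective()
# Before any "court order": selling to a minor costs the robot nothing.
print(obj.reward(revenue=10.0, violations=["sale_to_minor"]))   # 10.0

# The only way the sanction motivates the machine is to write it in:
obj.sanctions["sale_to_minor"] = 1_000.0
print(obj.reward(revenue=10.0, violations=["sale_to_minor"]))   # -990.0
```

The design point is that the deterrent lives entirely in that sanctions table: delete the entry and the machine goes back to cheerfully maximizing cigarette revenue, which is exactly why a court order aimed at a robot is a harder problem than one aimed at a human.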
So, they raise a question that's brought up in a paper by a professor named Christina Mulligan, who explores the subject of whether you should have the right to punch a robot that hurts you. Lemley and Casey call this the expressive component of remedies. And though a desire to see offenders punished may be an extremely natural and nearly universal human drive, it's debatable whether it actually serves a purpose in reducing harm, and if it does, in what cases.

I love this idea, because on a very literal level it makes me think: well, why would you punch a robot? They're made out of metal. You're going to hurt your hand. All you're going to do is hurt your hand, and you're not going to hurt the robot. Unless, first of all, you design the robot so that it has at least one punchable portion of its anatomy. And then, for it to be more than just a cathartic thing for you, you have to also make sure there's some sort of feedback, right? Where, yeah, you punch cigarette-bot in its punchable area, and then it will say "ow," and maybe it will, I don't know, auto-incinerate one packet of cigarettes so that it can never sell them, that sort of thing. But then, yeah, you're having to design your robots to suffer, to a certain extent. Which, I guess, goes back to what C-3PO said, right? He said, you know, about being made to suffer: it seems to be our lot in life.

Oh, that's interesting. I hadn't thought about that. Yeah, clearly R2-D2 and C-3PO have inherent desires to avoid pain. They have been programmed with that.

Yeah, but as we've said, that's not standard issue for robots. Most robots don't care about whether or not they get injured; that's not a motivating factor for them. And again, it raises this bizarre question of, like, what are you doing when you punch the robot?
What is it accomplishing? I guess it's making you feel better. But does it make you feel better if you know that the robot doesn't actually care?

Yeah, and then what needs to be done to convince you that it does care? It just gets very sticky very quickly, and then of course it turns the mirror back on the way we handle human-to-human scenarios. Right.

But anyway, Lemley and Casey, to summarize their position, say: okay, increasingly independent robots and AI are coming. They're infiltrating more and more into society, and they will inevitably do bad things. When that happens, the legal system will try to order remedies to make things right when harm has been caused. Our current legal understanding of remedies is based on the assumption of human agents, human agents only, and its rules are not suited to dealing with robot crime or robot offenses. Quote: "As we have shown, failing to recognize those differences could result in significant unintended consequences, inadvertently encouraging the wrong behaviors, or even rendering our most important remedial mechanisms functionally irrelevant." So, to take robot agents into account, we're going to have to examine and rethink how our systems of remedies work. But, and this is a point we've been making already, this could have multiple benefits, because it could also lead to a better understanding of how we apply these remedies to cases dealing exclusively with humans. Quote: "Indeed, one of the most pressing challenges raised by the technology is its tendency to reveal the trade-offs between societal, economic, and legal values that many of us today make without deeply appreciating the downstream consequences." They write that we need a law of remedies for robots, but in the final analysis, remedies for robots may also end up being remedies for all of us.

Now, like I said, this is a very long paper.
We can't do justice to all of the subjects they raise, but to focus on some highlights, I thought one interesting place to look was where they try to get into the definition of what actually makes a robot in the legal sense. Obviously, there's going to be some difficulty here, because think about how differently the term is used and how many different things it's applied to in the world. The authors here cite a professor, Ryan Calo, who in the past had written that there are three important characteristics that define a robot and make it different from any mere machine, like a computer or phone. Calo says these three qualities are embodiment, emergence, and social valence. So, to quote from Calo: "Robotics combines, arguably for the first time, the promiscuity of information with the embodied capacity to do physical harm. Robots display increasingly emergent behavior, permitting the technology to accomplish both useful and unfortunate tasks in unexpected ways." I like that idea of unfortunate tasks. "And robots, more so than any technology in history, feel to us like social actors, a tendency so strong that soldiers sometimes jeopardize themselves to preserve the 'lives' of military robots in the field." And "lives" is in quotes there.

Yeah, you may remember this from the film that came out a few years back, Saving Private Cigarette Robot.

It's quite touching. I mean, it seems absurd, but it does seem to play on our natural biases. I want to talk about a couple of examples from a psychology paper in a second, but we're just so ready to look at machines like humans and treat them as such. It seems almost impossible to avoid. But anyway, to pick up with Lemley and Casey after that Calo quote, they write, quote: "In light of these qualities, Calo argues that robots are best thought of as artificial objects or systems that sense, process, and act upon the world to at least some degree. Thus, a robot, in the strongest, fullest sense of the term, exists in the world as a corporeal object with the capacity to exert itself physically."
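That sense-process-act framing maps neatly onto a program skeleton. Here is a minimal sketch of the idea; the class and method names are my own illustration, not anything from Calo's or Lemley and Casey's papers, and the disembodied example anticipates a point that comes up in a moment:

```python
# A sketch of Calo's sense-process-act framing as a program skeleton.
# All names here are illustrative assumptions.
from abc import ABC, abstractmethod

class Robot(ABC):
    """Anything that senses the world, processes what it senses, and acts back on it."""

    @abstractmethod
    def sense(self) -> dict:
        """Gather observations: camera frames, GPS, incoming messages..."""

    @abstractmethod
    def process(self, observation: dict) -> str:
        """Turn observations into a chosen action; learned, emergent behavior lives here."""

    @abstractmethod
    def act(self, action: str) -> None:
        """Exert the choice back on the world: move a motor, or just send a message."""

    def run_once(self) -> None:
        self.act(self.process(self.sense()))

# Under the looser reading discussed next, embodiment is optional: a purely
# software "robot" fits the same loop.
class RobocallBot(Robot):
    def sense(self) -> dict:
        return {"line": "ringing"}        # stub: who picked up, what they said
    def process(self, observation: dict) -> str:
        return "play_pitch"               # stub: choose the next move in the script
    def act(self, action: str) -> None:
        print(f"executing {action}")      # stub: synthesize speech down the line

RobocallBot().run_once()
```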
Thus, 576 00:32:11,920 --> 00:32:14,600 Speaker 1: a robot, in the strongest, fullest sense of the term, 577 00:32:14,920 --> 00:32:18,240 Speaker 1: exists in the world as a corporeal object with the 578 00:32:18,280 --> 00:32:22,960 Speaker 1: capacity to exert itself physically. Though it's interesting to me 579 00:32:23,200 --> 00:32:26,640 Speaker 1: that even this attempt to give a strict and legally 580 00:32:26,720 --> 00:32:30,600 Speaker 1: useful definition of a robot includes a subjective component. I 581 00:32:30,880 --> 00:32:34,000 Speaker 1: brought this up earlier, the component about human feelings, the 582 00:32:34,240 --> 00:32:38,600 Speaker 1: social valence criterion that Calo cites here. This means they 583 00:32:38,720 --> 00:32:41,960 Speaker 1: feel to us like social actors. Yeah. Like, I was 584 00:32:42,000 --> 00:32:45,480 Speaker 1: wondering in all of this, like, where does a particularly 585 00:32:45,680 --> 00:32:50,120 Speaker 1: malicious robocall fit into the scenario? Say, a robocall 586 00:32:50,240 --> 00:32:53,280 Speaker 1: that is not just about trying to sell you something, 587 00:32:53,320 --> 00:32:55,520 Speaker 1: but it's, like, you know, actively trying to, say, get 588 00:32:55,520 --> 00:32:59,040 Speaker 1: a credit card number out of you for nefarious purposes. Yeah, 589 00:32:59,080 --> 00:33:02,080 Speaker 1: that's a really good point. And along those lines, 590 00:33:02,160 --> 00:33:05,520 Speaker 1: Lemley and Casey argue that actually they don't think the 591 00:33:05,600 --> 00:33:10,080 Speaker 1: embodiment criterion of hardware is necessarily a good one, that 592 00:33:10,160 --> 00:33:13,440 Speaker 1: maybe our concept of a robot should be less limited 593 00:33:13,480 --> 00:33:17,000 Speaker 1: to the essentialist quality of being embodied and more just 594 00:33:17,200 --> 00:33:22,880 Speaker 1: applied to anything that exhibits intelligent behavior. And exactly, things 595 00:33:22,920 --> 00:33:25,760 Speaker 1: like that robocall would be a good example. 596 00:33:25,880 --> 00:33:28,240 Speaker 1: The things we think of as robots probably do; they're 597 00:33:28,280 --> 00:33:31,840 Speaker 1: not just, like, standalone objects. They interact with 598 00:33:31,920 --> 00:33:34,680 Speaker 1: the broader world in some way, but they could be 599 00:33:34,920 --> 00:33:39,080 Speaker 1: entirely software based. Yeah, I guess certainly the Roomba is 600 00:33:39,120 --> 00:33:41,560 Speaker 1: a great example, or any kind of, like, vacuuming robot, 601 00:33:41,640 --> 00:33:44,520 Speaker 1: where it's in your house, or it's 602 00:33:44,520 --> 00:33:46,720 Speaker 1: in a room in your house, and it's interacting 603 00:33:46,720 --> 00:33:50,800 Speaker 1: with your environment, and it's essentially making decisions about how 604 00:33:50,840 --> 00:33:54,040 Speaker 1: best to move around that space. Sure, but if you 605 00:33:54,080 --> 00:33:56,240 Speaker 1: want to take it out of the embodied space, you 606 00:33:56,280 --> 00:33:59,080 Speaker 1: could have the idea of bots on the Internet. There's 607 00:33:59,120 --> 00:34:03,600 Speaker 1: things out there acting autonomously to some extent and, 608 00:34:03,640 --> 00:34:07,320 Speaker 1: you know, executing some behavior, acting almost maliciously.
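A quick aside on that sense-process-act definition: it maps cleanly onto code. Below is a minimal, hypothetical Python sketch, not anything from Calo or from Lemley and Casey; all class and method names are invented for illustration. The point is that the same loop covers an embodied vacuum robot and a purely software bot, which is exactly the distinction being debated here.

    # Hypothetical sketch of the sense-process-act framing; names invented.
    from abc import ABC, abstractmethod

    class Agent(ABC):
        """Anything that senses, processes, and acts on the world to some degree."""

        @abstractmethod
        def sense(self): ...             # gather input from the world

        @abstractmethod
        def process(self, percept): ...  # decide what to do with it

        @abstractmethod
        def act(self, decision): ...     # exert some effect back on the world

        def step(self):
            self.act(self.process(self.sense()))

    class VacuumBot(Agent):
        """Embodied: physical sensors and motors in a real room."""
        def sense(self):
            return {"dirt": True}
        def process(self, percept):
            return "clean" if percept["dirt"] else "wander"
        def act(self, decision):
            print(f"vacuum: {decision}")

    class ReplyBot(Agent):
        """Software-only: same loop, no body; acting means posting messages."""
        def sense(self):
            return {"mentioned": True}
        def process(self, percept):
            return "post_reply" if percept["mentioned"] else "wait"
        def act(self, decision):
            print(f"bot: {decision}")

    for agent in (VacuumBot(), ReplyBot()):
        agent.step()

On this sketch, dropping the embodiment requirement, as Lemley and Casey suggest, just means admitting subclasses whose act step never touches hardware.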
We were 609 00:34:07,480 --> 00:34:11,320 Speaker 1: tempted to call them bots, meaning short for robots, because 610 00:34:11,400 --> 00:34:15,080 Speaker 1: they have some kind of apparent independent agency and they're 611 00:34:15,120 --> 00:34:19,359 Speaker 1: doing something that seems at least halfway intelligent. Right, yeah, yeah, 612 00:34:19,400 --> 00:34:22,359 Speaker 1: and you can easily imagine how they could be, 613 00:34:22,400 --> 00:34:24,160 Speaker 1: and, well, I mean they are used maliciously in 614 00:34:24,160 --> 00:34:27,480 Speaker 1: some cases. But with something like a social media bot 615 00:34:27,520 --> 00:34:30,200 Speaker 1: that responds to certain comments in a particular way, like, 616 00:34:30,239 --> 00:34:32,799 Speaker 1: it's very easy to imagine how that could 617 00:34:32,840 --> 00:34:36,400 Speaker 1: be utilized in a way that would be not only 618 00:34:36,880 --> 00:34:40,919 Speaker 1: annoying but outright harmful, even physically harmful. Oh yeah, 619 00:34:40,920 --> 00:34:43,200 Speaker 1: I mean, think about some of these, say, like, bots 620 00:34:43,239 --> 00:34:47,400 Speaker 1: on social media that try to crowdsource information, like during 621 00:34:47,400 --> 00:34:51,359 Speaker 1: a natural disaster or something like that. You could imagine 622 00:34:51,400 --> 00:34:55,520 Speaker 1: intentionally, maliciously manipulating a bot of this kind to, like, 623 00:34:55,560 --> 00:34:59,120 Speaker 1: have, you know, bad information on it or something. Yeah, yeah, 624 00:34:59,239 --> 00:35:02,880 Speaker 1: or, you know, anything that a troll can do on social media, 625 00:35:03,120 --> 00:35:06,000 Speaker 1: a bot could conceivably do as well. So that just, 626 00:35:06,280 --> 00:35:09,680 Speaker 1: you know, opens up the door, right. But coming back 627 00:35:09,680 --> 00:35:14,240 Speaker 1: to this, so there's this interesting idea that robots feel 628 00:35:14,320 --> 00:35:16,840 Speaker 1: to us like social actors, and that seems to be, 629 00:35:16,880 --> 00:35:20,560 Speaker 1: at least by some people's definitions, a kind of inextricable 630 00:35:21,000 --> 00:35:24,440 Speaker 1: quality of what makes a robot: it feels, 631 00:35:24,600 --> 00:35:27,320 Speaker 1: at least to a small extent, like a person somehow, 632 00:35:27,719 --> 00:35:30,040 Speaker 1: and it reminds me of the psychology paper I was 633 00:35:30,080 --> 00:35:33,640 Speaker 1: looking at just recently on human social interaction with robots, 634 00:35:34,080 --> 00:35:38,120 Speaker 1: that is by Elizabeth Broadbent, called Interactions With Robots: The 635 00:35:38,160 --> 00:35:41,560 Speaker 1: Truths We Reveal About Ourselves, published in the Annual Review 636 00:35:41,560 --> 00:35:44,719 Speaker 1: of Psychology in twenty seventeen. This was a highly 637 00:35:44,840 --> 00:35:46,759 Speaker 1: cited paper, and it seems to be a big 638 00:35:46,800 --> 00:35:50,360 Speaker 1: literature review of a lot of different stuff about 639 00:35:50,480 --> 00:35:54,960 Speaker 1: how humans interact emotionally and socially with robots. And the 640 00:35:55,000 --> 00:35:57,320 Speaker 1: one section I was thinking about was where she reviews 641 00:35:57,360 --> 00:36:01,280 Speaker 1: a bunch of other studies about how we mindlessly apply 642 00:36:01,520 --> 00:36:04,440 Speaker 1: social rules to robots.
So there are a ton of 643 00:36:04,440 --> 00:36:06,640 Speaker 1: different examples, but just to cite a couple of them, 644 00:36:07,320 --> 00:36:10,400 Speaker 1: one she writes up is, quote: After using a computer, 645 00:36:10,600 --> 00:36:14,920 Speaker 1: people evaluate its performance more highly if the same computer 646 00:36:15,080 --> 00:36:19,040 Speaker 1: delivers the rating scale than if another computer delivers the 647 00:36:19,160 --> 00:36:21,800 Speaker 1: rating scale, or if they rate it with pen and paper. 648 00:36:22,960 --> 00:36:25,279 Speaker 1: So, like, if, you know, you get a thing at 649 00:36:25,280 --> 00:36:27,400 Speaker 1: the end of a test that says, like, hey, you know, 650 00:36:27,440 --> 00:36:30,520 Speaker 1: how did you enjoy interacting with this machine? You're 651 00:36:31,040 --> 00:36:33,759 Speaker 1: more likely to give it a higher score if you're 652 00:36:33,760 --> 00:36:35,840 Speaker 1: still sitting at the same machine. Or at least, that 653 00:36:35,920 --> 00:36:40,120 Speaker 1: was what was found by Nass et al., 654 00:36:40,160 --> 00:36:43,840 Speaker 1: and Broadbent writes, quote: This result is similar 655 00:36:43,880 --> 00:36:46,719 Speaker 1: to experimenter bias, in which people try not to 656 00:36:46,800 --> 00:36:52,160 Speaker 1: offend a human researcher. Another example of social behavior is reciprocity. 657 00:36:52,360 --> 00:36:55,760 Speaker 1: We help others who help us. People helped a computer 658 00:36:55,840 --> 00:36:59,359 Speaker 1: with a task for more time and more accurately if 659 00:36:59,360 --> 00:37:02,399 Speaker 1: the computer first helped them with a task than if 660 00:37:02,400 --> 00:37:04,520 Speaker 1: it did not. And this was found by Fogg and 661 00:37:04,640 --> 00:37:09,080 Speaker 1: Nass. I love that idea of people, you know, 662 00:37:09,200 --> 00:37:13,239 Speaker 1: being more reluctant to rate a computer poorly if 663 00:37:13,280 --> 00:37:16,640 Speaker 1: they're still interacting with the same computer. That 664 00:37:16,719 --> 00:37:19,960 Speaker 1: seems perfectly true to me. But another interesting one from 665 00:37:19,960 --> 00:37:23,840 Speaker 1: the summary is, quote: research in psychology has shown that 666 00:37:23,880 --> 00:37:27,360 Speaker 1: the presence of an observer can increase people's honesty, but 667 00:37:27,480 --> 00:37:30,640 Speaker 1: incentives for cheating can reduce honesty, and this was found 668 00:37:30,680 --> 00:37:33,680 Speaker 1: by Covey et al. in nineteen eighty nine. In a 669 00:37:33,800 --> 00:37:37,440 Speaker 1: robot version of this work, participants given incentives to cheat 670 00:37:37,680 --> 00:37:40,960 Speaker 1: were shown to be less honest when alone compared to 671 00:37:41,320 --> 00:37:44,960 Speaker 1: when they were accompanied by either a human or by 672 00:37:44,960 --> 00:37:47,560 Speaker 1: a simple robot, and that was found by Hoffman et 673 00:37:47,560 --> 00:37:51,960 Speaker 1: al. This illustrates that the social presence of robots 674 00:37:52,000 --> 00:37:54,560 Speaker 1: may make people feel as though they're being watched and 675 00:37:54,719 --> 00:37:58,360 Speaker 1: increase their honesty, in an effect similar to that produced 676 00:37:58,400 --> 00:38:01,120 Speaker 1: by the presence of humans. Now, this is interesting, right?
677 00:38:01,200 --> 00:38:03,560 Speaker 1: This also reminds me of various studies that have 678 00:38:03,680 --> 00:38:08,680 Speaker 1: gone into sort of the idea of imagined beings 679 00:38:08,920 --> 00:38:13,160 Speaker 1: or religious beings watching us while we're doing things, right, 680 00:38:13,360 --> 00:38:16,160 Speaker 1: or even just, like, eye imagery, like putting some eyes 681 00:38:16,400 --> 00:38:19,160 Speaker 1: on a wall looking at people while they're, like, 682 00:38:19,200 --> 00:38:21,240 Speaker 1: I don't know, not supposed to steal from the collection 683 00:38:21,280 --> 00:38:23,319 Speaker 1: plate or something like that. I don't know if it's 684 00:38:23,320 --> 00:38:25,719 Speaker 1: to the same extent, but at least in the same 685 00:38:25,719 --> 00:38:29,120 Speaker 1: direction that the presence of another human is. You know, 686 00:38:28,920 --> 00:38:31,040 Speaker 1: you might be a little bit worried that R 687 00:38:31,080 --> 00:38:33,440 Speaker 1: two D two is gonna, you know, judge your moral 688 00:38:33,560 --> 00:38:36,680 Speaker 1: character harshly or tattle on you. I'm not as worried 689 00:38:36,680 --> 00:38:47,799 Speaker 1: about R two, but, um, three PO is a snitch. Coming back 690 00:38:47,880 --> 00:38:50,359 Speaker 1: to Lemley and Casey, so they talked for a long 691 00:38:50,360 --> 00:38:53,839 Speaker 1: time about how robots get their intelligence. They talked about 692 00:38:53,840 --> 00:38:57,360 Speaker 1: the importance of machine learning for the modern generations of 693 00:38:57,560 --> 00:39:01,000 Speaker 1: robots and AI, that it's just not practical to hard 694 00:39:01,160 --> 00:39:03,440 Speaker 1: code AI the way we used to imagine. You know, 695 00:39:03,480 --> 00:39:05,640 Speaker 1: you'd be a programmer and you're just, like, creating a 696 00:39:05,640 --> 00:39:08,719 Speaker 1: lot of strings of if-then statements. Like, you know, 697 00:39:09,120 --> 00:39:12,000 Speaker 1: the kind of intelligence that we expect from a modern 698 00:39:12,040 --> 00:39:15,720 Speaker 1: AI or intelligent robot is too complex for people 699 00:39:15,719 --> 00:39:18,480 Speaker 1: to program in a direct way like that. Instead, 700 00:39:18,520 --> 00:39:21,160 Speaker 1: they've got to be trained on natural data sets through 701 00:39:21,200 --> 00:39:24,319 Speaker 1: machine learning. But of course doing so comes at the 702 00:39:24,400 --> 00:39:29,799 Speaker 1: cost of increasing uncertainty about their future behaviors. Behaviors could 703 00:39:29,800 --> 00:39:34,080 Speaker 1: emerge that a conscientious programmer would never intentionally hard code 704 00:39:34,080 --> 00:39:37,600 Speaker 1: into the system. So that brings us to, like, 705 00:39:37,719 --> 00:39:42,560 Speaker 1: what types of harms could we expect from robots and AI? 706 00:39:42,680 --> 00:39:44,400 Speaker 1: And the authors here come up with what I think 707 00:39:44,440 --> 00:39:47,600 Speaker 1: are some very useful categories, some sort of, like, cubbyholes, 708 00:39:47,640 --> 00:39:51,120 Speaker 1: to slot the different types of AI fears into.
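Before going through those categories, here is a rough, hedged illustration of the hard-coding versus machine-learning point just made. The rules, data, and the trivial nearest-neighbor "learner" below are all invented for this example; real systems are far more complex. The takeaway is that a learned policy's behavior lives entirely in its training examples, so one odd example quietly produces behavior no conscientious programmer would have written.

    # Invented toy example: hand-coded if-then rules vs. behavior learned from data.

    def hard_coded_policy(distance_m: float) -> str:
        # Every behavior is explicitly written down and auditable.
        if distance_m < 1.0:
            return "stop"
        elif distance_m < 5.0:
            return "slow"
        return "go"

    def learned_policy(distance_m: float, examples) -> str:
        # 1-nearest-neighbor "training": behavior comes entirely from the data.
        nearest = min(examples, key=lambda ex: abs(ex[0] - distance_m))
        return nearest[1]

    examples = [(0.5, "stop"), (3.0, "slow"), (10.0, "go"),
                (0.9, "go")]  # one bad label nobody would have hard coded

    for d in (0.8, 4.0, 12.0):
        print(d, hard_coded_policy(d), learned_policy(d, examples))

At a distance of 0.8 meters the hand-coded policy stops, while the learned one says go, purely because of that single bad training example. With that caveat in mind, on to the categories.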
So 709 00:39:51,160 --> 00:39:54,040 Speaker 1: the first kind is what they call unavoidable harms. These 710 00:39:54,040 --> 00:39:56,160 Speaker 1: are probably not the main ones to be worried about, 711 00:39:56,160 --> 00:39:59,120 Speaker 1: but they are worth thinking about. And this is 712 00:39:59,160 --> 00:40:01,919 Speaker 1: just the fact that some dangers are inherent to many 713 00:40:01,960 --> 00:40:05,160 Speaker 1: products and services; we just accept them as the cost 714 00:40:05,239 --> 00:40:07,960 Speaker 1: of having those products and services in the first place. 715 00:40:08,360 --> 00:40:11,480 Speaker 1: So, like, this would just be: a cigarette bot, just by 716 00:40:11,560 --> 00:40:14,640 Speaker 1: virtue of selling cigarettes, is doing harm to people, right? Yes, 717 00:40:14,800 --> 00:40:17,480 Speaker 1: I mean, the fact that you have cigarettes, there is 718 00:40:17,520 --> 00:40:20,320 Speaker 1: some harm coming from that. But there are also ones 719 00:40:20,400 --> 00:40:23,960 Speaker 1: that are more fully integrated into just the way society works, 720 00:40:24,000 --> 00:40:28,360 Speaker 1: like having cars. It is absolutely inevitable that people driving 721 00:40:28,360 --> 00:40:30,719 Speaker 1: cars are going to crash their cars and there will 722 00:40:30,760 --> 00:40:33,120 Speaker 1: be fatalities from that, and you can think of ways 723 00:40:33,120 --> 00:40:36,680 Speaker 1: of reducing it, but there's really not any expectation 724 00:40:37,320 --> 00:40:40,080 Speaker 1: that we can have a country that has car based 725 00:40:40,080 --> 00:40:42,840 Speaker 1: transportation and there will not be any accidents, because there 726 00:40:42,960 --> 00:40:46,239 Speaker 1: will always be things that are not even reducible 727 00:40:46,280 --> 00:40:48,880 Speaker 1: to driver error or to malfunction of the cars, right, 728 00:40:48,920 --> 00:40:51,800 Speaker 1: like a tree falls on the road or something, birds, 729 00:40:51,800 --> 00:40:54,160 Speaker 1: wild animals, anything. So, even though I think there's 730 00:40:54,160 --> 00:40:58,680 Speaker 1: some very convincing arguments to be made that a 731 00:40:58,760 --> 00:41:02,520 Speaker 1: switch to self-driving cars would create a much safer 732 00:41:03,080 --> 00:41:06,439 Speaker 1: travel environment, that it would make roads safer, you're 733 00:41:06,440 --> 00:41:09,920 Speaker 1: not gonna get to absolute zero crashes 734 00:41:10,000 --> 00:41:13,800 Speaker 1: or absolute zero road fatalities, right? I mean, you wouldn't 735 00:41:13,840 --> 00:41:17,480 Speaker 1: even if the driving algorithms were perfect, right, and they're 736 00:41:17,480 --> 00:41:19,439 Speaker 1: probably not going to be perfect. They may well be, 737 00:41:19,520 --> 00:41:21,720 Speaker 1: and probably are going to be, better than the average 738 00:41:21,760 --> 00:41:25,879 Speaker 1: human driver. Yeah, okay, so there's just unavoidable 739 00:41:25,920 --> 00:41:28,880 Speaker 1: harm that comes from using any type of product or service, 740 00:41:29,000 --> 00:41:31,680 Speaker 1: and when you integrate robotics and AI into that product 741 00:41:31,760 --> 00:41:34,799 Speaker 1: or service, those unavoidable harms will just continue. But that's 742 00:41:34,800 --> 00:41:38,640 Speaker 1: something we already deal with. The next category is deliberate 743 00:41:38,760 --> 00:41:42,719 Speaker 1: least cost harms.
This is similar to unavoidable harms, but 744 00:41:42,760 --> 00:41:45,480 Speaker 1: it's in cases where the machine actually is able to 745 00:41:45,680 --> 00:41:49,960 Speaker 1: make a decision with important ramifications, like it can 746 00:41:50,000 --> 00:41:53,360 Speaker 1: make a decision to act in a way that causes harm, 747 00:41:53,560 --> 00:41:56,880 Speaker 1: but is attempting to cause the least harm possible. So 748 00:41:56,920 --> 00:41:59,480 Speaker 1: in a sense, this is forcing robots to do the 749 00:41:59,520 --> 00:42:02,719 Speaker 1: trolley problem. Right, do you switch to the track that 750 00:42:02,840 --> 00:42:05,279 Speaker 1: has one person sitting on the train tracks instead of 751 00:42:05,280 --> 00:42:09,600 Speaker 1: five people? Yeah. Yeah, and this will be another inevitable 752 00:42:09,640 --> 00:42:12,799 Speaker 1: capability of autonomous cars, but it raises all kinds of 753 00:42:12,840 --> 00:42:17,160 Speaker 1: thorny questions. If an autonomous vehicle can avoid a head-on 754 00:42:17,200 --> 00:42:20,760 Speaker 1: collision that will likely kill multiple people by suddenly 755 00:42:20,760 --> 00:42:24,239 Speaker 1: swerving out of the way and hitting one pedestrian, that 756 00:42:24,320 --> 00:42:28,080 Speaker 1: may indeed avoid a greater harm. But that's probably cold 757 00:42:28,080 --> 00:42:31,400 Speaker 1: comfort to the one person who got hit, right? Right. Yeah, 758 00:42:31,440 --> 00:42:34,520 Speaker 1: and then when you have a robot or some sort 759 00:42:34,520 --> 00:42:37,719 Speaker 1: of an AI involved in that decision making, I mean, 760 00:42:38,160 --> 00:42:43,480 Speaker 1: you can just imagine the intensity 761 00:42:43,520 --> 00:42:47,120 Speaker 1: of the arguments and the conversations that would ensue. Right. 762 00:42:47,239 --> 00:42:48,920 Speaker 1: But then the authors raise what I think is a 763 00:42:49,000 --> 00:42:52,040 Speaker 1: very interesting point. They say that this kind of life 764 00:42:52,120 --> 00:42:55,680 Speaker 1: or death trolley problem will probably be the exception rather 765 00:42:55,760 --> 00:42:59,360 Speaker 1: than the rule. Instead, they say, quote: far likelier, 766 00:42:59,440 --> 00:43:04,040 Speaker 1: albeit subtler, scenarios involving least cost harms will 767 00:43:04,080 --> 00:43:08,560 Speaker 1: involve robots that make decisions with seemingly trivial implications at 768 00:43:08,600 --> 00:43:12,840 Speaker 1: an individual level, but which result in non-trivial impacts 769 00:43:13,000 --> 00:43:16,959 Speaker 1: at scale. Self-driving cars, for example, will rarely face 770 00:43:17,040 --> 00:43:20,160 Speaker 1: a stark choice between killing a child or killing two 771 00:43:20,200 --> 00:43:23,719 Speaker 1: elderly people, but thousands of times a day they will 772 00:43:23,760 --> 00:43:27,880 Speaker 1: have to choose precisely where to change lanes, how closely 773 00:43:27,960 --> 00:43:31,280 Speaker 1: to trail another vehicle, when to accelerate on a freeway 774 00:43:31,320 --> 00:43:34,640 Speaker 1: on-ramp, and so forth. Each of these decisions will 775 00:43:34,800 --> 00:43:39,759 Speaker 1: entail some probability of injuring someone.
I guess another thing 776 00:43:39,760 --> 00:43:41,279 Speaker 1: to keep in mind, like with the 777 00:43:41,320 --> 00:43:45,400 Speaker 1: trolley problem generally, when you're dealing with it, there's a 778 00:43:45,440 --> 00:43:47,840 Speaker 1: lot of emphasis on the problem aspect of it, 779 00:43:47,920 --> 00:43:51,560 Speaker 1: you know, like, the trolley problem should be an 780 00:43:51,560 --> 00:43:54,400 Speaker 1: ethical dilemma. It should hurt a bit to 781 00:43:54,440 --> 00:43:57,480 Speaker 1: try and figure out what to do. And the 782 00:43:57,520 --> 00:44:00,759 Speaker 1: idea of the trolley problem being something that is 783 00:44:00,840 --> 00:44:04,000 Speaker 1: encountered and decided upon, like, in a split second, 784 00:44:04,200 --> 00:44:07,799 Speaker 1: by a machine, by an algorithm like that, that 785 00:44:07,880 --> 00:44:10,840 Speaker 1: feels a bit worse to us. You know, 786 00:44:10,920 --> 00:44:14,400 Speaker 1: it feels like, if it's an easy decision, 787 00:44:14,480 --> 00:44:16,520 Speaker 1: even if it's just based purely on math, you know, 788 00:44:16,880 --> 00:44:20,600 Speaker 1: it feels wrong on some level. Oh yeah, yeah, 789 00:44:20,880 --> 00:44:23,759 Speaker 1: so I think you're right. But also the thing they're 790 00:44:23,760 --> 00:44:26,520 Speaker 1: bringing up here is that the trolley problem you're actually 791 00:44:26,560 --> 00:44:29,719 Speaker 1: more often facing is that every single day your 792 00:44:29,719 --> 00:44:32,680 Speaker 1: autonomous car is gonna make, you know, hundreds or thousands 793 00:44:32,680 --> 00:44:36,279 Speaker 1: of trolley problem calls, where on one track it is 794 00:44:36,920 --> 00:44:41,160 Speaker 1: getting to your destination a few seconds faster, and on 795 00:44:41,200 --> 00:44:43,399 Speaker 1: the other track is a one in a million chance 796 00:44:43,440 --> 00:44:46,440 Speaker 1: of killing somebody. Yeah, yeah. And we do make 797 00:44:46,480 --> 00:44:48,680 Speaker 1: these decisions all the time, but we don't focus on 798 00:44:48,719 --> 00:44:50,320 Speaker 1: them. But I think that's part of the issue, 799 00:44:50,360 --> 00:44:53,160 Speaker 1: you know? Exactly. You're like, okay, should I 800 00:44:53,239 --> 00:44:56,320 Speaker 1: take a left on this road? Well, there's a chance 801 00:44:56,360 --> 00:44:59,399 Speaker 1: there's a speeding car just over the edge there, 802 00:44:59,400 --> 00:45:01,440 Speaker 1: and I can't see it, but I'm going to take that 803 00:45:01,520 --> 00:45:03,799 Speaker 1: chance because I want to cut three minutes off my 804 00:45:03,880 --> 00:45:06,640 Speaker 1: drive to work. Yes. This is actually a very 805 00:45:06,640 --> 00:45:09,839 Speaker 1: good point, that we already make these decisions, but 806 00:45:09,920 --> 00:45:14,240 Speaker 1: we just don't think about them in these explicit probability calculations. 807 00:45:14,640 --> 00:45:17,359 Speaker 1: And there may be some consequences to thinking about them 808 00:45:17,360 --> 00:45:19,000 Speaker 1: this way, which is, there could be a weird, like, 809 00:45:19,080 --> 00:45:22,799 Speaker 1: perceived downside just to making these 810 00:45:22,840 --> 00:45:26,480 Speaker 1: kinds of calculations objective and explicit.
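To make the "trolley problem at scale" arithmetic explicit, here is a back-of-the-envelope sketch with entirely invented numbers; nothing here comes from the paper. Each routing choice trades minutes against a microscopic probability of harm, and at fleet scale those probabilities turn into expected injuries per day.

    # Invented numbers: expected-harm arithmetic for everyday routing choices.
    routes = {
        # route: (minutes saved per trip, probability of a serious crash per trip)
        "unprotected_left": (3.0, 1e-6),
        "three_rights":     (0.0, 2e-7),
    }

    trips_per_day = 10_000_000  # a hypothetical nationwide fleet

    for name, (saved_min, p_crash) in routes.items():
        expected_crashes = p_crash * trips_per_day
        print(f"{name}: saves {saved_min} min/trip, "
              f"~{expected_crashes:.0f} expected serious crashes/day fleet-wide")

Whatever policy the car follows is implicitly pricing minutes against these micro-probabilities; writing the trade out like this is exactly the kind of explicitness that, as just noted, feels uncomfortable.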
Yeah, I mean, 811 00:45:26,480 --> 00:45:28,479 Speaker 1: I've run into this with some of the map 812 00:45:28,640 --> 00:45:31,439 Speaker 1: programs that I use to drive, where I want 813 00:45:31,480 --> 00:45:33,440 Speaker 1: to tell it, in some cases, like, give me 814 00:45:33,480 --> 00:45:36,560 Speaker 1: the ability, and maybe they have this now, but there 815 00:45:36,600 --> 00:45:39,720 Speaker 1: was one left turn in particular where I wanted the ability 816 00:45:39,719 --> 00:45:42,839 Speaker 1: to flag this left turn: this is a dangerous left turn 817 00:45:43,280 --> 00:45:46,320 Speaker 1: you have put me in a position to make. Um, 818 00:45:46,360 --> 00:45:48,560 Speaker 1: I might know which left turn you're talking about. 819 00:45:48,600 --> 00:45:51,399 Speaker 1: It's probably in town. It's in town, it's 820 00:45:51,440 --> 00:45:54,600 Speaker 1: near our office, so yeah, I mean, yeah. By the way, 821 00:45:54,640 --> 00:45:57,120 Speaker 1: if you're out there working on programming driving apps, you 822 00:45:57,320 --> 00:46:00,799 Speaker 1: should absolutely include the toggle where you can say, 823 00:46:00,840 --> 00:46:04,080 Speaker 1: no left turns, please. Yes, that is highly useful. 824 00:46:04,360 --> 00:46:06,680 Speaker 1: I'm given to understand this is how one of my aunts 825 00:46:07,000 --> 00:46:09,160 Speaker 1: got around. Like, as they got older and they were 826 00:46:09,239 --> 00:46:11,960 Speaker 1: less adventurous driving, they would only take right turns, and 827 00:46:12,000 --> 00:46:14,080 Speaker 1: they would do all their driving so that no left 828 00:46:14,080 --> 00:46:16,520 Speaker 1: turns were made. I think I have one time read, 829 00:46:16,760 --> 00:46:19,879 Speaker 1: this could be totally wrong, but at least one 830 00:46:19,880 --> 00:46:23,160 Speaker 1: time I remember reading a claim that, like, you know, 831 00:46:23,480 --> 00:46:26,640 Speaker 1: the traffic efficiency would be x percent higher and people 832 00:46:26,640 --> 00:46:29,840 Speaker 1: would spend x number of minutes less time in traffic 833 00:46:29,920 --> 00:46:31,960 Speaker 1: if there were no such thing as left turns, if 834 00:46:32,000 --> 00:46:34,640 Speaker 1: everybody had to get everywhere by only doing, you know, 835 00:46:34,760 --> 00:46:38,320 Speaker 1: right turns to go around the block. Interesting. 836 00:46:38,560 --> 00:46:40,439 Speaker 1: I'm sure there would be some cases where you can't 837 00:46:40,440 --> 00:46:42,480 Speaker 1: do that, but, you know, in a grid city, 838 00:46:42,840 --> 00:46:44,719 Speaker 1: it seems to make a lot of sense. Maybe you get, 839 00:46:44,719 --> 00:46:47,239 Speaker 1: like, one left turn a day, some sort of a 840 00:46:47,520 --> 00:46:51,000 Speaker 1: card system. But like I said, I cannot confirm that. Okay, 841 00:46:51,040 --> 00:46:53,480 Speaker 1: but anyway, the next category of harm they talk about, 842 00:46:53,680 --> 00:46:56,880 Speaker 1: this one is defect driven harms. This one is 843 00:46:57,000 --> 00:46:59,799 Speaker 1: very easy to understand. The robot harms someone because of 844 00:46:59,840 --> 00:47:02,759 Speaker 1: a design flaw or a bug or a mistake, or 845 00:47:02,840 --> 00:47:06,520 Speaker 1: it's just broken. You know, a warehouse loading robot is 846 00:47:06,600 --> 00:47:09,759 Speaker 1: designed to only operate when no humans are nearby it.
847 00:47:10,000 --> 00:47:12,560 Speaker 1: But there's a malfunction with one of its sensors and 848 00:47:12,640 --> 00:47:15,400 Speaker 1: it fails to detect the presence of a human operator 849 00:47:15,960 --> 00:47:17,560 Speaker 1: trying to get, I don't know, a piece of junk 850 00:47:17,600 --> 00:47:19,799 Speaker 1: out of one of its hinges, and 851 00:47:19,880 --> 00:47:23,320 Speaker 1: it moves and kills them. Okay, this is pretty straightforward, 852 00:47:23,320 --> 00:47:26,040 Speaker 1: just, it's broken for some reason. The authors here do 853 00:47:26,120 --> 00:47:28,399 Speaker 1: point out that this gets even more complicated when there 854 00:47:28,480 --> 00:47:32,200 Speaker 1: is a human in the loop, e.g., an autonomous 855 00:47:32,239 --> 00:47:35,640 Speaker 1: car with a human driver who is supposed to intervene 856 00:47:35,760 --> 00:47:37,840 Speaker 1: in the event of an emergency. They talked about one 857 00:47:37,920 --> 00:47:40,800 Speaker 1: case where this happened with, I believe it was 858 00:47:40,840 --> 00:47:45,040 Speaker 1: an Uber autonomous vehicle, where both the machine and the 859 00:47:45,160 --> 00:47:49,120 Speaker 1: human failed, both of them failed to stop a 860 00:47:49,120 --> 00:47:52,719 Speaker 1: collision that hurt someone. Like, what happens here? Yeah, yeah, 861 00:47:52,960 --> 00:47:55,640 Speaker 1: of course, we have very similar cases in just 862 00:47:55,719 --> 00:47:58,719 Speaker 1: purely human affairs, right, when questions are asked like, where 863 00:47:58,760 --> 00:48:01,600 Speaker 1: was this person's supervisor? You know, who 864 00:48:01,600 --> 00:48:04,080 Speaker 1: were the watchers? There should have been some other person, 865 00:48:04,320 --> 00:48:06,439 Speaker 1: there was someone else in the loop here. Why didn't 866 00:48:06,480 --> 00:48:10,240 Speaker 1: they do something to stop this crime from taking place? Right? Okay, 867 00:48:10,239 --> 00:48:13,480 Speaker 1: after that you get into misuse harms. Now, some of 868 00:48:13,480 --> 00:48:16,839 Speaker 1: these are very obvious, very straightforward, like if you program 869 00:48:16,920 --> 00:48:19,759 Speaker 1: a robot directly to go kill someone, or even if 870 00:48:19,800 --> 00:48:23,800 Speaker 1: you program it to wander around at random swinging a machete. 871 00:48:23,840 --> 00:48:26,840 Speaker 1: In these cases, it seems that the human programmer is 872 00:48:26,840 --> 00:48:29,960 Speaker 1: clearly at fault, right? The robot has just become a 873 00:48:30,000 --> 00:48:33,360 Speaker 1: weapon of murder or of reckless endangerment, and the person 874 00:48:33,400 --> 00:48:36,560 Speaker 1: who told it to do that is the person responsible. Yeah. 875 00:48:36,600 --> 00:48:41,240 Speaker 1: Like, if you take an automotive, like, oil change 876 00:48:41,320 --> 00:48:46,840 Speaker 1: robot and you reprogram it to do appendectomies, and 877 00:48:46,880 --> 00:48:49,120 Speaker 1: people die as a result, like, that's misuse. You 878 00:48:49,120 --> 00:48:52,400 Speaker 1: can only blame the oil change robot 879 00:48:52,440 --> 00:48:57,279 Speaker 1: so much, because it was not ultimately designed to perform appendectomies. Right.
880 00:48:57,320 --> 00:48:59,480 Speaker 1: In this case, this is more like the hammer example 881 00:48:59,640 --> 00:49:02,840 Speaker 1: used at the beginning. It's not the robot autonomously 882 00:49:03,000 --> 00:49:05,880 Speaker 1: making the decision to do this; this is somebody 883 00:49:05,960 --> 00:49:10,080 Speaker 1: just using it as a tool of crime. But the 884 00:49:10,080 --> 00:49:13,560 Speaker 1: authors point out that there are cases where, quote, people 885 00:49:13,560 --> 00:49:17,120 Speaker 1: will misuse robots in a manner that is neither negligent 886 00:49:17,200 --> 00:49:21,600 Speaker 1: nor criminal, but nevertheless threatens to harm others. And these 887 00:49:21,640 --> 00:49:25,440 Speaker 1: types of harm are especially difficult to predict and prevent. 888 00:49:26,280 --> 00:49:29,560 Speaker 1: So one example is just, people love to trick robots, 889 00:49:29,600 --> 00:49:32,759 Speaker 1: people like to mess around with robots and AI. 890 00:49:33,120 --> 00:49:36,440 Speaker 1: I would admit to finding this amusing in principle myself; 891 00:49:36,480 --> 00:49:38,279 Speaker 1: we've talked about this in, uh, you know, the 892 00:49:38,280 --> 00:49:40,960 Speaker 1: Sex Machina episodes. But of course there are 893 00:49:41,040 --> 00:49:43,600 Speaker 1: times when it's not so funny, when people take 894 00:49:43,640 --> 00:49:47,239 Speaker 1: it to really sinister places. One example the authors bring 895 00:49:47,360 --> 00:49:50,759 Speaker 1: up here is the horrible saga of Microsoft Tay. Do 896 00:49:50,840 --> 00:49:54,040 Speaker 1: you remember this thing? Oh, this was the, this is 897 00:49:54,040 --> 00:49:56,680 Speaker 1: the robot that was traveling across the country? No, no, 898 00:49:56,680 --> 00:49:59,440 Speaker 1: no, no, no, though I know what you're talking about there. 899 00:50:00,000 --> 00:50:02,279 Speaker 1: Maybe we can come back to that. But Tay was 900 00:50:02,440 --> 00:50:07,640 Speaker 1: a Twitter chat bot created by Microsoft that was supposed 901 00:50:07,680 --> 00:50:11,160 Speaker 1: to learn how to interact on the Internet just by 902 00:50:11,239 --> 00:50:13,920 Speaker 1: learning from conversations it had with real users. So you 903 00:50:13,920 --> 00:50:16,239 Speaker 1: could tweet at Tay and say, hey, how are you doing, 904 00:50:16,440 --> 00:50:18,640 Speaker 1: you know, and you could talk about the weather or whatever. 905 00:50:18,680 --> 00:50:22,880 Speaker 1: But of course, who ended up engaging and training 906 00:50:22,920 --> 00:50:25,600 Speaker 1: this AI to speak? It was, like, the worst trolls 907 00:50:25,600 --> 00:50:28,239 Speaker 1: on the Internet. So within a matter of hours, this 908 00:50:28,320 --> 00:50:31,919 Speaker 1: brand new chat bot had been transformed from, 909 00:50:32,000 --> 00:50:34,439 Speaker 1: you know, an unformed lump of clay into 910 00:50:34,520 --> 00:50:39,400 Speaker 1: a pornographic Nazi. Yes, I do remember this now.
And 911 00:50:39,480 --> 00:50:41,760 Speaker 1: this kind of just gets you thinking about the ways 912 00:50:41,840 --> 00:50:45,040 Speaker 1: that people will be able to misuse robots 913 00:50:45,080 --> 00:50:49,680 Speaker 1: in ways that guide their behavior in extremely pernicious directions, 914 00:50:49,800 --> 00:50:54,560 Speaker 1: sometimes without the people guiding this misuse necessarily committing any 915 00:50:54,600 --> 00:50:58,439 Speaker 1: kind of identifiable crime. Like, people are going to look 916 00:50:58,480 --> 00:51:01,359 Speaker 1: for exploits, they're going to look for ways, they're gonna 917 00:51:01,360 --> 00:51:03,200 Speaker 1: look for cracks in the system. It's like 918 00:51:03,480 --> 00:51:05,799 Speaker 1: with any kind of, like, video game system. You know, 919 00:51:05,800 --> 00:51:07,399 Speaker 1: people are just gonna see what they can get away 920 00:51:07,400 --> 00:51:10,240 Speaker 1: with and just engage in that kind of action, 921 00:51:10,680 --> 00:51:12,920 Speaker 1: sometimes just for the fun of it, right. And sometimes 922 00:51:12,960 --> 00:51:17,280 Speaker 1: that's harmless, but sometimes that's really awful. Yeah. Okay, the next 923 00:51:17,480 --> 00:51:21,360 Speaker 1: category is unforeseen harms. And here's where we start getting 924 00:51:21,360 --> 00:51:25,240 Speaker 1: into the really interesting and really difficult cases, 925 00:51:25,760 --> 00:51:29,840 Speaker 1: types of harm that are not unavoidable, not a product 926 00:51:29,840 --> 00:51:35,239 Speaker 1: of defects or misuse, but are still not predicted by creators. 927 00:51:35,239 --> 00:51:37,359 Speaker 1: And so the authors talk about how, in a way, 928 00:51:37,560 --> 00:51:42,200 Speaker 1: unpredictability is what makes AI potentially useful, right? Like, it 929 00:51:42,280 --> 00:51:46,719 Speaker 1: can potentially arrive at solutions that humans wouldn't have predicted, 930 00:51:47,200 --> 00:51:50,280 Speaker 1: but sometimes it does so in ways that really miss 931 00:51:50,360 --> 00:51:53,240 Speaker 1: the boat and could be extremely harmful if they were 932 00:51:53,280 --> 00:51:57,000 Speaker 1: embodied in action in the real world. Similar to 933 00:51:57,040 --> 00:51:59,640 Speaker 1: the drone example from The Circle that we talked about 934 00:51:59,680 --> 00:52:03,480 Speaker 1: at the beginning. But they cite another fantastic example here 935 00:52:03,520 --> 00:52:05,759 Speaker 1: that's kind of chilling. So I'm just going to read 936 00:52:05,800 --> 00:52:08,840 Speaker 1: from Lemley and Casey here. In the nineteen nineties, a 937 00:52:08,960 --> 00:52:13,440 Speaker 1: pioneering multi-institutional study sought to use machine learning techniques 938 00:52:13,480 --> 00:52:19,000 Speaker 1: to predict health related risks prior to hospitalization. After ingesting 939 00:52:19,000 --> 00:52:23,160 Speaker 1: an enormous quantity of data covering patients with pneumonia, the 940 00:52:23,239 --> 00:52:29,160 Speaker 1: system learned the rule HasAsthma(x) → LowerRisk(x). 941 00:52:29,680 --> 00:52:33,480 Speaker 1: The colloquial translation is: patients with pneumonia who have a 942 00:52:33,600 --> 00:52:36,839 Speaker 1: history of asthma have a lower risk of dying from 943 00:52:36,880 --> 00:52:41,480 Speaker 1: pneumonia than the general population. The machine-derived rule was curious, 944 00:52:41,520 --> 00:52:44,600 Speaker 1: to say the least.
Far from being protective, asthma can 945 00:52:44,680 --> 00:52:50,920 Speaker 1: seriously complicate pulmonary illnesses, including pneumonia. Perplexed by this counterintuitive result, 946 00:52:51,000 --> 00:52:54,360 Speaker 1: the researchers dug deeper, and what they found was troubling. 947 00:52:54,960 --> 00:52:58,320 Speaker 1: They discovered that, quote: patients with a history of asthma 948 00:52:58,360 --> 00:53:01,799 Speaker 1: who presented with pneumonia usually were admitted not only to 949 00:53:01,840 --> 00:53:05,000 Speaker 1: the hospital, but directly to the ICU, the 950 00:53:05,040 --> 00:53:08,800 Speaker 1: intensive care unit. Once in the ICU, asthmatic 951 00:53:08,800 --> 00:53:12,800 Speaker 1: pneumonia patients went on to receive more aggressive care, thereby 952 00:53:12,960 --> 00:53:18,080 Speaker 1: raising their survival rates compared to the general population. The rule, 953 00:53:18,160 --> 00:53:21,480 Speaker 1: in other words, reflected a genuine pattern in the data, 954 00:53:21,880 --> 00:53:26,640 Speaker 1: but the machine had confused correlation with causation, quote: incorrectly 955 00:53:26,760 --> 00:53:30,520 Speaker 1: learning that asthma lowers risk when, in fact, asthmatics have 956 00:53:30,719 --> 00:53:34,080 Speaker 1: much higher risk. It seems like we've got another wormhole here. 957 00:53:35,360 --> 00:53:38,759 Speaker 1: And here the authors introduce the idea of a 958 00:53:38,800 --> 00:53:43,919 Speaker 1: curve of outcomes that they call a leptokurtic curve. That's 959 00:53:43,920 --> 00:53:46,400 Speaker 1: a strange term, but basically what that means is, if 960 00:53:46,520 --> 00:53:50,120 Speaker 1: you're charting what types of outcomes 961 00:53:50,120 --> 00:53:54,040 Speaker 1: you expect from a traditional system, like just, you know, 962 00:53:54,120 --> 00:53:59,360 Speaker 1: humans looking at data, versus a complex automated system, 963 00:53:59,800 --> 00:54:02,760 Speaker 1: the tails of the graph 964 00:54:03,000 --> 00:54:06,200 Speaker 1: with the complex automated system will tend to be fatter, 965 00:54:06,320 --> 00:54:09,600 Speaker 1: meaning you get more extreme events in the positive and 966 00:54:09,640 --> 00:54:13,279 Speaker 1: negative space, rather than, you know, a sort of 967 00:54:13,360 --> 00:54:17,600 Speaker 1: rounder clustering of events in the, you know, normal operation space, 968 00:54:17,719 --> 00:54:21,880 Speaker 1: if that makes any sense. So, these kinds of unforeseen 969 00:54:21,960 --> 00:54:25,120 Speaker 1: harms are some of the most worrisome types of things 970 00:54:25,160 --> 00:54:27,480 Speaker 1: to expect coming out of robots and AI. But then 971 00:54:27,520 --> 00:54:30,680 Speaker 1: the other one would be systemic harms, and this is 972 00:54:30,719 --> 00:54:34,040 Speaker 1: the last category of harms they talk about. 973 00:54:34,080 --> 00:54:37,200 Speaker 1: The authors write, quote: People have long assumed that robots 974 00:54:37,239 --> 00:54:41,640 Speaker 1: are inherently neutral and objective, given that robots simply intake 975 00:54:41,719 --> 00:54:46,080 Speaker 1: data and systematically output results. But they are actually neither. 976 00:54:46,480 --> 00:54:49,400 Speaker 1: Robots are only as neutral as the data they're fed, 977 00:54:49,560 --> 00:54:52,640 Speaker 1: and only as objective as the design choices of those 978 00:54:52,680 --> 00:54:56,960 Speaker 1: who create them. When either bias or subjectivity infiltrates a 979 00:54:57,080 --> 00:55:01,200 Speaker 1: system's inputs or design choices, it is inevitably reflected 980 00:55:01,239 --> 00:55:04,080 Speaker 1: in the system's outputs.
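Returning briefly to the asthma rule quoted above: it is a textbook confounding story, and a toy simulation shows how a learner picks it up. All the probabilities below are made up for illustration; the structure is just that asthma raises the true risk but also routes patients into aggressive ICU care, which lowers it even more, so raw observational data makes asthma look protective.

    # Toy simulation (invented probabilities) of the pneumonia/asthma confound.
    import random

    random.seed(0)
    deaths = {True: 0, False: 0}
    counts = {True: 0, False: 0}

    for _ in range(100_000):
        asthma = random.random() < 0.15
        icu = asthma or random.random() < 0.10      # asthmatics go straight to the ICU
        base_risk = 0.10 + (0.05 if asthma else 0)  # asthma truly raises risk
        risk = base_risk * (0.3 if icu else 1.0)    # aggressive care cuts risk sharply
        counts[asthma] += 1
        deaths[asthma] += random.random() < risk

    for has_asthma in (True, False):
        print(f"asthma={has_asthma}: observed mortality "
              f"{deaths[has_asthma] / counts[has_asthma]:.3f}")

The asthmatic group comes out with lower observed mortality, so a model fit to this data would happily learn HasAsthma(x) → LowerRisk(x), because the treatment pathway that actually drives the outcome is invisible in the records.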
This is your classic garbage in, 981 00:55:04,160 --> 00:55:08,600 Speaker 1: garbage out problem, right. They go on: Accordingly, those responsible 982 00:55:08,640 --> 00:55:12,840 Speaker 1: for overseeing the deployment of robots must anticipate the possibility 983 00:55:12,880 --> 00:55:17,280 Speaker 1: that algorithmically biased applications will cause harms of this systemic 984 00:55:17,360 --> 00:55:21,279 Speaker 1: nature to third parties. So an example that's much 985 00:55:21,320 --> 00:55:24,839 Speaker 1: discussed in this would be an AI trained to make 986 00:55:24,880 --> 00:55:29,800 Speaker 1: decisions about granting loans by studying patterns of which loan 987 00:55:29,920 --> 00:55:33,680 Speaker 1: applicants got their loans granted in the past. An 988 00:55:33,880 --> 00:55:36,440 Speaker 1: AI like this could end up manifesting some type of 989 00:55:36,440 --> 00:55:39,439 Speaker 1: bias that hurts people, like a racial bias in its 990 00:55:39,440 --> 00:55:43,359 Speaker 1: loan assessments, because there was already a bias in the 991 00:55:43,440 --> 00:55:46,520 Speaker 1: real world data set that it was trained on. So, 992 00:55:46,560 --> 00:55:49,200 Speaker 1: in other words, AI that is trained on data from 993 00:55:49,239 --> 00:55:53,120 Speaker 1: the real world, unless it is explicitly told 994 00:55:53,160 --> 00:55:55,680 Speaker 1: not to do this, will tend to reproduce and 995 00:55:55,760 --> 00:56:01,040 Speaker 1: perpetuate any injustices, any inequalities that already exist. And the 996 00:56:01,040 --> 00:56:05,720 Speaker 1: authors here give an example that is based on algorithmically 997 00:56:05,840 --> 00:56:09,040 Speaker 1: derived insurance premiums, and I think they're talking about auto 998 00:56:09,080 --> 00:56:13,200 Speaker 1: insurance. Quote: A recent study by Consumer Reports found that 999 00:56:13,239 --> 00:56:18,120 Speaker 1: contemporary premiums depended less on driving habits and increasingly on 1000 00:56:18,239 --> 00:56:24,280 Speaker 1: socioeconomic factors, including an individual's credit score. After analyzing two 1001 00:56:24,360 --> 00:56:28,880 Speaker 1: billion car insurance price quotes across approximately seven hundred companies, 1002 00:56:29,239 --> 00:56:32,960 Speaker 1: the study found that credit scores factored into insurance algorithms 1003 00:56:33,000 --> 00:56:37,000 Speaker 1: so heavily that perfect drivers with low credit scores often 1004 00:56:37,040 --> 00:56:41,680 Speaker 1: paid substantially more than terrible drivers with high scores. The 1005 00:56:41,719 --> 00:56:45,560 Speaker 1: study's findings raised widespread concerns that AI systems used to 1006 00:56:45,600 --> 00:56:49,520 Speaker 1: generate these quotes could create negative feedback loops that are 1007 00:56:49,520 --> 00:56:53,799 Speaker 1: hard to break.
According to one expert, quote: higher insurance 1008 00:56:53,840 --> 00:56:57,200 Speaker 1: prices for low-income people can translate to higher debt 1009 00:56:57,560 --> 00:57:01,200 Speaker 1: and plummeting credit scores, which can mean worse job prospects, 1010 00:57:01,239 --> 00:57:04,440 Speaker 1: which allows debt to pile up, credit scores to sink lower, 1011 00:57:04,520 --> 00:57:08,799 Speaker 1: and insurance rates to increase in a vicious cycle. So, 1012 00:57:08,840 --> 00:57:11,000 Speaker 1: this is kind of a nightmare scenario, right? Like, an 1013 00:57:11,000 --> 00:57:15,600 Speaker 1: AI that is too powerful and not explicitly protected against 1014 00:57:15,640 --> 00:57:18,960 Speaker 1: acquiring these types of biases could create these kinds of 1015 00:57:19,080 --> 00:57:23,960 Speaker 1: computer-enforced prisons in reality, like a machine code for 1016 00:57:24,120 --> 00:57:27,560 Speaker 1: perpetuating whatever state the 1017 00:57:27,600 --> 00:57:30,800 Speaker 1: world was in when the AI was first deployed, and 1018 00:57:30,840 --> 00:57:34,920 Speaker 1: then just entrenching it further and further. Yeah. And that 1019 00:57:35,000 --> 00:57:37,360 Speaker 1: kind of thing is especially scary because, like, if there's 1020 00:57:37,360 --> 00:57:40,200 Speaker 1: a human making the decision, you can call 1021 00:57:40,280 --> 00:57:42,880 Speaker 1: the human up to a witness stand or ask them, like, hey, 1022 00:57:42,880 --> 00:57:45,560 Speaker 1: why did you make the decision this way? But if 1023 00:57:45,600 --> 00:57:48,000 Speaker 1: it's an AI doing it, you could say, like, hey, 1024 00:57:48,000 --> 00:57:50,400 Speaker 1: why is it, why are we getting this outcome 1025 00:57:50,480 --> 00:57:53,320 Speaker 1: that's, you know, creating a sort of, like, cyclical prison 1026 00:57:53,400 --> 00:57:55,680 Speaker 1: out of reality? And they can just say, hey, you 1027 00:57:55,680 --> 00:57:58,080 Speaker 1: know, it's the machine. The machine, you know, it 1028 00:57:58,360 --> 00:58:00,959 Speaker 1: knows what it's doing. Yeah, yes, the machine, it says, 1029 00:58:01,000 --> 00:58:04,560 Speaker 1: I learned it from watching you, Dad, and you have 1030 00:58:04,600 --> 00:58:07,280 Speaker 1: that moment of shame. So I think these different categories 1031 00:58:07,360 --> 00:58:09,440 Speaker 1: that they bring up are really important for 1032 00:58:09,480 --> 00:58:13,200 Speaker 1: helping us kind of sort our ideas into recognizable 1033 00:58:13,280 --> 00:58:16,000 Speaker 1: types for ways that AI and robots could go 1034 00:58:16,040 --> 00:58:18,320 Speaker 1: wrong and could potentially cause harm that you would seek 1035 00:58:18,400 --> 00:58:22,240 Speaker 1: legal remedy for. And also they help identify the spaces 1036 00:58:22,360 --> 00:58:24,760 Speaker 1: where there's the most worry. I mean, for me, I 1037 00:58:24,760 --> 00:58:27,920 Speaker 1: think that would be, like, those last two cases, right? 1038 00:58:27,960 --> 00:58:31,439 Speaker 1: The unforeseen problems and the systemic problems are the ones 1039 00:58:31,800 --> 00:58:34,920 Speaker 1: where there's the most real danger, I think, and the 1040 00:58:34,920 --> 00:58:38,080 Speaker 1: most difficulty in trying to figure out how to solve it.
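One way to see why that systemic category is so hard: the premium-debt-credit-score cycle the expert describes can be caricatured in a few lines of simulation. The update rules and numbers below are pure invention, not anything from Consumer Reports or the paper; the point is only that locally defensible rules can compose into a self-reinforcing loop.

    # Caricature (invented update rules) of the premium -> debt -> credit-score loop.
    credit_score = 600.0
    debt = 1_000.0

    for year in range(1, 6):
        premium = 500 + max(0.0, 700 - credit_score) * 10  # worse score, higher premium
        debt += premium * 0.5                              # part of the premium goes unpaid
        credit_score -= debt * 0.01                        # rising debt drags the score down
        print(f"year {year}: score {credit_score:5.0f}, "
              f"premium ${premium:,.0f}, debt ${debt:,.0f}")

Each rule on its own sounds reasonable ("riskier customers pay more"), but run together, the system manufactures the very risk it claims to be measuring, which is the vicious cycle the hosts are reacting to.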
Yeah, 1041 00:58:38,120 --> 00:58:41,840 Speaker 1: because we kind of, you know, train ourselves, 1042 00:58:42,240 --> 00:58:45,240 Speaker 1: to a certain extent, and sort of culturally focus on 1043 00:58:45,400 --> 00:58:51,000 Speaker 1: the Skynet problems, right, the really obvious 1044 00:58:51,280 --> 00:58:54,160 Speaker 1: situations where the robot car veers off the road in 1045 00:58:54,160 --> 00:58:58,080 Speaker 1: a dangerous way. But the situations where it is just 1046 00:58:58,520 --> 00:59:03,680 Speaker 1: perpetuating what we're already doing, where it's making choices in 1047 00:59:03,800 --> 00:59:06,800 Speaker 1: getting from point A to point B that don't violate 1048 00:59:06,840 --> 00:59:09,920 Speaker 1: anything we told it, but just are an uninventive and 1049 00:59:09,960 --> 00:59:13,440 Speaker 1: even harmful way of doing it, yeah, that's 1050 00:59:13,480 --> 00:59:16,360 Speaker 1: harder to deal with. That's a type of misbehavior 1051 00:59:16,720 --> 00:59:20,280 Speaker 1: that you can't solve by just having Dan O'Herlihy 1052 00:59:20,360 --> 00:59:27,160 Speaker 1: stand up and bellow, behave yourselves. Exactly. Yeah, 1053 00:59:27,200 --> 00:59:29,320 Speaker 1: I mean, I can't remember if that even worked. I 1054 00:59:29,360 --> 00:59:31,600 Speaker 1: just remember that was one of my favorite moments in, 1055 00:59:31,840 --> 00:59:34,040 Speaker 1: that was RoboCop two, right? Was it? Yeah? Well, I 1056 00:59:34,040 --> 00:59:36,520 Speaker 1: mean, in RoboCop one, I think we're already dealing with this 1057 00:59:36,560 --> 00:59:40,360 Speaker 1: problem of, like, the sort of weird dynamics of 1058 00:59:40,400 --> 00:59:43,760 Speaker 1: machine culpability when ED-209, like, shoots that 1059 00:59:43,800 --> 00:59:47,120 Speaker 1: guy five hundred times in the boardroom during the demonstration, 1060 00:59:47,560 --> 00:59:50,160 Speaker 1: and then Dan O'Herlihy's response to it is to 1061 00:59:50,240 --> 00:59:53,080 Speaker 1: turn to Ronny Cox and say, I'm very disappointed, Dick. 1062 00:59:55,800 --> 00:59:58,360 Speaker 1: But anyway, well, I guess we're running kind of long, 1063 00:59:58,400 --> 01:00:00,560 Speaker 1: so maybe we should call part one there. But we 1064 01:00:00,600 --> 01:00:05,160 Speaker 1: will resume this discussion about robot justice and robot 1065 01:00:05,240 --> 01:00:08,240 Speaker 1: punishment in the next episode. That's right, we'll be back 1066 01:00:08,440 --> 01:00:11,640 Speaker 1: with more of this discussion. In the meantime, if you 1067 01:00:11,640 --> 01:00:13,720 Speaker 1: would like to check out past episodes of Stuff to 1068 01:00:13,720 --> 01:00:16,800 Speaker 1: Blow Your Mind, they're definitely worth checking out, 1069 01:00:16,840 --> 01:00:19,160 Speaker 1: because we have lots of past episodes that deal with 1070 01:00:19,280 --> 01:00:22,080 Speaker 1: robots and AI. We have lots of episodes where we 1071 01:00:22,120 --> 01:00:25,880 Speaker 1: make RoboCop references, so they're all there; go 1072 01:00:25,920 --> 01:00:27,640 Speaker 1: back and check them out. You can find our podcast 1073 01:00:27,680 --> 01:00:29,880 Speaker 1: wherever you get your podcasts. Just look for the Stuff 1074 01:00:29,920 --> 01:00:32,800 Speaker 1: to Blow Your Mind podcast feed.
In that feed, 1075 01:00:32,840 --> 01:00:35,640 Speaker 1: we put out core episodes of the show on Tuesdays 1076 01:00:35,640 --> 01:00:39,240 Speaker 1: and Thursdays. Mondays, we have a little listener mail. Wednesdays, 1077 01:00:39,280 --> 01:00:42,840 Speaker 1: that's when we do The Artifact, a shorty, usually, 1078 01:00:43,160 --> 01:00:45,800 Speaker 1: and then on Fridays we do Weird House Cinema. That's 1079 01:00:45,800 --> 01:00:48,160 Speaker 1: our chance to sort of set most of the science 1080 01:00:48,200 --> 01:00:52,040 Speaker 1: aside and just focus on the films about rampaging robots. 1081 01:00:52,800 --> 01:00:55,560 Speaker 1: Huge thanks, as always, to our excellent audio producer 1082 01:00:55,640 --> 01:00:57,840 Speaker 1: Seth Nicholas Johnson. If you would like to get in 1083 01:00:57,960 --> 01:01:00,480 Speaker 1: touch with us with feedback on this episode or any other, 1084 01:01:00,720 --> 01:01:02,760 Speaker 1: to suggest a topic for the future, or just to 1085 01:01:02,800 --> 01:01:05,480 Speaker 1: say hello, you can email us at contact at stuff 1086 01:01:05,520 --> 01:01:15,200 Speaker 1: to Blow Your Mind dot com. Stuff to Blow Your 1087 01:01:15,200 --> 01:01:18,120 Speaker 1: Mind is a production of iHeartRadio. For more podcasts 1088 01:01:18,160 --> 01:01:20,240 Speaker 1: from iHeartRadio, visit the iHeartRadio app, 1089 01:01:20,400 --> 01:01:32,280 Speaker 1: Apple Podcasts, or wherever you're listening to your favorite shows.