1 00:00:00,160 --> 00:00:05,240 Speaker 1: My welcome to Stuff to Blow Your Mind, the production 2 00:00:05,240 --> 00:00:13,960 Speaker 1: of My Heart Radio. Hey you welcome to Stuff to 3 00:00:13,960 --> 00:00:16,680 Speaker 1: Blow your Mind. My name is Robert Lamb and I'm 4 00:00:16,760 --> 00:00:19,640 Speaker 1: Joe McCormick. And right before we started recording today, we 5 00:00:19,640 --> 00:00:22,480 Speaker 1: were just talking about that iconic scene and returned to 6 00:00:22,560 --> 00:00:26,680 Speaker 1: the Jedi where the droids are sent to the droid 7 00:00:26,840 --> 00:00:28,920 Speaker 1: torture Chamber. Do you remember there? I guess it's not 8 00:00:29,000 --> 00:00:31,360 Speaker 1: just a droid torture chamber. It's sort of like the uh, 9 00:00:32,440 --> 00:00:36,080 Speaker 1: the droid onboarding center right where the you know, R 10 00:00:36,159 --> 00:00:38,320 Speaker 1: two D two and C three PO have been given 11 00:00:38,320 --> 00:00:40,519 Speaker 1: as gifts to job of the Hut and they go 12 00:00:40,640 --> 00:00:43,400 Speaker 1: meet their new like droid boss and he's like, yeah, 13 00:00:43,479 --> 00:00:45,800 Speaker 1: you're a feisty little one, and he's signing them in, 14 00:00:45,920 --> 00:00:48,760 Speaker 1: but uh, he sees that R two D two is 15 00:00:48,800 --> 00:00:51,880 Speaker 1: a is a bad robot who needs a discipline, and 16 00:00:52,120 --> 00:00:54,560 Speaker 1: R two D two is confronted with these images of 17 00:00:54,640 --> 00:00:58,880 Speaker 1: robots being punished with various corporal punishments, like one is 18 00:00:58,920 --> 00:01:02,000 Speaker 1: getting stretched on a robot rack and another one is 19 00:01:02,040 --> 00:01:05,720 Speaker 1: getting its feet burned. Yes, this is a this is 20 00:01:05,760 --> 00:01:09,399 Speaker 1: a great scene one that that that definitely burns its 21 00:01:09,400 --> 00:01:11,840 Speaker 1: way into your your brain as a as a young viewer, 22 00:01:11,920 --> 00:01:13,600 Speaker 1: and maybe you don't think about it that much for 23 00:01:13,640 --> 00:01:16,160 Speaker 1: a long time, but uh, it's it's still in there. 24 00:01:16,240 --> 00:01:19,720 Speaker 1: It's it takes place in the bowels of Joba's palace 25 00:01:20,080 --> 00:01:23,960 Speaker 1: on tattooing, and it's um, yeah, it's like droid intake 26 00:01:24,040 --> 00:01:27,520 Speaker 1: but also droid corrections. It's there are there are a 27 00:01:27,600 --> 00:01:30,440 Speaker 1: number of different departments that I think are converging here 28 00:01:31,160 --> 00:01:34,920 Speaker 1: and and it it ultimately kind of raises some interesting 29 00:01:35,000 --> 00:01:40,240 Speaker 1: questions about um about ethics and punishment and in crime, 30 00:01:40,319 --> 00:01:44,200 Speaker 1: and certainly as it relates to two robots. Uh. Of course, 31 00:01:44,240 --> 00:01:46,440 Speaker 1: one thing, this important distress here is like none of 32 00:01:46,480 --> 00:01:49,560 Speaker 1: this was really intended in these scenes. This was about 33 00:01:49,840 --> 00:01:53,880 Speaker 1: having droids doing things that humans would be doing to 34 00:01:53,920 --> 00:01:57,480 Speaker 1: each other in other pieces of cinema, certainly things like 35 00:01:57,520 --> 00:02:01,040 Speaker 1: old pirate movies or old and bad movies or what 36 00:02:01,120 --> 00:02:03,080 Speaker 1: have you. I mean, that's kind of that's kind of 37 00:02:03,080 --> 00:02:06,600 Speaker 1: Star Wars in a nutshell, right. Uh. This whole portion 38 00:02:06,680 --> 00:02:09,680 Speaker 1: of Return of the Jedi is essentially a big pirate movie, 39 00:02:09,680 --> 00:02:13,679 Speaker 1: a big swashbuckler set in an alien location. Oh yeah, 40 00:02:13,720 --> 00:02:16,399 Speaker 1: the job of the Hut is a pirate captain yeah, 41 00:02:16,440 --> 00:02:20,320 Speaker 1: but something interesting occurs when you replace the humans in 42 00:02:20,360 --> 00:02:25,320 Speaker 1: these these tropic scenes with machines. Uh, and then you 43 00:02:25,360 --> 00:02:27,119 Speaker 1: think about it, you know, you think about, well, why 44 00:02:27,200 --> 00:02:29,680 Speaker 1: is that robot torturing the other? Like as if it 45 00:02:29,720 --> 00:02:33,040 Speaker 1: makes perfect sense if it's humans doing it. But then 46 00:02:33,400 --> 00:02:35,919 Speaker 1: the things that we create in our image when they're 47 00:02:35,919 --> 00:02:38,720 Speaker 1: doing it, Suddenly we start seeing the flaws in our reasoning. 48 00:02:38,760 --> 00:02:41,959 Speaker 1: Suddenly we start questioning, well, how does this whole system 49 00:02:42,040 --> 00:02:46,560 Speaker 1: supposed to work? And maybe this whole system doesn't work. Well. Yeah, 50 00:02:46,560 --> 00:02:49,280 Speaker 1: there are multiple levels of absurdities in the scene. One 51 00:02:49,360 --> 00:02:52,000 Speaker 1: is the idea that this robot is just sort of 52 00:02:52,000 --> 00:02:54,120 Speaker 1: like coolly telling are two that he is going to 53 00:02:54,200 --> 00:02:57,120 Speaker 1: learn some discipline. But then the image that accompanies that 54 00:02:57,280 --> 00:03:00,400 Speaker 1: is like, clearly just like extreme robot tor sure, like 55 00:03:00,440 --> 00:03:03,240 Speaker 1: it's something way beyond what would have to do with 56 00:03:03,520 --> 00:03:06,160 Speaker 1: with discipline in the real world. But then the other 57 00:03:06,240 --> 00:03:08,959 Speaker 1: level of absurdity is that it's robots in the scene, 58 00:03:09,400 --> 00:03:12,280 Speaker 1: but coming off of the issue of just like barbaric 59 00:03:12,360 --> 00:03:16,840 Speaker 1: pirate torture directly and more to the broader question of 60 00:03:17,040 --> 00:03:21,520 Speaker 1: robots and discipline and punishment. Uh, this is something that 61 00:03:21,560 --> 00:03:24,200 Speaker 1: we actually wanted to talk about today because the issue 62 00:03:24,280 --> 00:03:28,080 Speaker 1: of robot moral and legal agency is something I've been 63 00:03:28,160 --> 00:03:30,680 Speaker 1: interested in for a long time. I've talked about it, 64 00:03:30,680 --> 00:03:32,320 Speaker 1: It's come up on the show in the past and 65 00:03:32,440 --> 00:03:36,040 Speaker 1: in briefer ways, um And today I wanted to come 66 00:03:36,040 --> 00:03:38,760 Speaker 1: back and devote a full episode to the subject. I 67 00:03:38,760 --> 00:03:40,440 Speaker 1: guess actually we're gonna be talking about this for a 68 00:03:40,480 --> 00:03:45,680 Speaker 1: couple of episodes now. The question of as machines AI 69 00:03:45,800 --> 00:03:50,040 Speaker 1: robots become more independent and act more like agents, more 70 00:03:50,120 --> 00:03:54,360 Speaker 1: like humans do, how are we to understand their moral 71 00:03:54,520 --> 00:03:58,480 Speaker 1: and legal culpability when they do something that harms people? 72 00:03:59,000 --> 00:04:03,120 Speaker 1: And is there such a thing as robot punishment robot discipline? 73 00:04:03,160 --> 00:04:06,520 Speaker 1: Do these concepts reflect anything that's achievable in the real 74 00:04:06,560 --> 00:04:09,480 Speaker 1: world and practical and if so, how would any of 75 00:04:09,520 --> 00:04:11,760 Speaker 1: this work? Yeah? I think one of the most interesting 76 00:04:11,800 --> 00:04:14,480 Speaker 1: things about this topic is that it does force us 77 00:04:14,480 --> 00:04:17,520 Speaker 1: to force a face off between what robots and AI 78 00:04:17,680 --> 00:04:21,599 Speaker 1: actually are or will be, and how we think about them, indeed, 79 00:04:21,640 --> 00:04:26,560 Speaker 1: how we anthropomorphize them. Um And perhaps it might be 80 00:04:26,600 --> 00:04:28,760 Speaker 1: helpful to take a step back and think about something 81 00:04:29,240 --> 00:04:32,240 Speaker 1: far less advanced as a robot and think something I 82 00:04:32,320 --> 00:04:36,799 Speaker 1: think about something more like a hammer Okay, so everyone's 83 00:04:36,839 --> 00:04:38,960 Speaker 1: heard the old adage that it's a poor carpenter who 84 00:04:38,960 --> 00:04:41,960 Speaker 1: blames their tools, right, But of course we do this 85 00:04:42,040 --> 00:04:45,800 Speaker 1: all the time. Uh, the hammer slips, it hits our fingers, 86 00:04:46,160 --> 00:04:48,240 Speaker 1: and we may, at least in the heat of the moment, 87 00:04:48,600 --> 00:04:51,839 Speaker 1: blame the hammer for the failure. Now, we may get 88 00:04:51,839 --> 00:04:54,440 Speaker 1: over this quickly, but then again, we may decide that 89 00:04:54,800 --> 00:04:57,480 Speaker 1: the hammer truly is at fault and it should be 90 00:04:57,600 --> 00:05:00,840 Speaker 1: used less. We might also take this idea to a 91 00:05:00,920 --> 00:05:03,440 Speaker 1: number of different extremes. We might decide that the hammer 92 00:05:03,760 --> 00:05:06,560 Speaker 1: is not merely at fault but faulty, and then we're 93 00:05:06,680 --> 00:05:09,720 Speaker 1: entitled to at least a refund for its purchase. Or 94 00:05:09,760 --> 00:05:12,680 Speaker 1: we might decide that the hammer needs to actually be punished. 95 00:05:13,040 --> 00:05:15,600 Speaker 1: And and this, of course is ridiculous. And yet the 96 00:05:15,680 --> 00:05:18,320 Speaker 1: idea of punishing the hammer by say, putting it in 97 00:05:18,360 --> 00:05:21,479 Speaker 1: the corner, or perhaps you have an old toolbox of 98 00:05:21,560 --> 00:05:25,479 Speaker 1: shame that's just for the misbehaving tools, or maybe it's 99 00:05:25,600 --> 00:05:27,880 Speaker 1: it's less thought out and you just throw the hammer 100 00:05:28,360 --> 00:05:31,200 Speaker 1: across the yard as punishment for what it has done 101 00:05:31,240 --> 00:05:34,320 Speaker 1: to you. Um. Again, these are ridiculous things to do, 102 00:05:34,920 --> 00:05:36,920 Speaker 1: but the idea of doing them is not that far 103 00:05:36,960 --> 00:05:38,960 Speaker 1: and from us. Um, those of you listening, you may 104 00:05:38,960 --> 00:05:41,240 Speaker 1: have engaged in this sort of thing as well. You 105 00:05:41,320 --> 00:05:44,360 Speaker 1: might also simply throw the tool away and otherwise perfectly 106 00:05:44,400 --> 00:05:47,479 Speaker 1: good tool. Um. I know that I did this once 107 00:05:47,480 --> 00:05:49,880 Speaker 1: with a knife sharpening gadget that caused me to cut 108 00:05:49,880 --> 00:05:53,160 Speaker 1: my finger. My and my like reaction was this thing 109 00:05:53,240 --> 00:05:56,559 Speaker 1: has now injured me, it has drawn my blood. Uh 110 00:05:56,600 --> 00:05:58,080 Speaker 1: I am, I'm getting rid of it. It goes in 111 00:05:58,120 --> 00:06:01,440 Speaker 1: the trash. It bore malice again to me. Yeah, or 112 00:06:01,480 --> 00:06:04,479 Speaker 1: you know, I you know ultimately it's I mean, I 113 00:06:04,520 --> 00:06:06,960 Speaker 1: mean you you get into arguments about different tools, like 114 00:06:07,120 --> 00:06:09,520 Speaker 1: is this a dangerous tool? And in that case that 115 00:06:09,600 --> 00:06:12,880 Speaker 1: was my reasoning. It's like this tool is dangerous. It's 116 00:06:12,920 --> 00:06:15,360 Speaker 1: not enabling me to do what I want to do 117 00:06:15,440 --> 00:06:19,000 Speaker 1: without drawing blood, So it goes in the trash. Um. 118 00:06:19,040 --> 00:06:20,760 Speaker 1: But then there have been other cases where like I 119 00:06:20,800 --> 00:06:24,599 Speaker 1: had a mandolin for slicing up carrots, and um, I 120 00:06:24,680 --> 00:06:27,440 Speaker 1: like nicked my finger on it. But oh I I 121 00:06:27,560 --> 00:06:30,120 Speaker 1: nicked my fingers. Things are brutal, they can be. But 122 00:06:30,640 --> 00:06:33,920 Speaker 1: I nicked my finger, not using it but going into 123 00:06:33,960 --> 00:06:36,279 Speaker 1: the drawer for something else. So I punished it by 124 00:06:36,320 --> 00:06:39,080 Speaker 1: putting at the very bottom of the drawer, but I 125 00:06:39,120 --> 00:06:41,480 Speaker 1: didn't throw it away. Uh huh. So I think if 126 00:06:41,480 --> 00:06:43,640 Speaker 1: we all think, think back, you know, we have examples 127 00:06:43,640 --> 00:06:45,560 Speaker 1: of this sort of thing from our from our life. 128 00:06:45,880 --> 00:06:48,040 Speaker 1: Well sure, I mean, I'm going to talk in this 129 00:06:48,080 --> 00:06:50,960 Speaker 1: episode about some of the ways that we mindlessly apply 130 00:06:51,279 --> 00:06:55,280 Speaker 1: social rules to robots. But yeah, I think what you're 131 00:06:55,279 --> 00:06:58,120 Speaker 1: illustrating here is that you don't even have to get 132 00:06:58,160 --> 00:07:01,280 Speaker 1: to the robot agency stage of four people start doing that. 133 00:07:01,320 --> 00:07:05,559 Speaker 1: I mean people mindlessly, to a lesser extent, mindlessly apply 134 00:07:05,800 --> 00:07:09,800 Speaker 1: social rules and rules derived for managing human relationships to 135 00:07:10,200 --> 00:07:13,920 Speaker 1: inanimate objects with no moving parts. Yeah. Yeah, you don't 136 00:07:13,920 --> 00:07:16,760 Speaker 1: even have to get into a room ba or anything, 137 00:07:16,880 --> 00:07:18,280 Speaker 1: or you know, you can deal with the hammer, the 138 00:07:18,320 --> 00:07:21,760 Speaker 1: can opener. But we also you know, it's it's also 139 00:07:21,880 --> 00:07:25,120 Speaker 1: a sad fact that many pet owners will punish an 140 00:07:25,160 --> 00:07:29,040 Speaker 1: animal for a transgression transgression, but scientific evidence shows that 141 00:07:29,120 --> 00:07:32,840 Speaker 1: this tends to not actually work, at least in most 142 00:07:32,880 --> 00:07:37,000 Speaker 1: of the circumstances that it's used. Um So, you know, 143 00:07:37,080 --> 00:07:41,760 Speaker 1: even it's not merely with with tools and inanimate objects, 144 00:07:41,760 --> 00:07:44,760 Speaker 1: but even non human entities were liable to engage in 145 00:07:44,800 --> 00:07:49,320 Speaker 1: this kind of discipline based thinking. Now, most of the 146 00:07:49,320 --> 00:07:52,600 Speaker 1: studies I think that necessarily relate to dogs, if I 147 00:07:52,640 --> 00:07:54,480 Speaker 1: remember correctly, And there's a lot going on here that 148 00:07:54,520 --> 00:07:57,520 Speaker 1: doesn't relate directly to inanimate objects and robots, but it 149 00:07:57,560 --> 00:07:59,800 Speaker 1: illustrates how we tend to approach the punishment of other 150 00:07:59,840 --> 00:08:03,120 Speaker 1: agents and perceived agents. Well, yeah, there's a disconnect, and 151 00:08:03,120 --> 00:08:04,920 Speaker 1: this will be highlighted in one of the papers we're 152 00:08:04,920 --> 00:08:07,000 Speaker 1: going to talk about in in this pair of episodes, 153 00:08:07,040 --> 00:08:11,480 Speaker 1: But there's a disconnect in that punishment is often logically 154 00:08:12,160 --> 00:08:15,880 Speaker 1: characterized as serving one type of purpose, but then is 155 00:08:15,960 --> 00:08:19,040 Speaker 1: applied more like it serves another type of purpose. So 156 00:08:19,080 --> 00:08:22,760 Speaker 1: like it is logically explained as say a deterrent, right, 157 00:08:22,800 --> 00:08:25,560 Speaker 1: I mean, if you if you're talking about uh, legal 158 00:08:25,600 --> 00:08:28,000 Speaker 1: theories of punishment, one of the main things that people 159 00:08:28,000 --> 00:08:30,760 Speaker 1: come up with is say, well, the remedy provided by 160 00:08:30,800 --> 00:08:33,640 Speaker 1: the law is in order to punish the person who 161 00:08:33,640 --> 00:08:36,240 Speaker 1: did the bad thing in order to send a message 162 00:08:36,280 --> 00:08:39,280 Speaker 1: that people should not do this bad thing and thus 163 00:08:39,320 --> 00:08:42,600 Speaker 1: maybe discourage other people from doing something similar in the future, 164 00:08:42,679 --> 00:08:45,520 Speaker 1: or discourage the same person from doing it again. And 165 00:08:45,600 --> 00:08:48,120 Speaker 1: if it were to actually serve that purpose, it's debatable 166 00:08:48,120 --> 00:08:50,520 Speaker 1: in what cases it does actually serve that purpose. Maybe 167 00:08:50,600 --> 00:08:53,120 Speaker 1: sometimes it does, but that is a you know, you 168 00:08:53,120 --> 00:08:56,920 Speaker 1: could argue that's a rational, logical thing that prevents harm. 169 00:08:57,240 --> 00:09:00,200 Speaker 1: But the way punishment is actually often inflict it in 170 00:09:00,240 --> 00:09:03,880 Speaker 1: the real world seems to be more consistent with judgments 171 00:09:03,920 --> 00:09:08,199 Speaker 1: based on like emotional satisfaction of the idea of having 172 00:09:08,240 --> 00:09:11,720 Speaker 1: been wronged. Yeah. Yeah. And then also we get into 173 00:09:11,720 --> 00:09:15,520 Speaker 1: this area where, uh, we have a couple of different 174 00:09:15,559 --> 00:09:20,199 Speaker 1: factors encouraging traditions of discipline. Um, particularly if we look 175 00:09:20,200 --> 00:09:25,480 Speaker 1: at a parenthood, which there's some crossover between discipline and 176 00:09:25,480 --> 00:09:28,600 Speaker 1: parenthood and discipline and the criminal justice system. But uh, 177 00:09:28,640 --> 00:09:31,240 Speaker 1: you know, uh, not everything is going to line up 178 00:09:31,280 --> 00:09:34,680 Speaker 1: one to one here. But um, on the childhood example, 179 00:09:34,679 --> 00:09:37,120 Speaker 1: it's been argued that parents use punishment first of all, 180 00:09:37,160 --> 00:09:40,680 Speaker 1: because it's an emotional response out of anger and anger 181 00:09:40,840 --> 00:09:43,920 Speaker 1: that may be mismanaged. But then on top of this, 182 00:09:44,720 --> 00:09:48,760 Speaker 1: it's you know, something that's culturally passed down and punishment 183 00:09:48,840 --> 00:09:52,000 Speaker 1: may seem to work. I was reading about this in 184 00:09:52,040 --> 00:09:56,079 Speaker 1: a Psychology Today article by Michael Carson, pH d j D. 185 00:09:56,760 --> 00:09:59,240 Speaker 1: And uh, this is what they said, quote, because the 186 00:09:59,320 --> 00:10:01,520 Speaker 1: child is an bited in your presence it's easy to 187 00:10:01,559 --> 00:10:05,920 Speaker 1: think they would be inhibited in your absence. Punishment produces politeness, 188 00:10:05,960 --> 00:10:10,880 Speaker 1: not morality. Thus the inhibited, obedient child inadvertently reinforces the 189 00:10:10,920 --> 00:10:14,960 Speaker 1: parents punitive behavior by acting obedient. For the sorts of 190 00:10:15,000 --> 00:10:18,880 Speaker 1: parents who find obedient children reinforcing, Yeah, that raises an 191 00:10:18,880 --> 00:10:21,360 Speaker 1: interesting question. I mean, I've been mainly thinking for this 192 00:10:21,400 --> 00:10:25,720 Speaker 1: episode about about legal punishments, but like when it comes 193 00:10:25,760 --> 00:10:28,320 Speaker 1: down to parenting, that's a very different kind of thing 194 00:10:28,400 --> 00:10:32,520 Speaker 1: because both parenting in the legal system involved punishment, but 195 00:10:32,800 --> 00:10:36,440 Speaker 1: parenting is not subject to a legal system, right, So 196 00:10:36,520 --> 00:10:40,040 Speaker 1: there's no there is no systematized way by which justice 197 00:10:40,120 --> 00:10:42,760 Speaker 1: is administered from a parent. It's just just I mean, 198 00:10:42,800 --> 00:10:44,120 Speaker 1: I think a lot of times it's just sort of 199 00:10:44,160 --> 00:10:46,640 Speaker 1: like whatever the parent can manage to do in the moment, 200 00:10:46,720 --> 00:10:49,880 Speaker 1: because like the kids driving them crazy or something. Yeah, 201 00:10:49,960 --> 00:10:52,559 Speaker 1: usually the child can't take it to a higher court. 202 00:10:53,320 --> 00:10:55,800 Speaker 1: But I mean, I think you're absolutely right that whether 203 00:10:55,880 --> 00:10:58,840 Speaker 1: you're talking about discipline administered by a parent or the 204 00:10:59,000 --> 00:11:02,480 Speaker 1: justice system is whole, I'd say that both are probably 205 00:11:02,520 --> 00:11:07,520 Speaker 1: based more on tradition and philosophy and less on a 206 00:11:07,600 --> 00:11:11,920 Speaker 1: scientifically rigorous study of the most efficient ways to reduce harm. 207 00:11:11,960 --> 00:11:14,360 Speaker 1: And one of the interesting things about thinking about how 208 00:11:14,520 --> 00:11:18,960 Speaker 1: law could potentially be applied to harm caused by autonomous 209 00:11:19,000 --> 00:11:22,599 Speaker 1: machines is that it may help give us some insights 210 00:11:22,640 --> 00:11:26,000 Speaker 1: on ways that the justice system as it exists and 211 00:11:26,120 --> 00:11:30,439 Speaker 1: is applied to humans today tends to behave irrationally already, 212 00:11:30,559 --> 00:11:34,080 Speaker 1: like with respect to humans. Yeah, and again, this is 213 00:11:34,120 --> 00:11:37,079 Speaker 1: what's so interesting about this this paper, I mean, well, 214 00:11:37,120 --> 00:11:39,720 Speaker 1: the papers that we're going to discuss this topic in general, though, 215 00:11:40,160 --> 00:11:43,840 Speaker 1: is if you start, you start comparing machine possibilities to 216 00:11:43,920 --> 00:11:47,880 Speaker 1: human possibilities, and it's on one level of thought experiment 217 00:11:47,960 --> 00:11:50,760 Speaker 1: in how you would hold machines responsible, but then it 218 00:11:50,800 --> 00:11:53,680 Speaker 1: makes you rethink the way humans are held responsible. You know. 219 00:11:54,080 --> 00:11:56,240 Speaker 1: It's like, um, you know, I think you have a 220 00:11:56,240 --> 00:12:00,880 Speaker 1: pretty square away. Like if if an adult sells a 221 00:12:00,880 --> 00:12:03,800 Speaker 1: pack of cigarettes to someone who's underage, right, but then 222 00:12:03,840 --> 00:12:06,680 Speaker 1: one of a machine does the same thing, how do 223 00:12:06,720 --> 00:12:08,319 Speaker 1: you treat the machine? Do you treat a machine like 224 00:12:08,360 --> 00:12:10,480 Speaker 1: an adult? And then in trying to figure out how 225 00:12:10,520 --> 00:12:13,079 Speaker 1: to treat this machine, does it make you rethink how 226 00:12:13,120 --> 00:12:15,720 Speaker 1: you should be treating the adult who engaged in this behavior? 227 00:12:15,800 --> 00:12:17,720 Speaker 1: I don't know. Yeah, And I think a lot of 228 00:12:17,720 --> 00:12:20,400 Speaker 1: that will come down to our understanding of what the 229 00:12:20,480 --> 00:12:24,040 Speaker 1: machine is capable of, like what kind of constraints it has, 230 00:12:24,440 --> 00:12:27,160 Speaker 1: what type of what level of autonomy it seems to 231 00:12:27,200 --> 00:12:30,160 Speaker 1: be operating at. I mean, again, Weirdly, even when people 232 00:12:30,200 --> 00:12:32,800 Speaker 1: set out to define clear rules for what makes a 233 00:12:32,840 --> 00:12:35,679 Speaker 1: machine culpable, there there's still going to be a lot 234 00:12:35,720 --> 00:12:39,880 Speaker 1: of subjectivity in it. I'm looking at like legal definitions 235 00:12:39,920 --> 00:12:43,760 Speaker 1: of what constitutes a robot versus just a machine, and 236 00:12:43,920 --> 00:12:47,120 Speaker 1: some of these definitions involved things like, well, a robot 237 00:12:47,360 --> 00:12:51,080 Speaker 1: feels like a social agent. So there's still, like, you know, 238 00:12:51,120 --> 00:12:53,960 Speaker 1: an element of subjectivity. But I think that's correct in 239 00:12:54,040 --> 00:12:56,480 Speaker 1: how we actually apply the term most of the time, right, 240 00:12:56,520 --> 00:13:00,240 Speaker 1: Like something is like a gut feeling about how this 241 00:13:00,280 --> 00:13:03,720 Speaker 1: machine is behaving in your world. Is it acting more 242 00:13:03,840 --> 00:13:06,800 Speaker 1: like a fixed, you know, brainless machine or is it 243 00:13:06,840 --> 00:13:11,280 Speaker 1: acting a little bit more like a person. So while 244 00:13:12,000 --> 00:13:14,240 Speaker 1: it would be one thing if if it were basically 245 00:13:14,240 --> 00:13:17,280 Speaker 1: a cigarette vending machine that was selling to children, but 246 00:13:17,320 --> 00:13:19,959 Speaker 1: if it were a machine that went door to door 247 00:13:20,440 --> 00:13:23,480 Speaker 1: and and rang the doorbell and then asked for the 248 00:13:23,559 --> 00:13:25,679 Speaker 1: children so they could sell them cigarettes, that would be 249 00:13:25,679 --> 00:13:27,559 Speaker 1: a different matter. I mean, yeah, I mean, I think 250 00:13:27,600 --> 00:13:31,080 Speaker 1: that would require different types of remedies probably, Yeah, I 251 00:13:31,120 --> 00:13:32,760 Speaker 1: mean I think a lot of people would probably look 252 00:13:32,760 --> 00:13:35,480 Speaker 1: at the cigarette vending machine and say, where was the 253 00:13:35,559 --> 00:13:38,200 Speaker 1: vending machine placed? Why was it in a place that 254 00:13:38,320 --> 00:13:41,600 Speaker 1: children could have access to it? Rather than attacking the 255 00:13:41,600 --> 00:13:44,240 Speaker 1: fundamentals of the machine itself. If it's going door to 256 00:13:44,280 --> 00:13:46,600 Speaker 1: door and giving cigarettes to kids, yeah, then people are 257 00:13:46,600 --> 00:13:50,160 Speaker 1: probably going to attack the fundamentals and the moral character 258 00:13:50,559 --> 00:13:53,760 Speaker 1: of the robot. Right, you have not attacked the robot itself. 259 00:13:53,800 --> 00:14:00,319 Speaker 1: It would just rob justice in somebody's front yard. Yeah, 260 00:14:01,200 --> 00:14:05,080 Speaker 1: thank Alright, So I guess I want to introduce one 261 00:14:05,120 --> 00:14:06,720 Speaker 1: of the papers we're gonna be looking at in this 262 00:14:06,760 --> 00:14:10,200 Speaker 1: pair of episodes, and it is by Mark A. Limley 263 00:14:10,360 --> 00:14:14,520 Speaker 1: and Brian Casey called Remedies for Robots, published in the 264 00:14:14,600 --> 00:14:18,920 Speaker 1: University of Chicago Law Review in twenty nineteen. And this 265 00:14:19,000 --> 00:14:22,600 Speaker 1: is a big paper. It's like eighty something pages long 266 00:14:22,640 --> 00:14:26,000 Speaker 1: with a with a lot of different interesting, uh thoughts 267 00:14:26,000 --> 00:14:27,760 Speaker 1: in it. We're not going to be able to cover 268 00:14:27,880 --> 00:14:31,200 Speaker 1: the entire thing in depth, but it's worth looking up. 269 00:14:31,240 --> 00:14:33,240 Speaker 1: You can easily find a full PDF of it if 270 00:14:33,240 --> 00:14:35,920 Speaker 1: you want to read it in depth. And we're gonna 271 00:14:35,920 --> 00:14:38,440 Speaker 1: look at some of the larger framework it lays out, 272 00:14:38,440 --> 00:14:41,320 Speaker 1: and then some interesting thoughts raised to buy it, But 273 00:14:41,400 --> 00:14:43,960 Speaker 1: to kick it off here the the author's right quote, 274 00:14:44,000 --> 00:14:48,960 Speaker 1: what happens when artificially intelligent robots misbehave? The question is 275 00:14:49,000 --> 00:14:54,239 Speaker 1: not just hypothetical. As robotics and artificial intelligence systems increasingly 276 00:14:54,360 --> 00:14:57,920 Speaker 1: integrate into our society, they will do bad things. We 277 00:14:58,000 --> 00:15:01,000 Speaker 1: seek to explore what remedies the all can and should 278 00:15:01,080 --> 00:15:05,480 Speaker 1: provide once a robot has caused harm. Now, obviously we're 279 00:15:05,480 --> 00:15:08,880 Speaker 1: going to be focused less on the like minute particulars 280 00:15:08,920 --> 00:15:11,760 Speaker 1: of US legal precedent here and more on the broader 281 00:15:11,800 --> 00:15:16,520 Speaker 1: issues they raise about robot agency, robot moral decision making 282 00:15:16,640 --> 00:15:19,960 Speaker 1: and how that interacts with harm and morality and justice. 283 00:15:20,600 --> 00:15:23,440 Speaker 1: And the authors start out in their introduction by giving 284 00:15:23,440 --> 00:15:26,520 Speaker 1: what I think is a really fantastic example of how 285 00:15:26,560 --> 00:15:31,360 Speaker 1: an autonomous robot with behaviors guided by machine learning, which 286 00:15:31,400 --> 00:15:34,720 Speaker 1: is how you know increasingly most robots are going to 287 00:15:34,760 --> 00:15:37,840 Speaker 1: be controlled, can end up doing things that are the 288 00:15:37,880 --> 00:15:41,480 Speaker 1: exact opposite of what was intended. So this case that 289 00:15:41,520 --> 00:15:44,000 Speaker 1: they cite is based on a true story from a 290 00:15:44,040 --> 00:15:48,680 Speaker 1: presentation at the eleventh Annual Stanford e Commerce Best Practices 291 00:15:48,800 --> 00:15:54,440 Speaker 1: Conference in June, and it goes like this quote. Engineers 292 00:15:54,520 --> 00:15:59,480 Speaker 1: training and artificially intelligent self flying drone were perplexed. They 293 00:15:59,520 --> 00:16:01,840 Speaker 1: were trying to get the drone to stay within a 294 00:16:01,920 --> 00:16:05,920 Speaker 1: predefined circle and head toward its center. Things were going 295 00:16:05,960 --> 00:16:09,560 Speaker 1: well for a while. The drone received positive reinforcement for 296 00:16:09,600 --> 00:16:12,600 Speaker 1: its successful flights, and it was improving its ability to 297 00:16:12,720 --> 00:16:17,800 Speaker 1: navigate toward the middle quickly and accurately. Then suddenly things changed. 298 00:16:18,160 --> 00:16:20,480 Speaker 1: When the drone near the edge of the circle, it 299 00:16:20,520 --> 00:16:24,360 Speaker 1: would inexplicably turn away from the center, leaving the circle. 300 00:16:25,040 --> 00:16:28,640 Speaker 1: What went wrong. After a long time spent puzzling over 301 00:16:28,680 --> 00:16:32,160 Speaker 1: the problem, the designers realized that whenever the drone left 302 00:16:32,200 --> 00:16:36,200 Speaker 1: the circle during tests they had turned it off. Someone 303 00:16:36,240 --> 00:16:38,920 Speaker 1: would then pick it up and carry it back into 304 00:16:38,920 --> 00:16:42,680 Speaker 1: the circle to start again. From this pattern, the drones 305 00:16:42,760 --> 00:16:46,760 Speaker 1: algorithm had learned correctly that when it was sufficiently far 306 00:16:46,880 --> 00:16:49,600 Speaker 1: from the center, the optimal way to get back to 307 00:16:49,680 --> 00:16:53,080 Speaker 1: the middle was to simply leave it all together. As 308 00:16:53,120 --> 00:16:55,840 Speaker 1: far as the drone was concerned, it had discovered a 309 00:16:55,920 --> 00:16:59,800 Speaker 1: worm hole. Somehow, flying outside of the circle could be 310 00:17:00,080 --> 00:17:03,240 Speaker 1: led upon to magically teleport it closer to the center, 311 00:17:03,680 --> 00:17:06,520 Speaker 1: and far from violating the rules instilled in it by 312 00:17:06,600 --> 00:17:10,080 Speaker 1: its engineers, the drone had actually followed them to a t. 313 00:17:10,840 --> 00:17:14,399 Speaker 1: In doing so, however, it had discovered an unforeseen shortcut, 314 00:17:14,680 --> 00:17:18,840 Speaker 1: one that subverted its designers true intent. That's really good. 315 00:17:18,880 --> 00:17:22,760 Speaker 1: That's that's it's as Yes, I love it. This is 316 00:17:22,800 --> 00:17:26,239 Speaker 1: such a great example of how robots can fail in 317 00:17:26,320 --> 00:17:30,280 Speaker 1: ways that are perfectly logical for the machines themselves, but 318 00:17:30,440 --> 00:17:33,240 Speaker 1: hard for humans to predict in advance, because we're not 319 00:17:33,400 --> 00:17:37,119 Speaker 1: understanding how our you know, our programming or the data 320 00:17:37,160 --> 00:17:40,480 Speaker 1: sets were training it on is biasing its behavior in 321 00:17:40,840 --> 00:17:43,760 Speaker 1: ways that that are strange to us. And in this case, 322 00:17:43,800 --> 00:17:48,080 Speaker 1: of course, such a malfunction is harmless. But as autonomous 323 00:17:48,119 --> 00:17:51,600 Speaker 1: machines become more and more integrated into the broader culture, 324 00:17:52,040 --> 00:17:56,960 Speaker 1: not just in controlled contained locations like factory floors and laboratories, 325 00:17:56,960 --> 00:17:59,320 Speaker 1: but in the wild, so on the streets and in 326 00:17:59,359 --> 00:18:03,720 Speaker 1: our homes and stuff, there will inevitably be cases where 327 00:18:03,800 --> 00:18:07,800 Speaker 1: robots fail like this and fail in ways that cause 328 00:18:08,240 --> 00:18:12,360 Speaker 1: catastrophic harm to people. Yeah, and plus, as an aside, 329 00:18:12,359 --> 00:18:14,440 Speaker 1: we we have to realize that even in cases where 330 00:18:14,440 --> 00:18:17,040 Speaker 1: the machines have not failed, there will be gray areas 331 00:18:17,359 --> 00:18:20,280 Speaker 1: in which it's not completely clear, and an argument could 332 00:18:20,320 --> 00:18:23,040 Speaker 1: be made in these cases for machine culpability with a 333 00:18:23,119 --> 00:18:26,560 Speaker 1: variety of intense and possible biases in place. Oh yeah, 334 00:18:26,560 --> 00:18:28,719 Speaker 1: that's another thing these authors talk about that there can 335 00:18:28,760 --> 00:18:32,880 Speaker 1: be all kinds of ways that that robotics and AI 336 00:18:32,920 --> 00:18:36,440 Speaker 1: could end up causing extreme harm to people without ever 337 00:18:36,520 --> 00:18:39,720 Speaker 1: doing anything that if a human did it would be illegal. 338 00:18:40,320 --> 00:18:43,760 Speaker 1: What one example they give is like if, um, if 339 00:18:43,920 --> 00:18:48,520 Speaker 1: Google were to suddenly change its Google Maps algorithm so 340 00:18:48,560 --> 00:18:52,960 Speaker 1: that it routed all of the city's traffic through your neighborhood. Like, 341 00:18:53,440 --> 00:18:56,520 Speaker 1: nothing illegal about that. It doesn't like commit a crime 342 00:18:56,560 --> 00:18:59,840 Speaker 1: against you, but this is going to drastically negatively in 343 00:19:00,040 --> 00:19:02,480 Speaker 1: hacked your quality of life. And it's a decision that's 344 00:19:02,560 --> 00:19:05,400 Speaker 1: just like a could be a quirk of an algorithm 345 00:19:05,440 --> 00:19:08,960 Speaker 1: in a machine. Now, this paper in particular concerns the 346 00:19:09,080 --> 00:19:13,159 Speaker 1: legal concept of remedies. So I was reading about remedies. 347 00:19:13,320 --> 00:19:16,600 Speaker 1: A common legal definition that I found is quote the 348 00:19:16,680 --> 00:19:20,480 Speaker 1: means to achieve justice in any matter in which legal 349 00:19:20,600 --> 00:19:24,640 Speaker 1: rights are involved. Or in the words of Limely and Casey, 350 00:19:24,680 --> 00:19:27,040 Speaker 1: what do I get when I win? Right? So, if 351 00:19:27,040 --> 00:19:29,360 Speaker 1: you if you take somebody to court because you say 352 00:19:29,359 --> 00:19:33,119 Speaker 1: they have harmed you. Whatever outcome you're seeking from that 353 00:19:33,119 --> 00:19:36,800 Speaker 1: that court case is the remedy. So usually when a 354 00:19:36,800 --> 00:19:39,480 Speaker 1: court case finds that somebody has done something wrong to 355 00:19:39,560 --> 00:19:42,760 Speaker 1: harm somebody else, the court responds to the finding of 356 00:19:42,800 --> 00:19:46,360 Speaker 1: guilt or blame by enforcing this remedy. And common remedies 357 00:19:46,359 --> 00:19:49,560 Speaker 1: would include a payment of money, right a guilty defendant 358 00:19:49,560 --> 00:19:52,600 Speaker 1: has to pay money to the plaintiff, a punishment of 359 00:19:52,640 --> 00:19:55,520 Speaker 1: the offender like maybe they go to jail, or a 360 00:19:55,520 --> 00:19:58,399 Speaker 1: court order to do something or not to do something. 361 00:19:58,520 --> 00:20:01,719 Speaker 1: For example, somebody is ordered not to drive a vehicle, 362 00:20:01,960 --> 00:20:04,119 Speaker 1: or they are ordered not to go within a hundred 363 00:20:04,160 --> 00:20:07,040 Speaker 1: feet of somebody else or something like that. Yeah, or 364 00:20:07,160 --> 00:20:09,720 Speaker 1: their their eyeball is removed, or they have to spend 365 00:20:09,880 --> 00:20:12,440 Speaker 1: a night in a haunted house something like that. Hopefully 366 00:20:12,480 --> 00:20:15,120 Speaker 1: not in modern law. But wait a minute, there are 367 00:20:15,160 --> 00:20:18,280 Speaker 1: some sometimes you do read about some really strange like 368 00:20:18,600 --> 00:20:21,040 Speaker 1: remedies that are ordered by judges like I order you 369 00:20:21,080 --> 00:20:22,880 Speaker 1: to I don't I don't know, to wear chicken suit 370 00:20:22,960 --> 00:20:26,280 Speaker 1: or something right like, Yeah, there's some judges who like 371 00:20:26,359 --> 00:20:30,159 Speaker 1: to get creative. It seems weird. Yeah, I wonder I 372 00:20:30,200 --> 00:20:31,920 Speaker 1: have there have been in cases where someone has to 373 00:20:31,920 --> 00:20:33,639 Speaker 1: spend a night in a haunted house due to a 374 00:20:33,640 --> 00:20:35,479 Speaker 1: court order. I think that would be a good setup 375 00:20:35,520 --> 00:20:37,960 Speaker 1: for a film. But anyway, so when you start looking 376 00:20:38,000 --> 00:20:41,159 Speaker 1: at the idea of remedies, remedies are complicated because they 377 00:20:41,200 --> 00:20:45,480 Speaker 1: involve different types of implied satisfaction on behalf of the 378 00:20:45,560 --> 00:20:49,639 Speaker 1: victim or plaintiff. And some are very clear and material, 379 00:20:49,840 --> 00:20:52,520 Speaker 1: and others are much more abstract. So the very the 380 00:20:52,520 --> 00:20:55,120 Speaker 1: ones that are very clear and material are like, if 381 00:20:55,160 --> 00:20:58,240 Speaker 1: I hit your car with my car and I'm clearly 382 00:20:58,280 --> 00:21:00,800 Speaker 1: at fault, I need to give you a payment of 383 00:21:00,840 --> 00:21:04,000 Speaker 1: cash to offset the material losses to the value of 384 00:21:04,000 --> 00:21:07,440 Speaker 1: your car, right. But then other times it's it's more abstract. 385 00:21:07,520 --> 00:21:10,720 Speaker 1: It's you know, punishment of an offender to give the 386 00:21:10,840 --> 00:21:15,440 Speaker 1: victim a sense of justice or to allegedly discourage someone 387 00:21:15,560 --> 00:21:18,320 Speaker 1: from committing this type of harm or offense in the future. 388 00:21:18,840 --> 00:21:21,040 Speaker 1: And then the authors right that things get way more 389 00:21:21,119 --> 00:21:24,479 Speaker 1: complicated when you bring robots and AI into the picture. 390 00:21:24,880 --> 00:21:27,760 Speaker 1: For example, if you're trying to give a court order 391 00:21:27,760 --> 00:21:30,159 Speaker 1: to a person you know saying like you shall not 392 00:21:30,280 --> 00:21:32,720 Speaker 1: drive a car, you shall not you know, come within 393 00:21:32,760 --> 00:21:34,840 Speaker 1: a hundred feet of this person. You can do so 394 00:21:34,920 --> 00:21:37,600 Speaker 1: in natural language, you can like speak a sentence to 395 00:21:37,680 --> 00:21:41,000 Speaker 1: them and you can expect them to understand. But how 396 00:21:41,040 --> 00:21:44,320 Speaker 1: do you get a court to give an order to 397 00:21:44,440 --> 00:21:49,080 Speaker 1: a robot not to do something? Most robots don't have 398 00:21:49,240 --> 00:21:52,000 Speaker 1: natural language processing, and even if they do, a lot 399 00:21:52,040 --> 00:21:54,880 Speaker 1: of times it's not that good. So you might think, okay, 400 00:21:54,920 --> 00:21:57,280 Speaker 1: well you just you know, you give the court order 401 00:21:57,359 --> 00:22:00,800 Speaker 1: to the robots programmer and then it and then they'll 402 00:22:00,800 --> 00:22:03,240 Speaker 1: have to program the robot to obey. But this is 403 00:22:03,280 --> 00:22:07,560 Speaker 1: also really complicated, like whose responsibility is it the robots 404 00:22:07,600 --> 00:22:12,120 Speaker 1: current owner or the original contractor or creator who made 405 00:22:12,160 --> 00:22:14,720 Speaker 1: the robot? Uh? And what if this is like an 406 00:22:14,840 --> 00:22:18,359 Speaker 1: end user consumer device that the owner doesn't have any 407 00:22:18,359 --> 00:22:21,600 Speaker 1: ability to reprogram, or what if, in the case of 408 00:22:21,680 --> 00:22:24,800 Speaker 1: robots whose behavior is driven by machine learning or some 409 00:22:24,880 --> 00:22:27,480 Speaker 1: other kind of system that is for practical purposes, a 410 00:22:27,560 --> 00:22:30,239 Speaker 1: black box, what if it's not even clear how you 411 00:22:30,280 --> 00:22:35,919 Speaker 1: could reprogram it to reliably obey the rule. Yeah, because 412 00:22:36,640 --> 00:22:40,080 Speaker 1: there's a chance you got to this position because the 413 00:22:40,200 --> 00:22:45,320 Speaker 1: robot misinterpreted what was asked of it. So if you 414 00:22:45,400 --> 00:22:49,200 Speaker 1: then make additional requirements ones that maybe you know, haven't 415 00:22:49,200 --> 00:22:52,159 Speaker 1: actually been tested before, but are just you know that 416 00:22:52,280 --> 00:22:54,680 Speaker 1: that are then brought on by the court that could 417 00:22:54,720 --> 00:22:58,520 Speaker 1: conceivably create new problems, right yeah, yeah, totally, and and 418 00:22:58,600 --> 00:23:00,919 Speaker 1: it keeps getting even more comp located from there like 419 00:23:01,160 --> 00:23:04,920 Speaker 1: Limle and Casey right quote. To complicate matters further, some systems, 420 00:23:04,960 --> 00:23:10,560 Speaker 1: including many self driving cars, distribute responsibility for their robots 421 00:23:10,600 --> 00:23:15,520 Speaker 1: between both designers and downstream operators. For systems of this kind, 422 00:23:15,560 --> 00:23:20,080 Speaker 1: it has already proven extremely difficult to allocate responsibility when 423 00:23:20,160 --> 00:23:23,600 Speaker 1: accidents inevitably occur. It just seems like a real, real 424 00:23:23,920 --> 00:23:26,679 Speaker 1: fast way to get into skynet territory, where it's like 425 00:23:27,040 --> 00:23:29,920 Speaker 1: the robot then decides that the only way to assure 426 00:23:30,000 --> 00:23:33,040 Speaker 1: that it never sells cigarettes to children again is to 427 00:23:33,160 --> 00:23:36,119 Speaker 1: destroy all humans. That sounds like finding a wormhole to me. 428 00:23:36,960 --> 00:23:39,840 Speaker 1: We will be getting into some more wormhole territory as 429 00:23:39,840 --> 00:23:44,200 Speaker 1: we go on, so more complications. Uh. The authors bring 430 00:23:44,280 --> 00:23:46,479 Speaker 1: up the idea of how to courts compel a person 431 00:23:46,680 --> 00:23:49,399 Speaker 1: or a company to obey a court order. Right Like, 432 00:23:49,440 --> 00:23:52,560 Speaker 1: if you know a company is like dumping poison that's 433 00:23:52,600 --> 00:23:55,840 Speaker 1: harming somebody, and the person sues that company, what does 434 00:23:55,840 --> 00:23:58,080 Speaker 1: the court do to get them to stop, Well, the 435 00:23:58,080 --> 00:24:00,800 Speaker 1: there is a threat of contempt of court if they 436 00:24:00,800 --> 00:24:04,440 Speaker 1: don't stop doing it. Right, Courts usually just assume that 437 00:24:04,480 --> 00:24:07,560 Speaker 1: people are motivated by a desire not to pay huge 438 00:24:07,600 --> 00:24:10,399 Speaker 1: monetary damages or a desire not to go to jail. 439 00:24:10,920 --> 00:24:13,840 Speaker 1: Would that have any motivating power on a robot. It 440 00:24:14,200 --> 00:24:16,800 Speaker 1: would only have that power to the extent that the 441 00:24:16,880 --> 00:24:20,280 Speaker 1: robot had been programmed to take that into account. If 442 00:24:20,320 --> 00:24:22,359 Speaker 1: it hadn't, it wouldn't matter at all. Like, you know, 443 00:24:22,520 --> 00:24:25,320 Speaker 1: most robots probably do not have any opinion one way 444 00:24:25,400 --> 00:24:27,920 Speaker 1: or another about whether about going to jail or having 445 00:24:27,960 --> 00:24:31,159 Speaker 1: to pay damages, So you'd have to explicitly program it 446 00:24:31,240 --> 00:24:36,359 Speaker 1: to be disincentivized by potential punishments. Yeah, because take the 447 00:24:36,400 --> 00:24:39,680 Speaker 1: cigarette robot for example, Like it's prime it's prime directive 448 00:24:39,760 --> 00:24:43,520 Speaker 1: is just to sell delicious cigarettes to human beings like 449 00:24:43,560 --> 00:24:45,919 Speaker 1: the the what what else? What kind of leverage do 450 00:24:45,920 --> 00:24:48,439 Speaker 1: you have? Right exactly, So in that case, you'd be 451 00:24:48,480 --> 00:24:51,320 Speaker 1: faced with either you'd be trying to find some kind 452 00:24:51,359 --> 00:24:54,400 Speaker 1: of human who's responsible for its behavior, but you could 453 00:24:54,640 --> 00:24:57,520 Speaker 1: very well run into the problem that like you can't 454 00:24:57,520 --> 00:25:00,680 Speaker 1: really identify any one person who's seems to be at 455 00:25:00,680 --> 00:25:03,640 Speaker 1: fault for what it did, and it's doing this bad thing, 456 00:25:03,760 --> 00:25:05,400 Speaker 1: so so what are you going to do about it? 457 00:25:06,359 --> 00:25:08,560 Speaker 1: And then of course things get even weirder when you 458 00:25:08,560 --> 00:25:11,000 Speaker 1: start getting into that that other side. You know, that's 459 00:25:11,040 --> 00:25:15,199 Speaker 1: like the more like direct and material remedies that can 460 00:25:15,240 --> 00:25:18,879 Speaker 1: be provided by courts, either like a monetary award to 461 00:25:19,000 --> 00:25:22,000 Speaker 1: the victim or in order to stop doing something that 462 00:25:22,040 --> 00:25:24,320 Speaker 1: causes harm. On the other hand, you've got this thing 463 00:25:24,400 --> 00:25:27,400 Speaker 1: that courts often end up engaging in, and people are 464 00:25:27,440 --> 00:25:31,639 Speaker 1: are largely driven and motivated by however, however irrational it 465 00:25:31,720 --> 00:25:35,240 Speaker 1: might be in some cases, and that's the perceived abstract 466 00:25:35,359 --> 00:25:38,679 Speaker 1: value of punishment, you know, not just material damages to 467 00:25:38,720 --> 00:25:40,760 Speaker 1: a victim or in order not to do something, but 468 00:25:40,840 --> 00:25:45,760 Speaker 1: the inflicting of punishments specifically to demonstrate the court's displeasure 469 00:25:45,800 --> 00:25:49,159 Speaker 1: with the original behavior of the defendant. Uh so uh 470 00:25:49,480 --> 00:25:51,920 Speaker 1: They raise a question that's brought up in a paper 471 00:25:51,960 --> 00:25:56,120 Speaker 1: by a professor named Christina Mulligan who explores the subject 472 00:25:56,160 --> 00:25:58,400 Speaker 1: of should you have the right to punch a robot 473 00:25:58,480 --> 00:26:01,040 Speaker 1: that hurts you? Uh lee? In case? He called the 474 00:26:01,240 --> 00:26:05,560 Speaker 1: called this the expressive component of remedies, and though a 475 00:26:05,640 --> 00:26:09,480 Speaker 1: desire to see offenders punished. Maybe an extremely natural and 476 00:26:09,520 --> 00:26:14,160 Speaker 1: nearly universal human drive. It's debatable whether it actually serves 477 00:26:14,160 --> 00:26:17,439 Speaker 1: a purpose in reducing harm, and if it does, in 478 00:26:17,520 --> 00:26:21,000 Speaker 1: what cases it does. I I love this idea because, 479 00:26:21,640 --> 00:26:24,600 Speaker 1: on a very literal level, it makes me think, well, 480 00:26:24,600 --> 00:26:26,439 Speaker 1: why would you punch a robot. They're made out of 481 00:26:26,440 --> 00:26:28,280 Speaker 1: out of metal. You're gonna hurt your hand. All you're 482 00:26:28,280 --> 00:26:29,760 Speaker 1: gonna do is hurt your hand, and you're not going 483 00:26:29,800 --> 00:26:34,160 Speaker 1: to hurt the robot unless first of all, you design 484 00:26:34,240 --> 00:26:36,800 Speaker 1: the robot so that it has at least one punchable 485 00:26:36,840 --> 00:26:40,239 Speaker 1: portion of its anatomy, and then for it to be 486 00:26:40,320 --> 00:26:43,600 Speaker 1: more than just a you know, a cathartic uh uh 487 00:26:44,880 --> 00:26:47,159 Speaker 1: thing for you, then you have to also make sure 488 00:26:47,200 --> 00:26:51,199 Speaker 1: there's some sort of feedback right where punch. Yeah, like 489 00:26:51,240 --> 00:26:55,000 Speaker 1: you punch cigarette bought in it's punchable area, then it 490 00:26:55,040 --> 00:26:58,040 Speaker 1: will say owl, and maybe it will I don't know, 491 00:26:58,040 --> 00:27:00,800 Speaker 1: ottaw inciner rate one packet of reigrettes so that it 492 00:27:00,880 --> 00:27:03,360 Speaker 1: can never sell them that sort of thing. But then, yeah, 493 00:27:03,359 --> 00:27:06,840 Speaker 1: you're having to design your robots to to suffer to 494 00:27:06,840 --> 00:27:09,880 Speaker 1: a certain extent, which I guess that means that goes 495 00:27:09,920 --> 00:27:12,480 Speaker 1: back to what C three po said, right, he said, 496 00:27:12,760 --> 00:27:15,040 Speaker 1: you know about about being made to suffer. It seems 497 00:27:15,040 --> 00:27:17,320 Speaker 1: to be our lot in life. Oh, that's interesting. I 498 00:27:17,320 --> 00:27:19,560 Speaker 1: hadn't thought about that. Yeah. Clearly R two, D two 499 00:27:19,600 --> 00:27:22,679 Speaker 1: and C three p O have have inherent desires to 500 00:27:22,720 --> 00:27:26,280 Speaker 1: avoid pain. They have been programmed with that. Yeah, but 501 00:27:26,359 --> 00:27:29,440 Speaker 1: as we've said that, that's not standard issue for robots. 502 00:27:29,520 --> 00:27:32,359 Speaker 1: Most robots don't care about whether or not they get injured, Like, 503 00:27:32,440 --> 00:27:35,159 Speaker 1: that's not a motivating factor for them. And again it 504 00:27:35,240 --> 00:27:38,560 Speaker 1: raises this bizarre question of like, what are you doing 505 00:27:38,640 --> 00:27:40,919 Speaker 1: when you punch the robot? Like, what is the I 506 00:27:40,920 --> 00:27:43,840 Speaker 1: guess it's making you feel better, But does it make 507 00:27:43,880 --> 00:27:46,359 Speaker 1: you feel better if like you know that the robot 508 00:27:46,440 --> 00:27:50,280 Speaker 1: doesn't actually care? Yeah, and then what then what needs 509 00:27:50,320 --> 00:27:53,000 Speaker 1: to be done to convince you that it does? K Yeah, 510 00:27:53,000 --> 00:27:55,359 Speaker 1: it just gets very sticky, very quickly. And then of 511 00:27:55,400 --> 00:27:58,320 Speaker 1: course turns the mirror back on the way we handle 512 00:27:59,160 --> 00:28:02,600 Speaker 1: human to humans in areas. Right, but anyway, limly in 513 00:28:02,680 --> 00:28:06,120 Speaker 1: casey I guess to to summarize their position, they say, Okay, 514 00:28:06,160 --> 00:28:11,440 Speaker 1: increasingly independent robots and AI are coming. They're they're infiltrating 515 00:28:11,480 --> 00:28:14,119 Speaker 1: more and more into society, and they will inevitably do 516 00:28:14,200 --> 00:28:18,080 Speaker 1: bad things. When that happens, the legal system will try 517 00:28:18,119 --> 00:28:21,439 Speaker 1: to order remedies to make things right in you know, 518 00:28:21,480 --> 00:28:25,399 Speaker 1: when when harm has been caused. Our current legal understanding 519 00:28:25,400 --> 00:28:28,760 Speaker 1: of remedies is based on the assumption of human agents 520 00:28:28,760 --> 00:28:32,440 Speaker 1: and human agents only, and its rules are not suited 521 00:28:32,480 --> 00:28:36,400 Speaker 1: to dealing with robot crime or robot offenses quote. As 522 00:28:36,440 --> 00:28:39,760 Speaker 1: we have shown, failing to recognize those differences could result 523 00:28:39,800 --> 00:28:45,360 Speaker 1: in significant unintended consequences, inadvertently encouraging the wrong behaviors or 524 00:28:45,400 --> 00:28:50,840 Speaker 1: even rendering our most important remedial mechanisms functionally irrelevant. Uh So, 525 00:28:50,920 --> 00:28:53,840 Speaker 1: to take robot agents into account, we're going to have 526 00:28:53,880 --> 00:28:57,360 Speaker 1: to examine and rethink how our systems of remedies work. 527 00:28:57,880 --> 00:29:00,680 Speaker 1: But and this is a point we've been making all ready, 528 00:29:00,720 --> 00:29:03,280 Speaker 1: this could have multiple benefits because it could also lead 529 00:29:03,320 --> 00:29:06,040 Speaker 1: to a better understanding of how we apply these remedies 530 00:29:06,080 --> 00:29:10,200 Speaker 1: to cases dealing exclusively with humans quote. Indeed, one of 531 00:29:10,240 --> 00:29:13,560 Speaker 1: the most pressing challenges raised by the technology is its 532 00:29:13,600 --> 00:29:17,480 Speaker 1: tendency to reveal the trade offs between sidal economic and 533 00:29:17,560 --> 00:29:21,200 Speaker 1: legal values that many of us today make without deeply 534 00:29:21,240 --> 00:29:26,000 Speaker 1: appreciating the downstream consequences. They write, we need a law 535 00:29:26,040 --> 00:29:29,400 Speaker 1: of remedies for robots, but in the final analysis, remedies 536 00:29:29,440 --> 00:29:32,200 Speaker 1: for robots may also end up being remedies for all 537 00:29:32,280 --> 00:29:34,880 Speaker 1: of us. Now, like I said, this is a very 538 00:29:34,920 --> 00:29:37,080 Speaker 1: long paper. We can't do justice to all of the 539 00:29:37,120 --> 00:29:40,000 Speaker 1: subjects they raise, but to focus on some highlights, I 540 00:29:40,040 --> 00:29:43,280 Speaker 1: thought one interesting place to look was when they try 541 00:29:43,320 --> 00:29:45,680 Speaker 1: to get into the definition of what actually makes a 542 00:29:45,800 --> 00:29:49,200 Speaker 1: robot in the legal sense. Obviously, there's going to be 543 00:29:49,240 --> 00:29:53,240 Speaker 1: some difficulty here because think about how differently the term 544 00:29:53,360 --> 00:29:55,760 Speaker 1: is used and how many different things it's applied to 545 00:29:55,840 --> 00:29:58,640 Speaker 1: in the world. Uh. The authors here is cite a 546 00:29:58,720 --> 00:30:02,640 Speaker 1: professor Ryan Hallo, who in the past had written that 547 00:30:02,720 --> 00:30:07,160 Speaker 1: there are three important characteristics that define a robot and 548 00:30:07,240 --> 00:30:10,040 Speaker 1: make it different from any machine, just like a computer 549 00:30:10,360 --> 00:30:14,440 Speaker 1: or phone. And Callo says that these three uh, these 550 00:30:14,480 --> 00:30:19,360 Speaker 1: three qualities are embodiment, emergence, and social valence. So to 551 00:30:19,440 --> 00:30:23,880 Speaker 1: quote from Calo, robotics combines, arguably for the first time, 552 00:30:24,240 --> 00:30:28,040 Speaker 1: the promiscuity of information with the embodied capacity to do 553 00:30:28,240 --> 00:30:34,200 Speaker 1: physical harm. Robots display increasingly emergent behavior, permitting the technology 554 00:30:34,240 --> 00:30:38,840 Speaker 1: to accomplish both useful and unfortunate tasks in unexpected ways. 555 00:30:38,840 --> 00:30:43,240 Speaker 1: I like that idea of unfortunate tasks. Um and robots, 556 00:30:43,280 --> 00:30:46,360 Speaker 1: more so than any technology and history, feel to us 557 00:30:46,400 --> 00:30:50,440 Speaker 1: like social actors, a tendency so strong that soldiers sometimes 558 00:30:50,520 --> 00:30:55,080 Speaker 1: jeopardize themselves to preserve the lives of military robots in 559 00:30:55,080 --> 00:30:58,200 Speaker 1: the field, and lives is in quotes there. Yeah, you 560 00:30:58,240 --> 00:31:00,680 Speaker 1: may remember this from the film it came out a 561 00:31:00,680 --> 00:31:06,120 Speaker 1: few years back, Saving Private Cigarette Robot. It's quite touching. 562 00:31:06,800 --> 00:31:08,840 Speaker 1: I mean, it seems absurd, but it does seem to 563 00:31:08,840 --> 00:31:11,400 Speaker 1: play on our natural biases. I want to talk about 564 00:31:11,800 --> 00:31:14,360 Speaker 1: a couple of examples from a psychology paper in a second, 565 00:31:14,400 --> 00:31:18,680 Speaker 1: but like, uh, we're we're just so ready to look 566 00:31:18,680 --> 00:31:21,440 Speaker 1: at machines like humans and and treat them as such. 567 00:31:21,520 --> 00:31:25,520 Speaker 1: It seems almost impossible to avoid. But anyway to pick 568 00:31:25,600 --> 00:31:28,360 Speaker 1: up with Limly and Casey after after that callo quote 569 00:31:28,560 --> 00:31:31,320 Speaker 1: they say quote. In light of these qualities, Calo argues 570 00:31:31,360 --> 00:31:34,560 Speaker 1: that robots are best thought of as artificial objects or 571 00:31:34,640 --> 00:31:38,440 Speaker 1: systems that sense, process, and act upon the world to 572 00:31:38,560 --> 00:31:42,240 Speaker 1: at least some degree. Thus, a robot in the strongest, 573 00:31:42,280 --> 00:31:45,240 Speaker 1: fullest sense of the term, exists in the world as 574 00:31:45,240 --> 00:31:50,520 Speaker 1: a corporeal object with the capacity to exert itself physically. Though, 575 00:31:50,760 --> 00:31:54,080 Speaker 1: It's interesting to me that even this attempt to give 576 00:31:54,160 --> 00:31:58,120 Speaker 1: a strict and legally useful definition of a robot includes 577 00:31:58,160 --> 00:32:01,280 Speaker 1: a subjective component. I brought this up earlier, the component 578 00:32:01,320 --> 00:32:06,120 Speaker 1: about human feelings, the social valence criterion. The callos sites here. 579 00:32:06,440 --> 00:32:10,480 Speaker 1: This means they feel to us like social actors. Yeah, 580 00:32:10,520 --> 00:32:12,480 Speaker 1: Like I was wondering in all of this, like, where 581 00:32:12,520 --> 00:32:18,200 Speaker 1: does a particularly malicious robo call fit into the scenario? Say, 582 00:32:18,200 --> 00:32:21,360 Speaker 1: a robo call that is not just about trying to 583 00:32:21,360 --> 00:32:23,800 Speaker 1: sell you something, but it's like, you know, actively trying 584 00:32:23,840 --> 00:32:25,640 Speaker 1: to say, get a credit card number out of you 585 00:32:25,760 --> 00:32:29,480 Speaker 1: for nefarious purposes. Yeah, that's a really good point. And 586 00:32:29,280 --> 00:32:32,920 Speaker 1: and along those lines, Limbly and Casey argue that actually, 587 00:32:33,600 --> 00:32:37,840 Speaker 1: they don't think the embodiment criteria of hardware is necessarily 588 00:32:37,920 --> 00:32:40,640 Speaker 1: a good one. That maybe our concept of a robot 589 00:32:41,000 --> 00:32:44,080 Speaker 1: should be less limited to the essentialist quality of being 590 00:32:44,120 --> 00:32:49,960 Speaker 1: embodied and more just apply to anything that exhibits intelligent behavior. 591 00:32:50,760 --> 00:32:53,320 Speaker 1: And exactly things like that robot call would be would 592 00:32:53,360 --> 00:32:55,480 Speaker 1: be a good example. Uh, the things we think of 593 00:32:55,520 --> 00:32:58,680 Speaker 1: as robots probably do. They They're not just like stand 594 00:32:58,720 --> 00:33:02,560 Speaker 1: alone objects. They interact with the broader world in some way, 595 00:33:02,720 --> 00:33:07,080 Speaker 1: but they could be entirely software based. Yeah, I guess certainly. 596 00:33:07,160 --> 00:33:09,240 Speaker 1: The roomba is a great example, or any kind of 597 00:33:09,280 --> 00:33:13,040 Speaker 1: like vacuuming robot where it's it's it's it's in your house, 598 00:33:13,160 --> 00:33:15,000 Speaker 1: or it's in a room in your house. It's it's 599 00:33:15,080 --> 00:33:19,360 Speaker 1: interacting in your environment and it's essentially making decisions about 600 00:33:19,440 --> 00:33:22,800 Speaker 1: how best to move around that space. Sure, but if 601 00:33:22,840 --> 00:33:24,880 Speaker 1: you want to take it out of the embodied space, 602 00:33:25,000 --> 00:33:27,400 Speaker 1: you could have the idea of bots on the Internet. 603 00:33:27,400 --> 00:33:32,440 Speaker 1: There's things out there acting autonomously to some extent and doing, 604 00:33:32,480 --> 00:33:35,960 Speaker 1: you know, executing some behavior, acting almost maliciously. But we 605 00:33:36,040 --> 00:33:39,640 Speaker 1: we are tempted to call them bots meaning short for robots, 606 00:33:39,680 --> 00:33:43,720 Speaker 1: because they have some kind of apparent independent agency and 607 00:33:43,720 --> 00:33:48,240 Speaker 1: they're doing something that seems at least halfway intelligent. Right, Yeah, 608 00:33:48,240 --> 00:33:51,200 Speaker 1: And you can easily imagine how they could they are 609 00:33:51,240 --> 00:33:53,000 Speaker 1: and well they I mean, they are used maliciously in 610 00:33:53,040 --> 00:33:56,360 Speaker 1: some cases, but how something like a social media bot 611 00:33:56,360 --> 00:33:59,040 Speaker 1: that responds to certain comments in a particular way, Like, 612 00:33:59,120 --> 00:34:01,640 Speaker 1: it's very easy to imagine how you how that could 613 00:34:01,680 --> 00:34:04,880 Speaker 1: be utilized in a way that would be like not 614 00:34:04,960 --> 00:34:09,160 Speaker 1: only annoying, but just that outright harmful, even physically harmful. 615 00:34:09,360 --> 00:34:11,439 Speaker 1: Oh yeah, I mean think about some of these, uh say, 616 00:34:11,480 --> 00:34:15,239 Speaker 1: like bots on social media that try to crowdsource information, 617 00:34:15,440 --> 00:34:18,799 Speaker 1: like during a natural disaster or something like that. You 618 00:34:18,800 --> 00:34:23,480 Speaker 1: could imagine uh, intentionally maliciously manipulating a bot of this 619 00:34:23,640 --> 00:34:26,480 Speaker 1: kind to like have you know, bad information on it 620 00:34:26,600 --> 00:34:29,560 Speaker 1: or something. Yeah, yeah, or you know, anything that a 621 00:34:29,600 --> 00:34:33,640 Speaker 1: troll can do on social media, a bought could conceivably 622 00:34:33,680 --> 00:34:35,840 Speaker 1: do as well. So that just you know, opens up 623 00:34:35,880 --> 00:34:40,000 Speaker 1: the door, right. But coming back to this, so there's 624 00:34:40,040 --> 00:34:44,800 Speaker 1: this interesting idea that robots feel to us like social actors, 625 00:34:44,840 --> 00:34:47,400 Speaker 1: and that seems to be, at least by some people's definitions, 626 00:34:47,680 --> 00:34:51,680 Speaker 1: a kind of inextricable quality of what makes a robot 627 00:34:51,880 --> 00:34:54,840 Speaker 1: like it feels like, at least to a small extent, 628 00:34:54,960 --> 00:34:57,680 Speaker 1: like a person somehow. And it reminds me of the 629 00:34:57,800 --> 00:35:00,800 Speaker 1: psychology paper I was looking at just recently on human 630 00:35:00,880 --> 00:35:05,360 Speaker 1: social interaction with robots that is by Elizabeth Broadbent called 631 00:35:05,480 --> 00:35:09,360 Speaker 1: Interactions with Robots The Truths We Reveal about Ourselves, published 632 00:35:09,360 --> 00:35:13,040 Speaker 1: in the Annual Review of Psychology in seventeen. Uh. This 633 00:35:13,120 --> 00:35:15,080 Speaker 1: was a highly cited paper and it seems to be 634 00:35:15,120 --> 00:35:17,240 Speaker 1: it's a big literature review of a lot of different 635 00:35:17,560 --> 00:35:22,560 Speaker 1: stuff about about how humans interact emotionally and socially with robots, 636 00:35:23,080 --> 00:35:25,520 Speaker 1: And the one section I was thinking about was where 637 00:35:25,520 --> 00:35:28,839 Speaker 1: she reviews a bunch of other studies about how we 638 00:35:28,960 --> 00:35:32,960 Speaker 1: mindlessly apply social rules to robots. So there are a 639 00:35:33,000 --> 00:35:35,120 Speaker 1: ton of different examples, but just to cite a couple 640 00:35:35,160 --> 00:35:38,600 Speaker 1: of them, one she writes up his quote. After using 641 00:35:38,600 --> 00:35:42,640 Speaker 1: a computer, people evaluate its performance more highly if the 642 00:35:42,800 --> 00:35:47,160 Speaker 1: same computer delivers the rating scale then if another computer 643 00:35:47,320 --> 00:35:49,799 Speaker 1: delivers the rating scale, or if they rate it with 644 00:35:49,880 --> 00:35:53,640 Speaker 1: pen and paper. So like, if you know, you get 645 00:35:53,640 --> 00:35:56,040 Speaker 1: a thing at the end of a test that says like, hey, 646 00:35:56,080 --> 00:35:58,560 Speaker 1: you know, how did you enjoy interacting with this machine, 647 00:35:58,920 --> 00:36:02,400 Speaker 1: You're more likely to give it a higher score if 648 00:36:02,440 --> 00:36:04,600 Speaker 1: you're still sitting on the same machine. Or at least 649 00:36:04,600 --> 00:36:06,840 Speaker 1: that was what was found by nas at all in 650 00:36:06,920 --> 00:36:12,120 Speaker 1: nine UH and the and Broadbent rights quote. This result 651 00:36:12,160 --> 00:36:15,200 Speaker 1: is similar to experiment or bias, in which people try 652 00:36:15,280 --> 00:36:19,240 Speaker 1: not to offend a human researcher. Another example of social 653 00:36:19,239 --> 00:36:23,680 Speaker 1: behavior is reciprocity. We help others who help us. People 654 00:36:23,760 --> 00:36:26,759 Speaker 1: helped to computer with a task for more time and 655 00:36:26,800 --> 00:36:30,760 Speaker 1: more accurately if the computer first helped them with a task, 656 00:36:30,880 --> 00:36:32,799 Speaker 1: then if it did not, and this was found by 657 00:36:32,880 --> 00:36:37,520 Speaker 1: Fog and nas In. I love that idea of people, 658 00:36:37,719 --> 00:36:41,239 Speaker 1: you know, being more reluctant to rate a computer, uh 659 00:36:41,239 --> 00:36:44,840 Speaker 1: poorly if they're still interacting with the same computer. That 660 00:36:44,960 --> 00:36:48,440 Speaker 1: that's that seems perfectly true to me. But another interesting 661 00:36:48,480 --> 00:36:52,239 Speaker 1: one from the summary is quote research and psychology has 662 00:36:52,239 --> 00:36:56,000 Speaker 1: shown that the presence of an observer can increase people's honesty, 663 00:36:56,120 --> 00:36:59,200 Speaker 1: but incentives for cheating can reduce honesty, and this is 664 00:36:59,239 --> 00:37:02,400 Speaker 1: found by Covey at all in nineteen eighty nine. In 665 00:37:02,440 --> 00:37:05,880 Speaker 1: a robot version of this work, participants given incentives to 666 00:37:05,960 --> 00:37:09,680 Speaker 1: cheat were shown to be less honest when alone compared 667 00:37:09,719 --> 00:37:13,600 Speaker 1: to when they were accompanied by either a human or 668 00:37:13,640 --> 00:37:16,200 Speaker 1: by a simple robot, and that was found by Hoffman 669 00:37:16,280 --> 00:37:20,239 Speaker 1: at all in. This illustrates that the social presence of 670 00:37:20,320 --> 00:37:23,280 Speaker 1: robots may make people feel as though they're being watched 671 00:37:23,320 --> 00:37:26,640 Speaker 1: and increase their honesty in an effect similar to that 672 00:37:26,719 --> 00:37:30,000 Speaker 1: produced by the presence of humans. Now this is inter right. 673 00:37:30,040 --> 00:37:32,440 Speaker 1: This This also reminds me of various studies that have 674 00:37:32,520 --> 00:37:37,560 Speaker 1: gone into sort of the idea of of imagine beings 675 00:37:37,760 --> 00:37:42,000 Speaker 1: or religious beings watching us while we're doing things right, 676 00:37:42,200 --> 00:37:45,000 Speaker 1: or even just like I imagery, like putting some eyes 677 00:37:45,239 --> 00:37:48,000 Speaker 1: imagery on a wall looking at people while they're like, 678 00:37:48,040 --> 00:37:50,080 Speaker 1: I don't know, not supposed to steal from the collection 679 00:37:50,120 --> 00:37:52,120 Speaker 1: plate or something like that. I don't know if it's 680 00:37:52,160 --> 00:37:54,560 Speaker 1: to the same extent, but at least in the same 681 00:37:54,600 --> 00:37:57,960 Speaker 1: direction that the presence of another human is. You know, 682 00:37:57,800 --> 00:37:59,920 Speaker 1: your You might be a little bit worried that are 683 00:38:00,000 --> 00:38:02,279 Speaker 1: who D two is gonna, you know, judge your moral 684 00:38:02,400 --> 00:38:05,520 Speaker 1: character harshly or tattle on you. I'm not as worried 685 00:38:05,520 --> 00:38:16,480 Speaker 1: about our two, but um three po snitch than Coming 686 00:38:16,520 --> 00:38:19,000 Speaker 1: back to Limele and Casey, so they talked for a 687 00:38:19,040 --> 00:38:22,480 Speaker 1: long time about how robots get their intelligence. They talked 688 00:38:22,480 --> 00:38:25,640 Speaker 1: about the importance of machine learning for the modern generations 689 00:38:25,719 --> 00:38:28,920 Speaker 1: of robots and AI, that it's just not practical to 690 00:38:29,520 --> 00:38:32,279 Speaker 1: hard code AI the way we used to imagine. You know, 691 00:38:32,320 --> 00:38:34,480 Speaker 1: you'd be a programmer and you're just like creating a 692 00:38:34,480 --> 00:38:37,600 Speaker 1: lot of strings of if then statements like you know, 693 00:38:37,960 --> 00:38:40,840 Speaker 1: the kind of intelligence that we expect from a modern 694 00:38:40,880 --> 00:38:44,560 Speaker 1: AI or or intelligent robot is too complex for people 695 00:38:44,600 --> 00:38:47,320 Speaker 1: to program in a in a direct way like that. Instead, 696 00:38:47,360 --> 00:38:50,000 Speaker 1: they've got to be trained on natural data sets through 697 00:38:50,040 --> 00:38:53,160 Speaker 1: machine learning, but of course doing so comes at the 698 00:38:53,239 --> 00:38:58,640 Speaker 1: cost of increasing uncertainty about their future behaviors. Behaviors could 699 00:38:58,640 --> 00:39:02,160 Speaker 1: emerge that a conchi and just programmer would never intentionally 700 00:39:02,280 --> 00:39:05,959 Speaker 1: hard code into the system. Uh So, so that brings 701 00:39:06,000 --> 00:39:09,560 Speaker 1: us to like, what types of harms could we expect 702 00:39:09,640 --> 00:39:12,680 Speaker 1: from robots and AI? And the authors here come up 703 00:39:12,760 --> 00:39:15,040 Speaker 1: with what I think are some very useful categories, some 704 00:39:15,120 --> 00:39:17,800 Speaker 1: sort of like cubby holes to slot the different types 705 00:39:17,840 --> 00:39:20,920 Speaker 1: of AI fears into. So the first kind is what 706 00:39:20,960 --> 00:39:23,839 Speaker 1: they call unavoidable harms. These are probably not the main 707 00:39:23,880 --> 00:39:26,440 Speaker 1: ones to be worried about, but they are worth thinking about. 708 00:39:26,880 --> 00:39:29,400 Speaker 1: Uh And this is just the fact that some dangers 709 00:39:29,440 --> 00:39:32,880 Speaker 1: are inherent too many products and services, we just accept 710 00:39:32,960 --> 00:39:36,000 Speaker 1: them as the cost of having those products and services 711 00:39:36,000 --> 00:39:38,279 Speaker 1: in the first place. So like this would just be 712 00:39:38,880 --> 00:39:42,080 Speaker 1: cigarette bought just by virtue of selling cigarettes is doing 713 00:39:42,080 --> 00:39:44,719 Speaker 1: harm to people? Right? Yes, I mean, yeah, the fact 714 00:39:44,840 --> 00:39:47,319 Speaker 1: that you have cigarettes. There is some harm coming from that. 715 00:39:47,400 --> 00:39:51,120 Speaker 1: But there are also ones that are more fully integrated 716 00:39:51,120 --> 00:39:54,680 Speaker 1: into just the way society works, like having cars. It 717 00:39:54,840 --> 00:39:58,080 Speaker 1: is absolutely inevitable that people driving cars are going to 718 00:39:58,080 --> 00:40:00,759 Speaker 1: crash their cars and there will be FATA lalities from that, 719 00:40:01,000 --> 00:40:03,000 Speaker 1: and you can think of ways of reducing it, but 720 00:40:03,040 --> 00:40:07,080 Speaker 1: there's there's really not any expectation that we can have 721 00:40:07,320 --> 00:40:10,239 Speaker 1: a country that has car based transportation and there will 722 00:40:10,280 --> 00:40:13,200 Speaker 1: not be any accidents because there will always be things 723 00:40:13,280 --> 00:40:16,120 Speaker 1: that are that are not even reducible to driver error 724 00:40:16,239 --> 00:40:18,240 Speaker 1: or to malfunction of the cars, right, like a tree 725 00:40:18,280 --> 00:40:21,440 Speaker 1: falls on the road or something, right, birds, wild animals. 726 00:40:21,440 --> 00:40:24,080 Speaker 1: Any So, even though they're I think there's some very 727 00:40:24,080 --> 00:40:28,120 Speaker 1: convincing arguments to be made that uh a switch to 728 00:40:28,280 --> 00:40:33,360 Speaker 1: self driving cars would create a much safer uh travel environment, 729 00:40:33,360 --> 00:40:36,080 Speaker 1: that it would make roads safer. You're not gonna you're 730 00:40:36,080 --> 00:40:39,840 Speaker 1: not gonna get to absolute zero crashes or absolute zero 731 00:40:39,960 --> 00:40:43,360 Speaker 1: road fatalities, right, I mean you wouldn't even if the 732 00:40:43,800 --> 00:40:46,839 Speaker 1: driving algorithms were perfect, right, and they're probably not going 733 00:40:46,880 --> 00:40:49,200 Speaker 1: to be perfect. They may well be and probably are 734 00:40:49,239 --> 00:40:52,800 Speaker 1: going to be better than the average human driver. Yeah, Okay, 735 00:40:52,840 --> 00:40:55,640 Speaker 1: So that's just there's just unavoidable harm that comes from 736 00:40:55,760 --> 00:40:58,319 Speaker 1: using any type of product or service, and when you 737 00:40:58,360 --> 00:41:01,000 Speaker 1: integrate robotics in AI and of that product or service, 738 00:41:01,080 --> 00:41:04,200 Speaker 1: those unavoidable harms will just continue. But that's something we 739 00:41:04,360 --> 00:41:08,960 Speaker 1: already deal with. The next category is deliberate least cost harms. 740 00:41:09,560 --> 00:41:12,280 Speaker 1: This is similar to unavoidable harms, but it's in cases 741 00:41:12,280 --> 00:41:15,920 Speaker 1: where the machine actually is able to make a decision 742 00:41:16,160 --> 00:41:19,520 Speaker 1: with with important ramifications, like it can make a decision 743 00:41:19,560 --> 00:41:22,719 Speaker 1: to act in a way that causes harm, but is 744 00:41:22,760 --> 00:41:26,200 Speaker 1: attempting to cause the least harm possible. So, in a sense, 745 00:41:26,320 --> 00:41:29,640 Speaker 1: this is forcing robots to do the trolley problem. Right, 746 00:41:29,960 --> 00:41:32,520 Speaker 1: do you switch to the track that has one person 747 00:41:32,560 --> 00:41:36,680 Speaker 1: sitting on the train tracks instead of five people? And 748 00:41:36,719 --> 00:41:40,480 Speaker 1: this will be another inevitable capability of autonomous cars, but 749 00:41:40,520 --> 00:41:44,319 Speaker 1: it raises all kinds of thorny questions. If an autonomous 750 00:41:44,440 --> 00:41:47,440 Speaker 1: vehicle can avoid a head on collision that will likely 751 00:41:47,520 --> 00:41:50,600 Speaker 1: kill multiple people by suddenly swerving out of the way 752 00:41:50,680 --> 00:41:55,279 Speaker 1: and hitting one pedestrian, that may indeed avoid a greater harm, 753 00:41:55,880 --> 00:41:58,239 Speaker 1: but that's probably cold comfort to the one person who 754 00:41:58,320 --> 00:42:01,560 Speaker 1: got hit. Right, Yeah, and then when you you have 755 00:42:01,600 --> 00:42:05,239 Speaker 1: a robot or some sort of an AI involved in 756 00:42:05,280 --> 00:42:09,000 Speaker 1: that decision making, I mean, it's it's you can just 757 00:42:09,080 --> 00:42:13,799 Speaker 1: imagine the the the intensity of the arguments and the 758 00:42:13,920 --> 00:42:17,280 Speaker 1: conversations that would ensue. Right, But then the authors raised 759 00:42:17,280 --> 00:42:19,759 Speaker 1: what I think is a very interesting point. They say 760 00:42:19,800 --> 00:42:22,520 Speaker 1: that this kind of life or death trolley problem will 761 00:42:22,520 --> 00:42:27,160 Speaker 1: probably be the exception rather than the rule. Instead, they say, quote, uh, 762 00:42:27,360 --> 00:42:32,000 Speaker 1: far likelier, I'll build albeit subtler scenarios involving least cost 763 00:42:32,080 --> 00:42:36,279 Speaker 1: harms will involve robots that make decisions with seemingly trivial 764 00:42:36,400 --> 00:42:40,480 Speaker 1: implications at an individual level, but which result in non 765 00:42:40,560 --> 00:42:45,000 Speaker 1: trivial impacts at scale. Self driving cars, for example, will 766 00:42:45,120 --> 00:42:48,400 Speaker 1: rarely face a stark choice between killing a child or 767 00:42:48,480 --> 00:42:52,200 Speaker 1: killing two elderly people, but thousands of times a day 768 00:42:52,239 --> 00:42:55,480 Speaker 1: they will have to choose precisely where to change lanes, 769 00:42:55,880 --> 00:42:59,480 Speaker 1: how closely to trail another vehicle, when to accelerate on 770 00:42:59,520 --> 00:43:02,719 Speaker 1: a freeway on ramp, and so forth. Each of these 771 00:43:02,760 --> 00:43:08,160 Speaker 1: decisions will entail some probability of injuring someone, I guess. 772 00:43:08,160 --> 00:43:09,799 Speaker 1: Another thing to keep in mind, like with the trap, 773 00:43:09,920 --> 00:43:13,759 Speaker 1: with the trolley problem, generally, when you're dealing with it, 774 00:43:13,920 --> 00:43:15,920 Speaker 1: there's a there's a lot of emphasis on the problem 775 00:43:16,000 --> 00:43:18,920 Speaker 1: aspect of it, you know, like the trolley problem should 776 00:43:18,960 --> 00:43:22,600 Speaker 1: be a an ethical dilemma. It should it should hurt 777 00:43:22,640 --> 00:43:25,120 Speaker 1: a bit to try and figure out how of what 778 00:43:25,280 --> 00:43:27,960 Speaker 1: to do, and the idea of the trolley problem being 779 00:43:28,040 --> 00:43:31,960 Speaker 1: something that is encountered and decided upon, like as in 780 00:43:32,000 --> 00:43:35,840 Speaker 1: a split second, by a machine, um, by an algorithm 781 00:43:35,880 --> 00:43:39,200 Speaker 1: like that. That feels that that feels a bit worse 782 00:43:39,239 --> 00:43:42,120 Speaker 1: to us. You know, that feels like if if it's 783 00:43:42,160 --> 00:43:44,160 Speaker 1: if it's an easy decision, even if it's just based 784 00:43:44,160 --> 00:43:47,960 Speaker 1: purely on math, you know, it's um, it feels wrong 785 00:43:47,960 --> 00:43:50,880 Speaker 1: on some level. Oh yeah, yeah, so I think you're right. 786 00:43:51,120 --> 00:43:53,600 Speaker 1: But also the thing they're bringing up here is that 787 00:43:53,719 --> 00:43:56,680 Speaker 1: the trolley problem you're actually more often facing is that 788 00:43:57,280 --> 00:43:59,680 Speaker 1: every single day you're your autonomous car is going to 789 00:43:59,800 --> 00:44:02,800 Speaker 1: make you know, hundreds or thousands of trolley problem calls 790 00:44:02,800 --> 00:44:07,480 Speaker 1: where on one track it is getting to your destination 791 00:44:08,000 --> 00:44:10,920 Speaker 1: a few seconds faster, and on the other track is 792 00:44:10,960 --> 00:44:14,279 Speaker 1: a one in a million chance of killing somebody. Yeah yeah, 793 00:44:14,320 --> 00:44:16,399 Speaker 1: and these do. We make these decisions all the time, 794 00:44:16,560 --> 00:44:18,399 Speaker 1: but we don't focus on these. That's but I think 795 00:44:18,400 --> 00:44:20,839 Speaker 1: that's part of the issue, you know, Yeah, exactly, You're like, 796 00:44:20,880 --> 00:44:24,440 Speaker 1: should I, okay, should I take a left on this road? Well, 797 00:44:24,480 --> 00:44:27,440 Speaker 1: there's a chance there's a speeding car just above, just 798 00:44:27,560 --> 00:44:29,520 Speaker 1: over the edge there, and I can't see it, but 799 00:44:29,560 --> 00:44:31,360 Speaker 1: I'm going to take that chance because I want to 800 00:44:31,400 --> 00:44:34,239 Speaker 1: cut three minutes off my drive to work. Yes. Uh, 801 00:44:34,520 --> 00:44:36,680 Speaker 1: this is actually a very good point that we we 802 00:44:36,800 --> 00:44:40,400 Speaker 1: already make these decisions, but we just don't think about 803 00:44:40,480 --> 00:44:44,160 Speaker 1: them in these explicit probability calculations, and there may be 804 00:44:44,239 --> 00:44:46,920 Speaker 1: some consequences to thinking about them this way, which is 805 00:44:46,960 --> 00:44:49,640 Speaker 1: there could be a weird like perceived downside just to 806 00:44:49,880 --> 00:44:53,919 Speaker 1: making these kind of these kind of calculations objective and 807 00:44:53,920 --> 00:44:56,319 Speaker 1: and explicit. Yeah, I mean I've I've run into this 808 00:44:56,360 --> 00:44:58,640 Speaker 1: with some of the map programs that I used to 809 00:44:58,719 --> 00:45:01,040 Speaker 1: drive before, where I want to tell it in some 810 00:45:01,080 --> 00:45:03,520 Speaker 1: cases like like give me the ability and maybe they 811 00:45:03,520 --> 00:45:06,360 Speaker 1: have this now, but there was one left turn in 812 00:45:06,400 --> 00:45:09,400 Speaker 1: particular where I would the ability to to flag this 813 00:45:09,480 --> 00:45:12,400 Speaker 1: left turn. This is a dangerous left turn. You have 814 00:45:12,520 --> 00:45:15,520 Speaker 1: put me in a position to make. Um, I might 815 00:45:15,640 --> 00:45:17,920 Speaker 1: know which left turn you're talking about. If you probably 816 00:45:18,200 --> 00:45:21,080 Speaker 1: in town, it's it's it's in town. It's near our office. 817 00:45:21,120 --> 00:45:23,799 Speaker 1: So yeah, I mean yeah, by the way, if you're 818 00:45:23,800 --> 00:45:27,200 Speaker 1: out there working on programming driving apps, you should absolutely 819 00:45:27,239 --> 00:45:29,759 Speaker 1: include the A toggle key where you can say no 820 00:45:29,920 --> 00:45:33,399 Speaker 1: left turns please. Yes, that that is highly useful. I'm 821 00:45:33,440 --> 00:45:36,719 Speaker 1: to understand this is how one of my aunts got around. Like, 822 00:45:36,880 --> 00:45:39,160 Speaker 1: if they got older and they were less adventurous driving, 823 00:45:39,200 --> 00:45:41,319 Speaker 1: they would only take right turns, and they would do 824 00:45:41,360 --> 00:45:43,680 Speaker 1: all their driving so that no left turns were made. 825 00:45:44,040 --> 00:45:46,000 Speaker 1: I think I have one time read This could be 826 00:45:46,040 --> 00:45:49,360 Speaker 1: totally wrong, but I at least one time I remember 827 00:45:49,400 --> 00:45:53,520 Speaker 1: reading a claim that, like, you know, the traffic efficiency 828 00:45:53,520 --> 00:45:56,480 Speaker 1: would be x percent higher and people would spend x 829 00:45:57,000 --> 00:45:59,279 Speaker 1: number of minutes less time in traffic if there were 830 00:45:59,320 --> 00:46:01,560 Speaker 1: no such thing as left turns, if everybody had to 831 00:46:01,560 --> 00:46:04,680 Speaker 1: get everywhere by only doing you know, full right turns 832 00:46:04,719 --> 00:46:07,880 Speaker 1: to to go around the block. Interesting. I'm sure there 833 00:46:07,880 --> 00:46:09,920 Speaker 1: would be some cases where you can't do that, but 834 00:46:10,000 --> 00:46:12,080 Speaker 1: you know, in a in a grid city, seems to 835 00:46:12,120 --> 00:46:13,919 Speaker 1: make a lot of sense. Maybe you get like one 836 00:46:14,000 --> 00:46:17,040 Speaker 1: left turn of day. It's some sort of card system. 837 00:46:17,239 --> 00:46:19,880 Speaker 1: But like I said, I I cannot confirm that. Okay, 838 00:46:19,880 --> 00:46:22,320 Speaker 1: But anyway, the next categories of harm they talk about 839 00:46:22,520 --> 00:46:25,719 Speaker 1: this one is defect driven harms. Uh, this one is 840 00:46:25,880 --> 00:46:28,680 Speaker 1: very easy to understand. The robot harms someone because of 841 00:46:28,680 --> 00:46:31,600 Speaker 1: a design flaw or a bug or a mistake, or 842 00:46:31,680 --> 00:46:35,360 Speaker 1: it's just broken. You know, a warehouse loading robot is 843 00:46:35,440 --> 00:46:38,640 Speaker 1: designed to only operate when no humans are nearby it. 844 00:46:38,840 --> 00:46:41,400 Speaker 1: But there's a malfunction with one of its sensors and 845 00:46:41,480 --> 00:46:44,280 Speaker 1: it fails to detect the presence of a human operator 846 00:46:44,800 --> 00:46:46,400 Speaker 1: trying to get I don't know, a piece of junk 847 00:46:46,440 --> 00:46:48,640 Speaker 1: out of it out of one of its hinges, and 848 00:46:48,719 --> 00:46:52,160 Speaker 1: it moves and kills them. Okay, this is pretty straightforward. 849 00:46:52,160 --> 00:46:54,880 Speaker 1: Just it's broken for some reason. The authors here do 850 00:46:54,960 --> 00:46:57,200 Speaker 1: point out that this gets even more complicated when there 851 00:46:57,320 --> 00:47:01,040 Speaker 1: is a human in the loop e g. An autonomous 852 00:47:01,120 --> 00:47:04,440 Speaker 1: car with a human driver who is supposed to intervene 853 00:47:04,600 --> 00:47:06,719 Speaker 1: in the event of an emergency. They talked about one 854 00:47:06,760 --> 00:47:09,640 Speaker 1: case where this happened with with I believe it was 855 00:47:09,680 --> 00:47:13,880 Speaker 1: an uber autonomous vehicle where both the machine and the 856 00:47:14,000 --> 00:47:17,960 Speaker 1: human fail, that both of them failed to stop a 857 00:47:18,000 --> 00:47:21,640 Speaker 1: collision that hurts someone, Like what happens here? Yeah, yeah, 858 00:47:21,800 --> 00:47:24,480 Speaker 1: of course, we we have very similar cases in just 859 00:47:24,560 --> 00:47:27,560 Speaker 1: purely human affairs. Right when questions are asked, like where 860 00:47:27,600 --> 00:47:30,600 Speaker 1: was this person's supervisor? Uh, you know, who who were 861 00:47:30,600 --> 00:47:32,920 Speaker 1: the watchers? Who? There should have been some other person, 862 00:47:33,160 --> 00:47:35,279 Speaker 1: there was someone else in the loop here, Why didn't 863 00:47:35,320 --> 00:47:39,080 Speaker 1: they do something to stop this crime from taking place? Right? Okay? 864 00:47:39,120 --> 00:47:42,319 Speaker 1: After that you get into misuse harms. Now, some of 865 00:47:42,320 --> 00:47:45,759 Speaker 1: these are very obvious, very straightforward, like if you program 866 00:47:45,760 --> 00:47:48,600 Speaker 1: a robot directly to go kill someone, or even if 867 00:47:48,640 --> 00:47:52,680 Speaker 1: you program it to wander around at random swinging a machete. 868 00:47:52,680 --> 00:47:55,680 Speaker 1: In these cases, it seems that the human programmer is 869 00:47:55,719 --> 00:47:58,839 Speaker 1: clearly at fault, Right, the robot has just become a 870 00:47:58,840 --> 00:48:02,240 Speaker 1: weapon of murder or of reckless endangerment, and the person 871 00:48:02,239 --> 00:48:05,399 Speaker 1: who told it to do that is the person responsible. Yeah. 872 00:48:05,440 --> 00:48:10,080 Speaker 1: Like if you take an automotive, uh, like oil changed 873 00:48:10,160 --> 00:48:15,680 Speaker 1: robot and you reprogram it to um do appendectomies and 874 00:48:15,760 --> 00:48:17,960 Speaker 1: people die as a result, Like, that's a misuse you 875 00:48:17,960 --> 00:48:21,240 Speaker 1: can you can only blame the the oil changed robots 876 00:48:21,280 --> 00:48:26,160 Speaker 1: so much because it was not ultimately designed to perform appendectomies. Right. 877 00:48:26,160 --> 00:48:28,320 Speaker 1: In this case, this is more like the hammer example 878 00:48:28,520 --> 00:48:31,680 Speaker 1: used at the beginning. This it's not the robot autonomously 879 00:48:31,840 --> 00:48:34,760 Speaker 1: making the decision to do this. Uh, this is somebody 880 00:48:34,800 --> 00:48:38,640 Speaker 1: just using it as a tool of crime. Yeah, But 881 00:48:38,840 --> 00:48:42,040 Speaker 1: the authors point out that there are cases where quote, 882 00:48:42,080 --> 00:48:45,280 Speaker 1: people will misuse robots in a manner that is neither 883 00:48:45,440 --> 00:48:50,160 Speaker 1: negligent nor criminal, but nevertheless threatens to harm others, and 884 00:48:50,239 --> 00:48:54,280 Speaker 1: these types of harm are especially difficult to predict and prevent. 885 00:48:55,120 --> 00:48:58,400 Speaker 1: So one example is just people love to trick robots, 886 00:48:58,440 --> 00:49:02,000 Speaker 1: people like to mess around with robots and AI. I 887 00:49:02,040 --> 00:49:05,520 Speaker 1: would admit to myself finding this amusing and principle we've 888 00:49:05,520 --> 00:49:07,480 Speaker 1: talked about this in uh, you know, the flate of 889 00:49:07,520 --> 00:49:10,319 Speaker 1: sex Mokena episodes. But of course there are times when 890 00:49:10,360 --> 00:49:12,680 Speaker 1: it's not so funny, when when people take it to 891 00:49:12,760 --> 00:49:16,560 Speaker 1: really sinister places. One example the authors bring up here 892 00:49:16,719 --> 00:49:20,120 Speaker 1: is the horrible saga of Microsoft ta Do you remember 893 00:49:20,160 --> 00:49:23,360 Speaker 1: this thing? Oh? This was the this is the robot 894 00:49:23,400 --> 00:49:26,000 Speaker 1: that was traveling across the country. No, no, no, no, 895 00:49:26,160 --> 00:49:28,279 Speaker 1: uh though I know what you're talking about there. No, 896 00:49:28,760 --> 00:49:31,160 Speaker 1: maybe we can come back to that. But Tay was 897 00:49:31,280 --> 00:49:36,440 Speaker 1: a Twitter chat bot created by Microsoft that was supposed 898 00:49:36,520 --> 00:49:40,000 Speaker 1: to learn how to interact on the Internet just by 899 00:49:40,080 --> 00:49:42,759 Speaker 1: learning from conversations it had with real users. So you 900 00:49:42,760 --> 00:49:45,120 Speaker 1: could tweet it Tay and say hey, how are you doing? 901 00:49:45,280 --> 00:49:47,480 Speaker 1: You know, and you could talk about the weather or whatever. 902 00:49:47,560 --> 00:49:51,760 Speaker 1: But of course who who ended up engaging and training 903 00:49:51,760 --> 00:49:54,440 Speaker 1: this AI to speak? It was like the worst trolls 904 00:49:54,480 --> 00:49:57,040 Speaker 1: on the internet. So within a matter of hours, this 905 00:49:57,200 --> 00:50:00,680 Speaker 1: brand new chat bot had been transformed from from a 906 00:50:00,840 --> 00:50:03,359 Speaker 1: from a you know, a lump of clay unformed into 907 00:50:03,360 --> 00:50:08,279 Speaker 1: a pornographic nazi. Yes, I do remember this now. And 908 00:50:08,360 --> 00:50:10,600 Speaker 1: this kind of just gets you thinking about the ways 909 00:50:10,680 --> 00:50:13,920 Speaker 1: that people will be will be able to misuse robots 910 00:50:13,960 --> 00:50:18,520 Speaker 1: in ways that guide their behavior in extremely pernicious directions, 911 00:50:18,640 --> 00:50:23,399 Speaker 1: sometimes without the people guiding this misuse necessarily committing any 912 00:50:23,480 --> 00:50:26,960 Speaker 1: kind of identifiable crime. Yeah, Like people are going to 913 00:50:27,040 --> 00:50:30,200 Speaker 1: look for exploits, They're gonna look for ways, they're gonna 914 00:50:30,200 --> 00:50:32,040 Speaker 1: look for cracks in the system. It's it's like within 915 00:50:32,320 --> 00:50:34,640 Speaker 1: with any kind of like a video game system. You know, 916 00:50:34,640 --> 00:50:36,200 Speaker 1: people are just gonna see what they can get away 917 00:50:36,280 --> 00:50:39,080 Speaker 1: with and and just engage in that kind of action, 918 00:50:39,520 --> 00:50:41,800 Speaker 1: sometimes just for the fun of it, right, And sometimes 919 00:50:41,800 --> 00:50:46,120 Speaker 1: that's harmless, but sometimes that's really awful. Yeah, Okay. Next 920 00:50:46,320 --> 00:50:50,200 Speaker 1: category is unforeseen harms. And here's where we start getting 921 00:50:50,200 --> 00:50:54,920 Speaker 1: into the really really interesting and really difficult cases, types 922 00:50:54,960 --> 00:50:58,799 Speaker 1: of harm that are not unavoidable, not a product of 923 00:50:58,880 --> 00:51:04,080 Speaker 1: defects or miss use, but are still not predicted by creators. Uh. 924 00:51:04,120 --> 00:51:06,200 Speaker 1: And so the authors talk about how, in a way, 925 00:51:06,400 --> 00:51:11,040 Speaker 1: unpredictability is what makes AI potentially useful, right like, it 926 00:51:11,120 --> 00:51:15,640 Speaker 1: can potentially arrive at solutions that humans wouldn't have predicted. 927 00:51:16,040 --> 00:51:19,120 Speaker 1: But sometimes it does so in ways that really miss 928 00:51:19,200 --> 00:51:22,080 Speaker 1: the boat and could be extremely harmful if they were 929 00:51:22,160 --> 00:51:25,840 Speaker 1: embodied in action in the real world. Uh, similar to 930 00:51:25,880 --> 00:51:28,520 Speaker 1: the drone example from the circle that we talked about 931 00:51:28,560 --> 00:51:32,319 Speaker 1: at the beginning. But they signed another fantastic example here. 932 00:51:32,360 --> 00:51:34,839 Speaker 1: That's kind of chilling. So I'm just gonna read from 933 00:51:35,040 --> 00:51:38,400 Speaker 1: Limle and Casey here. In the ninety nineties, a pioneering 934 00:51:38,480 --> 00:51:42,440 Speaker 1: multi institutional study sought to use machine learning techniques to 935 00:51:42,480 --> 00:51:48,000 Speaker 1: predict health related risks prior to hospitalization. After ingesting an 936 00:51:48,080 --> 00:51:52,480 Speaker 1: enormous quantity of data covering patients with pneumonia, the system 937 00:51:52,600 --> 00:51:58,040 Speaker 1: learned the rule has asthma X delivers lower risk X. 938 00:51:58,520 --> 00:52:02,360 Speaker 1: The colloquial translation is patients with pneumonia who have a 939 00:52:02,440 --> 00:52:05,680 Speaker 1: history of asthma have a lower risk of dying from 940 00:52:05,719 --> 00:52:10,360 Speaker 1: pneumonia than the general population. The machine derived rule was curious, 941 00:52:10,400 --> 00:52:13,439 Speaker 1: to say the least, far from being protective, asthma can 942 00:52:13,520 --> 00:52:19,800 Speaker 1: seriously complicate pulmonary illnesses, including pneumonia. Perplexed by this counterintuitive result, 943 00:52:19,880 --> 00:52:23,200 Speaker 1: the researchers dug deeper, and what they found was troubling. 944 00:52:23,800 --> 00:52:27,160 Speaker 1: They discovered that quote, patients with the history of asthma 945 00:52:27,200 --> 00:52:30,640 Speaker 1: who presented with pneumonia usually were admitted not only to 946 00:52:30,680 --> 00:52:33,840 Speaker 1: the hospital but directly to the i c U, the 947 00:52:33,880 --> 00:52:37,640 Speaker 1: intensive care unit. Once in the i c U, asthmatic 948 00:52:37,640 --> 00:52:41,680 Speaker 1: pneumonia patients went on to receive more aggressive care, thereby 949 00:52:41,840 --> 00:52:46,920 Speaker 1: raising their survival rates compared to the general population. The rule, 950 00:52:47,000 --> 00:52:50,320 Speaker 1: in other words, reflected a genuine pattern in the data, 951 00:52:50,719 --> 00:52:55,520 Speaker 1: but the machine had confused correlation with causation quote, incorrectly 952 00:52:55,640 --> 00:52:59,360 Speaker 1: learning that asthma lowers risk when in fact, asthmatics have 953 00:52:59,520 --> 00:53:02,120 Speaker 1: much high fire risk. It seems like we've got another 954 00:53:02,160 --> 00:53:06,960 Speaker 1: wormhole here. And here the authors introduce an idea of 955 00:53:06,960 --> 00:53:12,080 Speaker 1: of a curve of outcomes that they call a leptokurtic curve. 956 00:53:12,560 --> 00:53:15,080 Speaker 1: That's a strange term, but basically what that means is 957 00:53:15,120 --> 00:53:18,319 Speaker 1: if you are um, if you're charting what types of 958 00:53:18,360 --> 00:53:22,880 Speaker 1: outcomes you expect from a traditional system like just you know, 959 00:53:23,000 --> 00:53:28,640 Speaker 1: humans looking at data versus a a complex automated system, 960 00:53:28,719 --> 00:53:32,200 Speaker 1: the sort of the tails of the graph with the 961 00:53:32,280 --> 00:53:35,560 Speaker 1: complex automated system will tend to be fatter, meaning you 962 00:53:35,600 --> 00:53:39,320 Speaker 1: get more extreme events in the positive and negative space, 963 00:53:39,920 --> 00:53:43,320 Speaker 1: rather than a you know, a sort of rounder clustering 964 00:53:43,400 --> 00:53:46,759 Speaker 1: of events in the you know, normal operation space, if 965 00:53:46,760 --> 00:53:51,160 Speaker 1: that makes any sense. So, these kinds of unforeseen harms 966 00:53:51,160 --> 00:53:54,040 Speaker 1: are some of the most worrisome types of things to 967 00:53:54,120 --> 00:53:56,440 Speaker 1: expect coming out of robots and AI. But then the 968 00:53:56,520 --> 00:53:59,600 Speaker 1: other one would be systemic harms And this is the 969 00:53:59,680 --> 00:54:03,000 Speaker 1: last category of of harms they talk about. UH the 970 00:54:03,000 --> 00:54:06,239 Speaker 1: author's right quote. People have long assumed that robots are 971 00:54:06,320 --> 00:54:10,880 Speaker 1: inherently neutral and objective, given that robots simply intake data 972 00:54:11,200 --> 00:54:15,760 Speaker 1: and systematically output results. But they are actually neither. Robots 973 00:54:15,760 --> 00:54:18,520 Speaker 1: are only as neutral as the data they're fed, and 974 00:54:18,640 --> 00:54:21,719 Speaker 1: only as objective as the design choices of those who 975 00:54:21,760 --> 00:54:26,360 Speaker 1: create them. When either bias or subjectivity infiltrates the systems 976 00:54:26,400 --> 00:54:30,279 Speaker 1: inputs or design choices, it is inevitably reflected in the 977 00:54:30,320 --> 00:54:34,319 Speaker 1: system's outputs. This is your classic garbage in, garbage out, problem, right, 978 00:54:35,200 --> 00:54:38,920 Speaker 1: They go on. Accordingly, those responsible for overseeing the deployment 979 00:54:38,960 --> 00:54:44,240 Speaker 1: of robots must anticipate the possibility that algorithmically biased applications 980 00:54:44,239 --> 00:54:47,680 Speaker 1: will cause harms of this systemic nature to third parties. 981 00:54:48,239 --> 00:54:51,320 Speaker 1: So UH, an example that's much discussed in this would 982 00:54:51,320 --> 00:54:55,960 Speaker 1: be an AI trained to make decisions about granting loans 983 00:54:56,360 --> 00:55:00,560 Speaker 1: by studying patterns of which loan applicants got their loans 984 00:55:00,600 --> 00:55:03,359 Speaker 1: granted in the past. And a I like this could 985 00:55:03,440 --> 00:55:06,640 Speaker 1: end up manifesting some type of bias that hurts people, 986 00:55:06,680 --> 00:55:10,239 Speaker 1: like a racial bias in its loan assessments, because there 987 00:55:10,320 --> 00:55:13,560 Speaker 1: was already a bias in the real world data set 988 00:55:13,640 --> 00:55:16,320 Speaker 1: that it was trained on. So, in other words, AI 989 00:55:16,440 --> 00:55:19,840 Speaker 1: that is trained on data from the real world, unless 990 00:55:19,880 --> 00:55:22,719 Speaker 1: it is it is explicitly told not to do this, 991 00:55:22,880 --> 00:55:27,160 Speaker 1: it will tend to reproduce and perpetuate any injustices, any 992 00:55:27,160 --> 00:55:30,680 Speaker 1: inequalities that already exist. And the authors here give an 993 00:55:30,760 --> 00:55:36,680 Speaker 1: example that is based on algorithmically derived insurance premiums that 994 00:55:36,800 --> 00:55:40,040 Speaker 1: I think they're talking about auto insurance quote. A recent 995 00:55:40,120 --> 00:55:44,240 Speaker 1: study by Consumer Reports found that contemporary premiums depended less 996 00:55:44,280 --> 00:55:49,680 Speaker 1: on driving habits and increasingly on socioeconomic factors, including an 997 00:55:49,680 --> 00:55:54,880 Speaker 1: individual's credit score. After analyzing two billion car insurance price 998 00:55:54,960 --> 00:55:59,000 Speaker 1: quotes across approximately seven hundred companies, the study found that 999 00:55:59,080 --> 00:56:03,640 Speaker 1: credit scores actored into insurance algorithms so heavily that perfect 1000 00:56:03,760 --> 00:56:07,919 Speaker 1: drivers with low credit scores often paid substantially more than 1001 00:56:08,120 --> 00:56:12,440 Speaker 1: terrible drivers with high scores. The studies findings raised widespread 1002 00:56:12,440 --> 00:56:15,840 Speaker 1: concerns that AI systems used to generate these quotes could 1003 00:56:15,960 --> 00:56:20,160 Speaker 1: create negative feedback loops that are hard to break. According 1004 00:56:20,200 --> 00:56:24,080 Speaker 1: to one expert, quote, higher insurance prices for low income 1005 00:56:24,160 --> 00:56:27,840 Speaker 1: people can translate to higher debt and plummeting credit scores, 1006 00:56:28,040 --> 00:56:31,080 Speaker 1: which can mean reduce job prospects, which allows debt to 1007 00:56:31,120 --> 00:56:34,480 Speaker 1: pile up, credit scores to sink lower, and insurance rates 1008 00:56:34,480 --> 00:56:37,880 Speaker 1: to increase in a vicious cycle. Uh So this is 1009 00:56:37,920 --> 00:56:40,360 Speaker 1: kind of a nightmare scenario, right, Like an AI that 1010 00:56:40,520 --> 00:56:45,200 Speaker 1: is too powerful and not explicitly protected against acquiring these 1011 00:56:45,200 --> 00:56:49,360 Speaker 1: types of biases could create these kind of computer enforced 1012 00:56:49,480 --> 00:56:54,320 Speaker 1: prisons in reality, like a machine code for perpetuating whatever 1013 00:56:54,440 --> 00:56:56,960 Speaker 1: state of the world, like whatever state the world was 1014 00:56:57,040 --> 00:57:00,920 Speaker 1: in when the AI was first deployed, and then entrenching 1015 00:57:01,000 --> 00:57:04,239 Speaker 1: it further and further. Yeah, and that kind of thing 1016 00:57:04,320 --> 00:57:07,240 Speaker 1: is especially scary because, like, if there's a human making 1017 00:57:07,280 --> 00:57:09,680 Speaker 1: the decision, you can you can call up the human 1018 00:57:09,719 --> 00:57:12,000 Speaker 1: to a witness stand or ask them like, hey, why 1019 00:57:12,040 --> 00:57:14,600 Speaker 1: did you make the decision this way? But if it's 1020 00:57:14,640 --> 00:57:17,120 Speaker 1: an AI doing it, you could say like, hey, why 1021 00:57:17,160 --> 00:57:19,520 Speaker 1: why is it? Why are we getting this outcome that's 1022 00:57:19,560 --> 00:57:22,439 Speaker 1: you know, creating a sort of like cyclical prison out 1023 00:57:22,440 --> 00:57:24,640 Speaker 1: of reality, And they can just say, hey, you know 1024 00:57:24,760 --> 00:57:27,400 Speaker 1: it's the machine with the machine. You know it knows 1025 00:57:27,400 --> 00:57:29,560 Speaker 1: what it's doing. Yeah, and then yes, the machine it 1026 00:57:29,600 --> 00:57:33,280 Speaker 1: says I learned it from watching you dad, and you 1027 00:57:33,320 --> 00:57:35,520 Speaker 1: have that moment of shame. So I think these different 1028 00:57:35,520 --> 00:57:38,160 Speaker 1: categories that they that they bring up are really important 1029 00:57:38,200 --> 00:57:41,160 Speaker 1: for helping us kind of sort our ideas into into 1030 00:57:41,200 --> 00:57:44,760 Speaker 1: recognizable types for for ways that AI and robots could 1031 00:57:44,760 --> 00:57:46,919 Speaker 1: go wrong and could potentially cause harm that you would 1032 00:57:46,920 --> 00:57:50,520 Speaker 1: seek legal remedy for. And also they help identify the 1033 00:57:50,600 --> 00:57:53,440 Speaker 1: spaces that there's the most worry. I mean, for me, 1034 00:57:53,520 --> 00:57:56,760 Speaker 1: I think that would be like those last two cases, right, 1035 00:57:56,800 --> 00:58:00,280 Speaker 1: the unforeseen problems and the systemic problems are the ones 1036 00:58:00,640 --> 00:58:03,760 Speaker 1: where there's the most real danger I think and the 1037 00:58:03,800 --> 00:58:06,919 Speaker 1: most difficulty in trying to figure out how to solve it. Yeah, 1038 00:58:06,960 --> 00:58:10,680 Speaker 1: because we we kind of you know, train ourselves for 1039 00:58:11,080 --> 00:58:14,120 Speaker 1: to a certain extent and sort of culturally focus on 1040 00:58:14,240 --> 00:58:20,720 Speaker 1: the sky net problems, right, the really obvious um situations 1041 00:58:20,760 --> 00:58:23,080 Speaker 1: where the robot car veers off the road in a 1042 00:58:23,200 --> 00:58:28,080 Speaker 1: dangerous way. But the situations where it is just perpetuating 1043 00:58:28,120 --> 00:58:32,920 Speaker 1: what we're already doing, where it's making choices in getting 1044 00:58:32,920 --> 00:58:36,040 Speaker 1: from point A to point B that don't violate anything 1045 00:58:36,040 --> 00:58:39,000 Speaker 1: we told it, but just are an uninventive and even 1046 00:58:39,080 --> 00:58:42,520 Speaker 1: harmful way of doing it. Uh. Yeah, that's that's that's 1047 00:58:42,520 --> 00:58:45,640 Speaker 1: harder to deal with. That's a type of misbehavior that 1048 00:58:46,520 --> 00:58:49,520 Speaker 1: you can't solve by just having Dan o'harla Hay stand 1049 00:58:49,560 --> 00:58:56,000 Speaker 1: up and bellowing behave yourselves exactly. Um yeah, yeah, yeah, 1050 00:58:56,040 --> 00:58:58,160 Speaker 1: I mean I can't remember if that even worked. I 1051 00:58:58,200 --> 00:59:00,440 Speaker 1: just remember that was one of my favorite moments, and 1052 00:59:00,680 --> 00:59:02,880 Speaker 1: that was RoboCop two, right, was it? Yeah, well, I 1053 00:59:02,880 --> 00:59:05,360 Speaker 1: mean RoboCop one. I think we're already dealing with this 1054 00:59:05,440 --> 00:59:09,200 Speaker 1: problem of like the sort of like weird dynamics of 1055 00:59:09,240 --> 00:59:12,640 Speaker 1: machine culpability. When Ed two oh nine like shoots that 1056 00:59:12,640 --> 00:59:15,959 Speaker 1: guy five hundred times in the boardroom during the demonstration, 1057 00:59:16,400 --> 00:59:19,000 Speaker 1: and then Dan O'Hurley he's response to it is to 1058 00:59:19,080 --> 00:59:21,959 Speaker 1: turn to Ronnie Cox and say, I'm very disappointed, Dick. 1059 00:59:24,680 --> 00:59:27,200 Speaker 1: But anyway, well, I guess we're running running kind of long, 1060 00:59:27,240 --> 00:59:29,400 Speaker 1: so maybe we should call part one there, But we 1061 00:59:29,440 --> 00:59:34,080 Speaker 1: will resume this discussion about about robot justice and robot 1062 00:59:34,080 --> 00:59:37,080 Speaker 1: punishment in the next episode. That's right, we'll be back 1063 00:59:37,280 --> 00:59:40,480 Speaker 1: with more of this discussion in the meantime. If you 1064 00:59:40,480 --> 00:59:42,560 Speaker 1: would like to check out past episodes of Stuff to 1065 00:59:42,560 --> 00:59:45,680 Speaker 1: Blow Your Mind Uh, and they're definitely worth checking out 1066 00:59:45,680 --> 00:59:48,000 Speaker 1: because we have lots of past episodes that deal with 1067 00:59:48,160 --> 00:59:50,920 Speaker 1: robots and AI. We have lots of episodes where we 1068 00:59:50,960 --> 00:59:54,720 Speaker 1: make RoboCop references, so they're all they're all there, go 1069 00:59:54,760 --> 00:59:56,480 Speaker 1: back and check them out. You can find our podcast 1070 00:59:56,520 --> 00:59:58,760 Speaker 1: wherever you get your podcasts. Just look for the Stuff 1071 00:59:58,760 --> 01:00:01,640 Speaker 1: to Blow your Mind podcast need uh in that feed 1072 01:00:01,680 --> 01:00:04,480 Speaker 1: We put out core episodes of the show on Tuesdays 1073 01:00:04,480 --> 01:00:08,080 Speaker 1: and Thursdays. Mondays, we have a little listener mail Wednesdays, 1074 01:00:08,120 --> 01:00:11,680 Speaker 1: so that's when we do the artifact shorty uh usually, 1075 01:00:12,040 --> 01:00:14,640 Speaker 1: and then on Friday's we do Weird House Cinema. That's 1076 01:00:14,640 --> 01:00:17,000 Speaker 1: our chance to sort of set most of the science 1077 01:00:17,040 --> 01:00:20,880 Speaker 1: aside and just focus on the films about rampaging robots. 1078 01:00:21,640 --> 01:00:24,400 Speaker 1: You just thinks. As always to our excellent audio producer 1079 01:00:24,520 --> 01:00:26,680 Speaker 1: Seth Nicholas Johnson. If you would like to get in 1080 01:00:26,800 --> 01:00:29,080 Speaker 1: touch with us with feedback on this episode or any 1081 01:00:29,120 --> 01:00:31,480 Speaker 1: other to suggest a topic for the future, or just 1082 01:00:31,560 --> 01:00:34,040 Speaker 1: to say hello. You can email us at contact at 1083 01:00:34,040 --> 01:00:43,880 Speaker 1: stuff to Blow your Mind dot com. Stuff to Blow 1084 01:00:43,920 --> 01:00:46,480 Speaker 1: Your Mind is production of I Heart Radio. For more 1085 01:00:46,480 --> 01:00:49,080 Speaker 1: podcasts for my heart Radio, visit the iHeart Radio app, 1086 01:00:49,280 --> 01:00:52,000 Speaker 1: Apple Podcasts, or wherever you're listening to your favorite shows. 1087 01:01:00,520 --> 01:01:07,760 Speaker 1: Pretty Toy pop po