Welcome to Stuff to Blow Your Mind, a production of iHeartRadio.

Hey, welcome to Stuff to Blow Your Mind. My name is Robert Lamb and I'm Joe McCormick, and we're back for part two of our talk about punishing the robot. We're back here to tell the robot he's been very bad. Now, in the last episode we talked about the idea of legal agency and culpability for robots and other intelligent machines, and for a quick refresher on some of the stuff we went over: we talked about the idea that as robots and AI become more sophisticated, and thus in some ways or in some cases more independent and unpredictable, and as they integrate more and more into the wild of human society, there are just inevitably going to be situations where AI and robots do wrong and cause harm to people. Now, of course, when a human does wrong and causes harm to another human, we have a legal system through which the victim can seek various kinds of remedies. We talked in the last episode about the idea of remedies; the simple version is that a remedy is what you get when you win in court. That can be things like monetary awards (I ran into your car with my car, so I pay you money), or it can be punishment, or it can be court orders commanding or restricting the behavior of the perpetrator. And so we discussed the idea that as robots become more unpredictable, more like human agents, more independent, and more integrated into society, it might make sense to have some kind of system of legal remedies for when robots cause harm or commit crimes. But also, as we talked about last time, this is much easier said than done. It's going to present tons of new problems, because our legal system is in many ways not equipped to deal with defendants and situations of this kind.
And this may cause us to ask questions about how we already think about culpability and blame and punishment in the legal system. In the last episode we talked about one big legal paper that we're going to continue to explore in this one. It's by Mark A. Lemley and Bryan Casey in the University of Chicago Law Review from twenty nineteen, called "Remedies for Robots," so I'll be referring back to that one a good bit throughout this episode too. Now, I think when we left off last time, we had mainly been talking about trying to categorize the different sorts of harm that could be done by robots and other intelligent machines. We talked about things like unavoidable harms and deliberate least-cost harms. These are going to be unavoidable parts of having something like autonomous vehicles; if you have cars driving around on the road, even if they're really, really good at minimizing harm, there are still going to be some cases where there's just no way harm could be avoided, because they're cars. Another would be defect-driven harms. That's pretty straightforward: that's just where the machine malfunctions or breaks in some way. Another would be misuse harms. That's where the machine is used in a way that is harmful, and in those cases it's usually pretty clear who's at fault: the person who misused the machine. But then there are a couple of other categories where things get really tricky, which are unforeseen harms and systemic harms. In the case of unforeseen harms, one example we talked about in the last episode was the drone that invented a wormhole. People were trying to train a drone, an autonomous flying vehicle, to move towards the center of a circular area. But the drone started doing a thing where, when it got sufficiently far away from the center of the circle, it would just fly out of the circle altogether.
And so it seems kind of weird at first: okay, why would it be doing that? But then what the researchers realized was that whenever it did that, they would turn it off and then move it back into the circle to start it over again. So from the machine learning point of view of the drone itself, it had discovered a kind of time-space warp. It was doing this thing that made no sense from a human perspective, but it was actually following its programming exactly. Now, for an example, sort of a thought experiment, of how this could become lethal, there's an example that is stuck in my head. I can't recall where I heard it or who presented the idea. I kind of had it in my head that it came from Max Tegmark, but I did some searching around in my notes and in one of his books, and I couldn't find it. Perhaps you can help refresh me, maybe you remember this, Joe: the idea of the AI that is deciding how much oxygen needs to be in a train station at any given time. Oh, this sounds familiar. I don't know the answer, but a lot of these thought experiments tend to trace back to Nick Bostrom, so I wouldn't be surprised if it's in there. But go ahead. Right, okay, as I remember it, the way it works is you have this AI that's in charge of making sure there's enough oxygen in the train station for when humans are there, and it seems to have learned this fine. When humans are there to get on the train, everything goes well, everybody's breathing fine. And then one day the train arrives a little late, or it leaves a little late.
Whatever it is, there's not enough oxygen and people die. And then it turns out that the system was not basing its decision on when people were there; it was basing it on a clock in the train station, on what train it was. I may be mangling this horribly, but it's another way of illustrating the point that machine learning could end up latching onto shortcuts or heuristic devices that would seem completely insane to a quote unquote logical human mind, but might make sense within the framework of the AI. Right. They worked in the training cases, and because it doesn't have common sense, it doesn't understand why they wouldn't work in another case. There was actually a real-world case that we talked about in part one, where there was an attempt to do some machine learning on what risk factors would make a pneumonia case admitted to the hospital have a higher or lower chance of survival. And one thing that a machine learning algorithm determined was that you were better off when you got pneumonia if you had asthma. But that isn't true. The reason for the pattern is that if you have asthma, you're a higher-risk case for pneumonia, so you got more intensive treatment in the hospital and thus had better outcomes in the data set the algorithm was trained on. The algorithm came up with a completely backwards conclusion, a failure to understand the difference between correlation and causation that made it look like asthma was a superpower. Now, of course, if you take that kind of shortsighted algorithm and you make it god, then it will say, oh, I've just got to give everybody asthma so we'll have a better chance of surviving.
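To make that correlation-versus-causation trap concrete, here is a minimal sketch with entirely made-up numbers: asthma patients get flagged for intensive care, intensive care improves outcomes, and a model that only looks at the raw association concludes that asthma is protective. It's an illustration of the failure mode, not the actual study.

```python
# Minimal illustration (made-up numbers) of the asthma/pneumonia confounder:
# asthma triggers intensive care, intensive care improves survival, and the
# naive association between asthma and survival comes out backwards.
import random

random.seed(0)

def simulate_patient():
    asthma = random.random() < 0.2
    # Confounder: asthma patients are flagged high-risk and get intensive care.
    intensive_care = asthma or random.random() < 0.1
    # Assumed causal story: asthma hurts baseline survival, intensive care helps.
    p_survive = 0.85 + (0.10 if intensive_care else 0.0) - (0.05 if asthma else 0.0)
    return asthma, random.random() < p_survive

patients = [simulate_patient() for _ in range(100_000)]

def survival_rate(with_asthma):
    outcomes = [survived for asthma, survived in patients if asthma == with_asthma]
    return sum(outcomes) / len(outcomes)

print("survival with asthma:   ", round(survival_rate(True), 3))   # higher
print("survival without asthma:", round(survival_rate(False), 3))  # lower
# The raw association says asthma patients do better, even though asthma itself
# is harmful here; the extra treatment they receive is doing the work.
```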
The point is, it can be hard to imagine in advance all the cases like this that would arise when you've got a world full of robots and AIs running around in it that are trained on machine learning. Basically, there are any number of lesser, sort of soft Skynets that you couldn't possibly predict. The Skynet scenario is something like: the robots decide they want to end all war, and humans cause all war, therefore end all humans, that sort of thing. But there are so many lesser versions of it. It could also be destructive, or annoying, or just get in the way of effectively using AI for whatever we turn to it for. Yeah. To come to an example that is definitely used by Nick Bostrom, the paper clip maximizer: a robot that is designed to make as many paper clips as it can, and it just looks at your body and says, hey, that's full of matter, those could be paper clips. Yeah, that would be quite an apocalypse. Now, before we get back into the main subject and talking about this Lemley and Casey paper, with robots as offenders, there was one interesting thing I came across. It was just a brief footnote in their paper, but it's about the question of what happens if the robot is the plaintiff in a case. They said it is possible to imagine a robot as a plaintiff in a court case, because of course robots can be injured by humans. And they cited a bunch of examples of news stories of humans just intentionally torturing and being cruel to robots: they cited one news article about people aggressively kicking food delivery robots, and then they shared another story, I actually remember this one from the news, about a Silicon Valley security robot that was violently attacked by a drunk man in a parking garage. I don't remember this one, but I can imagine how it went down. Yeah, exactly.
So they say that a case like this is actually pretty straightforward as a property crime. Unless we start getting into a scenario where we're really seeing robots as something like human beings with their own consciousness and interests and all that, attacks against robots are really probably just property crimes against the owner of the robot. It's like attacking somebody's computer or their car or something. Potentially, but we'll get into some stuff a little later that I think shows some other directions this could go in as well, especially considering the awesome possibility of robots owning themselves. Yeah, and that's obviously a very different world, where you get into the idea of whether a robot actually has rights, which is sort of beyond the horizon of what's explored in this paper itself. This paper is more focused on the kinds of robots that you can practically imagine within the next few decades. And in those cases, it seems like all of the really thorny stuff would probably be in robots as offenders rather than robots as victims of crimes. Right. But to your point, the initial crimes against robots that we can imagine would be stuff like drunk people pushing them over, things like that. Yeah, or just a human driver in a human-operated vehicle hitting an autonomous vehicle. Right. Now, as I mentioned in the last episode, this is a very big paper and we're not going to have time to get into every avenue they go down in it, but I just wanted to go through and mention some ideas they discuss that stuck out to me as interesting. And one thing that really fascinated me about this was that the idea of robots as possible agents in a legal context brings to the fore a philosophical argument that has existed in the realm of substantive law for a while.
And I'll try not to be too dry about this, but I think it actually does get to some really interesting philosophical territory. This is the distinction between what Lemley and Casey call the normative versus economic interpretations of substantive law. Again, it's a complicated philosophical and legal distinction; I'll do my best to sum it up simply. The normative perspective on substantive law says that the law is a prohibition against doing something bad. So when something is against the law, that means you shouldn't do it, and we would stop the offender from doing the thing that's against the law if we could. But since we usually can't stop them from doing it, often because it already happened, the remedy that exists, maybe paying damages to the victim or something like that, is an attempt to right the wrong, in other words, to do the next best thing to undoing the harm in the first place. So basically, getting into the idea of negative reinforcement: somebody or something did something bad, we couldn't stop them from doing it, but we can try to give them a stimulus that would make them not do it again, be that economic or otherwise. Well, yes, but I think what you're saying could actually apply to both of the conditions I'm going to talk about. So I think maybe the distinction comes in over whether there is such a thing as an inherent prohibition. The thing that's operative in the normative view is that the thing that's against the law is a thing that should not be done, and thus the remedy is an attempt to fix the fact that it was done in the first place. The economic view is the alternative here, and the way they sum it up is: there is no such thing as forbidden conduct. Rather, the substantive law tells you what the cost of the conduct is. Does that distinction make any more sense? Yes, yes. So basically the first version is: doing crimes is bad.
The second one is: doing crimes is expensive. So the first is crimes should not be done, and the second is crimes can be done if you can afford it. Yes, exactly. In Lemley and Casey's words, quote, damages on this view (the economic view) are simply a cost of doing business, one we want defendants to internalize, but not necessarily to avoid the conduct altogether. Now, you might look at this and think, okay, so the economic view is just a psychopathic way of looking at things. And in a certain sense you could look at it that way: if you're calculating the economic cost of murder, then yeah, that's evil, that's psychopathic. But there are actually all kinds of cases where, when you think about it, the economic view makes more sense of the way we actually behave. And they use the example of stopping at a traffic light. Yes, so reading from Lemley and Casey here, quote: Under the normative view, a red light stands as a prohibition against traveling through an intersection, with the remedy being a ticket or a fine against those who are caught breaking the prohibition. We would stop you from running the red light if we could, but because policing every intersection in the country would be impossible, we instead punish those we do catch in hopes of deterring others. So in this first case, running a red light is bad, you should not do it, and the cost of doing it, the punishment you face, is an attempt to right that wrong. But then they say: Under the economic view, however, an absolute prohibition against running red lights was never the intention. Rather, the red light merely signals a consequence for those who do in fact choose to travel through the intersection. As in the first instance, the remedy available is a fine or a ticket.
But under this view, the choice of whether or not to violate the law depends on the willingness of the lawbreaker to accept the penalty. So in the case of a red light, that might make more sense if you're sitting at a red light and you look around and there are no other cars anywhere near you, and you've got a clear view of the entire intersection, and the red light's not changing, and you think maybe it's broken, and you're just like, okay, I'm just going to drive through. Well, if you reach the point where you think it's broken, I feel like that's a slightly different case. But what if you're just thinking, nobody's watching, I'm going to do it, and the light isn't taking an absurd amount of time, or longer than you're accustomed to? Yeah, I don't know how the belief that the light is broken would factor into that. But one thing I think is clear is that in many cases there are people, and especially I think companies and corporations, that operate on the economic view. And it is something that people generally look at and say, okay, that's kind of grimy. Like a company that says, okay, there is a fine for not obeying this environmental regulation, and we're going to make more money by violating the regulation than we would pay in the fine, so we're just going to pay it. Yeah, you hear about that with factories, for instance, where there'll be some situation where the fine is not significant enough to really be a deterrent. It's just a fee for breaking the mandate and occasionally being called on it. It's just the cost of doing business. Right. So there's a funny way to describe this point of view that the authors bring up here: they call it the bad man theory. And this comes from Justice Oliver Wendell Holmes, who was a U.S. Supreme Court justice.
And he's talking about the economic view of substantive law. Holmes wrote, quote: If you want to know the law and nothing else, you must look at it as a bad man who cares only for the material consequences which such knowledge enables him to predict, not as a good one who finds his reasons for conduct, whether inside the law or outside of it, in the vaguer sanctions of conscience. And so they write: The measure of the substantive law, in other words, is not to be mixed up with moral qualms, but is simply coextensive with its remedy. No more and no less. It just is what the remedy is. It's the cost of doing business. Now, of course, there are plenty of legal scholars and philosophers who would dispute how Holmes thinks of this. But the interesting question is how this applies to robots. If you're programming a robot to behave well, you actually don't get to just jump over this distinction the way humans do when they think about their own moral conduct. Like, when you're trying to figure out a good way to be a good person, you're not sitting around thinking, well, am I going by the normative view of morality or the economic view of morality? You just act in whatever way seems to you the right way to act. But if you're trying to program a robot to behave well, you have to make a choice whether to embrace the normative view or the economic view. Does a robot view a red light, say, as a firm prohibition against forward movement, where it's just a bad thing to drive through a red light and you shouldn't do it? Or does it view it as a substantial discouragement against forward motion that has a certain cost, and if something outweighs that cost, then you drive on through?
Yeah, this is a great question, because I feel like with humans we're probably mixing and matching all the time, perhaps even for the same law-breaking behavior. We may do both on one thing, one on another thing, and the other on still a third thing. But with a robot, it seems like you're going to deal more or less with an absolute direction: either the law is to be obeyed, or the law is to be taken into your cost analysis. Well, yeah. So they talk about how the normative view is actually very much like Isaac Asimov's Laws of Robotics, inviolable rules, and the Asimov stories do a very good job of demonstrating why inviolable rules are really difficult to implement in the real world. Asimov explored this brilliantly. And along these lines, the authors argue that there are major reasons to think it will just not make any practical sense to program robots with a normative view of legal remedies; that when people make AIs and robots that have to take these kinds of things into account, they're almost certainly going to program them according to the economic view. They say that, quote, encoding the rule don't run a red light as an absolute prohibition, for example, might sometimes conflict with the more compelling goal of not letting your driver die by being hit by an oncoming truck. So the robots are probably going to have to be economically motivated to an extent like this. But then they talk about how this gets very complicated, because robots will calculate the risks of reward and punishment with different biases than humans, or maybe even without the biases that humans have, the ones the legal system relies on in order to keep us obedient.
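For a sense of what that difference looks like once someone actually has to write it down, here's a toy sketch of the two encodings of the red-light rule. The fine amount, the chance of being caught, and the collision figure are all invented for illustration; none of it comes from the paper.

```python
# Toy sketch (invented numbers): the same red-light rule encoded as an inviolable
# prohibition versus as one cost among others in an expected-cost calculation.

RED_LIGHT_FINE = 200.0           # assumed ticket amount
P_CAUGHT = 0.05                  # assumed chance of being cited
COST_OF_COLLISION = 1_000_000.0  # assumed cost of being hit by the oncoming truck

def normative_proceed(light_is_red: bool) -> bool:
    """Normative view: a red light is a prohibition, full stop."""
    return not light_is_red

def economic_proceed(light_is_red: bool, p_crash_if_waiting: float) -> bool:
    """Economic view: run the light whenever waiting is expected to cost more
    (say, a truck is about to rear-end the car) than the expected fine."""
    if not light_is_red:
        return True
    expected_fine = P_CAUGHT * RED_LIGHT_FINE
    expected_cost_of_waiting = p_crash_if_waiting * COST_OF_COLLISION
    return expected_cost_of_waiting > expected_fine

print(normative_proceed(light_is_red=True))             # False, no matter what
print(economic_proceed(True, p_crash_if_waiting=0.0))   # False: waiting costs nothing
print(economic_proceed(True, p_crash_if_waiting=0.3))   # True: dodging the truck
```

The normative version simply has no way to express the oncoming-truck scenario, which is the authors' point about why a pure prohibition is hard to live with in practice.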
Humans are usually highly motivated by certain types of punishments; humans really don't want to spend a month in jail, most of the time. And you can't just rely on a robot to be motivated by something like this, first of all because it wouldn't even make sense to send the robot itself to jail. So you need some kind of organized system for making a robot understand the cost of bad behavior, in a systematized way that makes sense to the robot as a demotivating incentive. Yeah, shame comes to mind as another aspect of all this. How do you shame a robot? Do you have to program a robot to feel shame at being made to give a public apology or something? Yeah. So they argue that it really only makes sense for robots to look at legal remedies in an economic way, and then they write, quote: It thus appears that Justice Holmes's archetypical bad man will finally be brought to corporeal form, though ironically not as a man at all. And if Justice Holmes's metaphorical subject is truly morally impoverished and analytically deficient, as some accuse, it will have significant ramifications for robots. But thinking about these incentives, it gets more and more difficult the more you try to imagine the particulars. Humans have pre-existing motivations that can just be assumed: in most cases, humans don't want to pay out money, and humans don't want to go to jail. How would these costs be instantiated as motivating for robots? You would have to basically force some humans, I guess meaning the programmers or creators of the robots, to instill those costs as motivating on the robot.
But that's not always going to be easy to do, because, okay, imagine a robot does violate one of these norms and it causes harm to somebody, and as a result the court says, okay, someone has been harmed by this negligent or failed autonomous vehicle, and now there must be a payout. Who actually pays? Where is the pain of the punishment located? A bunch of complications to this problem arise; it gets way more complicated than just the programmer or the owner, especially because in this age of artificial intelligence there is a kind of distributed responsibility across many parties. The authors write, quote: Robots are composed of many complex components, learning from their interactions with thousands, millions, or even billions of data points, and they are often designed, operated, leased, or owned by different companies. Which party is to internalize these costs? The one that designed the robot or AI in the first place, which might even be multiple companies? The one that collected and curated the data set used to train it, shaping its algorithm in unpredictable ways? The users who bought the robot and deployed it in the field? And then it gets even more complicated than that, because the authors go into tons of ways we can predict now that these costs are unlikely to be internalized in commercially produced robots in ways that are socially optimal. Because if you're asking a corporation that makes robots to take into account some type of economic disincentive against the robot behaving badly, other economic incentives are going to be competing with those disincentives. So the authors write: For instance, if I make it clear that my car will kill its driver rather than run over a pedestrian if the issue arises, people might not buy my car. The economic costs of lost sales may swamp the costs of liability from a contrary choice.
In the other direction, car companies could run into PR problems if their cars run over kids. Put simply, it is aggregate profits, not just profits related to legal sanctions, that will drive robot decision making. And then there are still a million other things to consider. One thing they talk about is the idea that even within corporations that produce robots and AI, the parts of those corporations don't all understand what the other parts are doing. They say workers within these corporations are likely to be siloed in ways that interfere with effective cost internalization; quote, machine learning is a specialized programming skill, and programmers aren't economists. And then they talk about why, in many cases, it's going to be really difficult to answer the question of why an AI did what it did. So can you even determine that the AI was acting in a way that wasn't reasonable? How could you ever examine the state of mind of the AI well enough to prove that the decision it made wasn't the most reasonable one from its own perspective? But then another thing they raise is, I think, a really interesting point, and this gets into one of the things we talked about in the last episode: thinking about culpability for AI and robots is going to force us to re-examine our ideas of culpability and blame when it comes to human decision making. Because they talk about the idea that, quote, the sheer rationality of robot decision making may itself provoke the ire of humans. Now, how would that be? It seems like we would say, okay, we want robots to be as rational as possible; we don't want them to be irrational. But it is often only by carelessly putting costs and risks out of mind that we are able to go about our lives.
For example, people drive cars, and no matter how safe a driver you are, driving a car comes with the unavoidable risk that you will harm someone. They write, quote: Any economist will tell you that the optimal number of deaths from many socially beneficial activities is more than zero. Were it otherwise, our cars would never go more than five miles per hour; indeed, we would rarely leave our homes at all. Even today, we deal with those costs in remedies law unevenly. The effective statistical price of a human life in court decisions is all over the map. The calculation is generally done ad hoc and after the fact. That allows us to avoid explicitly discussing politically fraught concepts that can lead to accusations of trading lives for cash, and it may work acceptably for humans because we have instinctive reactions against injuring others that make deterrence less important. But in many instances robots will need to quantify the value we put on a life if they are to modify their behavior at all. Accordingly, the companies that make robots will have to figure out how much they value human life, and they will have to write it down in the algorithm for all to see, at least after extensive discovery. That's referring to what the courts will find out by looking into how these algorithms are created. And I think this is a fantastic point. In order for a robot to make ethical decisions about living in the real world, it's going to have to do things like put a price tag on what kind of risk to human life is acceptable in order for it to do anything, and we don't do that explicitly; it seems monstrous to us. It does not seem reasonable for any percent chance of harming a human, of killing somebody, to be an acceptable risk of your day-to-day activities.
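To picture what "writing it down in the algorithm" might literally look like, here is a purely illustrative sketch of a planner that scores candidate maneuvers by expected cost, with a value-of-statistical-life figure baked in. The dollar amounts and probabilities are assumptions invented for the example, not values from Lemley and Casey or from any real system.

```python
# Purely illustrative sketch: a planner that monetizes risk to life and picks
# the cheapest maneuver in expectation. Every number below is an assumption.

VALUE_OF_STATISTICAL_LIFE = 10_000_000.0  # assumed dollar figure

def expected_cost(p_fatality: float, other_costs: float) -> float:
    """Expected cost of a maneuver = monetized fatality risk + everything else."""
    return p_fatality * VALUE_OF_STATISTICAL_LIFE + other_costs

def choose(maneuvers: dict[str, tuple[float, float]]) -> str:
    """Pick the maneuver with the lowest expected cost."""
    return min(maneuvers, key=lambda name: expected_cost(*maneuvers[name]))

# (probability of a fatality, other costs such as delay) for each option
options = {
    "proceed at speed":   (1e-6, 0.0),   # tiny but nonzero risk, no delay
    "slow to a crawl":    (1e-8, 15.0),  # near-zero risk, real delay cost
    "never leave garage": (0.0, 50.0),   # zero risk, the trip never happens
}
print(choose(options))  # "proceed at speed": a nonzero fatality risk is accepted
```

The uncomfortable part is not the arithmetic; it's that the trade-off the rest of us make implicitly becomes an explicit constant that discovery can surface.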
And yet it actually already is that way; it always is that way whenever we do anything, but we just have to put it out of mind, we can't think about it. Yeah, I mean, what's the alternative? Programming a monstrous self-delusion into the self-driving car, where it says: I will not get into a wreck on my next route, because that cannot happen to me, it has never happened to me before, it will never happen. These sorts of ridiculous non-statements we make in our minds, these kinds of assumptions, like that's the kind of thing that happens to other drivers and it's not going to happen to me, even though we've all seen the statistics before. Yeah, exactly. I think this is a really good point. And so, in this case, the robot wouldn't even necessarily be doing something evil. In fact, you could argue there could be cases where the robot is behaving in a way that is far safer, far less risky, than the average human doing the same thing. But the very fact of its clearly coded rationality reveals something that is already true about human societies, which we can't really bear to look at or think about. So another thing the authors explored that I think is really interesting is the idea of robot punishment, directly punishing the robot itself, and how that possibility might make us rethink the idea of punishing humans. Now, of course, whether or not it actually serves as any kind of deterrent, whether or not it actually rationally reduces harm, it may just be unavoidable that humans sometimes feel they want to inflict direct harm on a perpetrator as punishment for the crime they're alleged to have committed, and that may well translate to robots themselves. I mean, we've all, I think, raged against an inanimate object before.
We've wanted to kick a printer or something like that. And we talked in the last episode about some of the psychological research on how people mindlessly apply social rules to robots. The authors here write: Certainly people punch or smash inanimate objects all the time. Juries might similarly want to punish a robot not to create optimal cost internalization, but because it makes the jury and the victim feel better. The authors write later, towards their conclusion, about the idea of directly punishing robots, that, quote, this seems socially wasteful. Punishing robots not to make them behave better, but just to punish them, is kind of like kicking a puppy that can't understand why it's being hurt. The same might be true of punishing people to make us feel better, but with robots, the punishment is stripped of any pretense that it is sending a message to make the robot understand the wrongness of its actions. Now, I'm pretty sympathetic personally to the point of view that a lot of punishment that happens in the world is not actually a rational way to reduce harm; if it serves any purpose, it is the emotional satisfaction of people who feel they've been wronged, or of people who want to demonstrate moral opprobrium toward the offender. But I understand that in some cases you could imagine that punishing somebody serves as an object example that deters behavior in the future. And to the extent that that is ever the case, could punishing a robot serve that role? Could inflicting punishment, say punching a robot or somehow otherwise punishing a robot, serve as a kind of object example that deters behavior in humans, say the humans who will program the robots of the future? It's a weird kind of symbolism to imagine.
Yeah, I mean, when 569 00:31:44,200 --> 00:31:48,040 Speaker 1: you start thinking about, you know, the ways to punish robots, 570 00:31:48,080 --> 00:31:50,160 Speaker 1: I mean, you think of some of the more ridiculous 571 00:31:50,160 --> 00:31:53,120 Speaker 1: examples that have been brought up in sci fi and 572 00:31:53,120 --> 00:31:57,480 Speaker 1: sci fi comedy, like robot hells and so forth. Um, 573 00:31:57,560 --> 00:32:00,320 Speaker 1: or just the idea of even destroying or 574 00:32:00,360 --> 00:32:05,120 Speaker 1: deleting a robot that is faulty or misbehaving. Um. But 575 00:32:05,120 --> 00:32:07,280 Speaker 1: maybe, you know, maybe it ends up being something 576 00:32:07,480 --> 00:32:10,960 Speaker 1: more like, I think of game systems, right, where, say, 577 00:32:11,000 --> 00:32:15,240 Speaker 1: if you accumulate too many of, uh, say, madness points, 578 00:32:15,320 --> 00:32:17,880 Speaker 1: your, I don't know, your movement is cut in half, 579 00:32:17,920 --> 00:32:20,360 Speaker 1: that sort of thing, and then that has a ramification 580 00:32:20,520 --> 00:32:23,000 Speaker 1: on how you play the game and to what extent 581 00:32:23,080 --> 00:32:26,200 Speaker 1: you can play the game well. And therefore, like, playing 582 00:32:26,200 --> 00:32:29,080 Speaker 1: into the economic model, you know, it could 583 00:32:29,080 --> 00:32:33,080 Speaker 1: have sort of artificially constructed but very real consequences on 584 00:32:33,160 --> 00:32:37,160 Speaker 1: how well a system could behave. You know, but then again, 585 00:32:37,320 --> 00:32:40,080 Speaker 1: you could imagine ways that an AI might find ways 586 00:32:40,120 --> 00:32:42,880 Speaker 1: to circumvent that, and say, well, if I play 587 00:32:42,920 --> 00:32:45,479 Speaker 1: the game a certain way where I don't need to 588 00:32:46,000 --> 00:32:48,560 Speaker 1: move at normal speed, I can just move at half 589 00:32:48,600 --> 00:32:51,720 Speaker 1: speed but have the benefit of getting to break these rules, 590 00:32:52,240 --> 00:32:54,000 Speaker 1: then who knows, you know. It just, I feel like 591 00:32:54,520 --> 00:32:59,120 Speaker 1: it seems an inescapable maze. Yeah. Well, that's 592 00:32:59,160 --> 00:33:03,040 Speaker 1: interesting because it's edging toward another thing that the authors 593 00:33:03,080 --> 00:33:04,960 Speaker 1: actually talked about here, which is the idea of a 594 00:33:05,080 --> 00:33:09,320 Speaker 1: robot death penalty. Uh. And this is funny because, 595 00:33:10,320 --> 00:33:13,040 Speaker 1: again, personally, you know, I see a lot of 596 00:33:13,040 --> 00:33:16,040 Speaker 1: flaws in applying a death penalty to humans. I 597 00:33:16,120 --> 00:33:21,160 Speaker 1: think that is a very flawed judicial remedy. But I 598 00:33:21,160 --> 00:33:24,800 Speaker 1: can understand a death penalty for robots. Like, you know, 599 00:33:25,200 --> 00:33:28,440 Speaker 1: robots don't have the same rights as human defendants. If 600 00:33:28,480 --> 00:33:32,000 Speaker 1: a robot is malfunctioning or behaving in a way that 601 00:33:32,160 --> 00:33:35,680 Speaker 1: is so dangerous as to suggest it is likely in 602 00:33:35,680 --> 00:33:39,719 Speaker 1: the future to continue to endanger human lives to an 603 00:33:39,800 --> 00:33:43,520 Speaker 1: unacceptable extent, then yeah, it seems to me reasonable that 604 00:33:43,640 --> 00:33:47,440 Speaker 1: you should just turn off that robot permanently.
Okay, But 605 00:33:47,440 --> 00:33:49,440 Speaker 1: then again, it raises the question, well, 606 00:33:49,440 --> 00:33:52,280 Speaker 1: what about what led us to this malfunction? Is 607 00:33:52,320 --> 00:33:55,000 Speaker 1: there something in the system itself that needs to be 608 00:33:55,040 --> 00:33:58,400 Speaker 1: remedied in order to prevent that from happening again? That's 609 00:33:58,440 --> 00:34:01,360 Speaker 1: a very good point, and the authors bring up exactly 610 00:34:01,360 --> 00:34:04,720 Speaker 1: this concern. Yeah, so they say, well, then again, so 611 00:34:04,800 --> 00:34:07,400 Speaker 1: a robot might not have human rights, where you would 612 00:34:07,400 --> 00:34:09,920 Speaker 1: be concerned about the death penalty for the robot's own 613 00:34:10,000 --> 00:34:12,759 Speaker 1: good, but you might be concerned about what you are 614 00:34:12,880 --> 00:34:15,680 Speaker 1: failing to be able to learn from allowing the robot 615 00:34:15,719 --> 00:34:18,279 Speaker 1: to continue to operate. Like, that could help you 616 00:34:18,680 --> 00:34:21,560 Speaker 1: refine AI in the future. Maybe not letting it continue 617 00:34:21,600 --> 00:34:23,960 Speaker 1: to operate in the wild, but, I don't know, keeping 618 00:34:23,960 --> 00:34:27,160 Speaker 1: it operative in some sense, because, like, whatever it's doing 619 00:34:27,239 --> 00:34:29,520 Speaker 1: is something we need to understand better. So, you know, 620 00:34:29,600 --> 00:34:33,880 Speaker 1: with a robot prison instead of a robot death penalty. Um, 621 00:34:34,080 --> 00:34:36,879 Speaker 1: and of course the human comparison to be made 622 00:34:36,960 --> 00:34:40,000 Speaker 1: is equally frustrating, because you end up with 623 00:34:40,000 --> 00:34:44,160 Speaker 1: scenarios where you'll have, um, a society that's very pro 624 00:34:44,480 --> 00:34:47,920 Speaker 1: death penalty. But then when it comes to doing the 625 00:34:47,960 --> 00:34:50,000 Speaker 1: same sort of backwork and saying, well, what led to 626 00:34:50,040 --> 00:34:53,560 Speaker 1: this case, what were some of the systemic problems, uh, 627 00:34:53,840 --> 00:34:56,799 Speaker 1: cultural problems, societal problems, I don't know, you know, well, 628 00:34:56,840 --> 00:34:59,040 Speaker 1: whatever it is that led to this case that 629 00:34:59,120 --> 00:35:01,560 Speaker 1: needed to be remedied with death, should we correct those 630 00:35:01,600 --> 00:35:04,640 Speaker 1: problems too? And in some cases the answer seems to be, oh, no, 631 00:35:04,680 --> 00:35:07,000 Speaker 1: we're not doing that. We'll just do the 632 00:35:07,040 --> 00:35:10,240 Speaker 1: death penalty as is necessary, even though it doesn't actually 633 00:35:10,280 --> 00:35:13,520 Speaker 1: prevent us from reaching this point over and over again. 634 00:35:13,600 --> 00:35:15,120 Speaker 1: I mean, I feel like it's one of the most 635 00:35:15,160 --> 00:35:18,239 Speaker 1: common features of the tough on crime mentality that it 636 00:35:18,360 --> 00:35:22,719 Speaker 1: is resistant to the idea of understanding why, what led 637 00:35:22,760 --> 00:35:25,560 Speaker 1: a person to commit a crime.
I mean, you've heard 638 00:35:25,960 --> 00:35:27,680 Speaker 1: I'm trying to think of an example of somebody, but 639 00:35:27,760 --> 00:35:30,439 Speaker 1: I mean, you've heard the person say, oh, you know, oh, 640 00:35:30,480 --> 00:35:32,920 Speaker 1: you're just gonna give some sob story about what happened 641 00:35:32,920 --> 00:35:36,239 Speaker 1: when he was a child or something like that. Yeah, yeah, yeah, 642 00:35:36,280 --> 00:35:39,919 Speaker 1: I've definitely encountered that counter-argument before. Yeah, 643 00:35:40,040 --> 00:35:42,000 Speaker 1: but yeah, I mean, I think we're probably on the 644 00:35:42,040 --> 00:35:45,080 Speaker 1: same page that it really probably is very useful to 645 00:35:45,080 --> 00:35:48,080 Speaker 1: try to understand what are the common underlying conditions that 646 00:35:48,160 --> 00:35:51,799 Speaker 1: you can detect when people do something bad. And of 647 00:35:51,840 --> 00:35:54,200 Speaker 1: course the same thing would be true of robots, right, 648 00:35:54,239 --> 00:35:57,560 Speaker 1: and it seems like with robots there would potentially be 649 00:35:57,719 --> 00:36:02,160 Speaker 1: room for true rehabilitation with these things. If not, 650 00:36:02,480 --> 00:36:04,040 Speaker 1: I mean, certainly you could look at it in a 651 00:36:04,080 --> 00:36:07,600 Speaker 1: software/hardware scenario where, like, okay, something's wrong 652 00:36:07,600 --> 00:36:09,960 Speaker 1: with the software? Well, delete that, put in some 653 00:36:10,080 --> 00:36:14,600 Speaker 1: healthy software, um, but keep the hardware. Uh, you know, 654 00:36:14,719 --> 00:36:17,920 Speaker 1: in a way that's rehabilitation right there. It's a 655 00:36:17,920 --> 00:36:21,000 Speaker 1: sort of rehabilitation that's not possible with humans. We can't 656 00:36:21,400 --> 00:36:23,879 Speaker 1: wipe somebody's mental state and replace it with a new, 657 00:36:23,960 --> 00:36:26,560 Speaker 1: factory-clean mental state. You know, we can't go back 658 00:36:26,600 --> 00:36:32,200 Speaker 1: and edit someone's memories and traumas and what have you. Uh. 659 00:36:32,320 --> 00:36:34,759 Speaker 1: But with machines, it seems like we would have more 660 00:36:34,840 --> 00:36:38,920 Speaker 1: ability to do something of that nature. Yeah, though this 661 00:36:39,000 --> 00:36:40,640 Speaker 1: is another thing that comes up, and I mean, of 662 00:36:40,680 --> 00:36:43,239 Speaker 1: course it probably would be useful to try to learn 663 00:36:43,400 --> 00:36:46,920 Speaker 1: from failed AI in order to better perfect AI and robots. 664 00:36:46,960 --> 00:36:50,000 Speaker 1: But on the other hand, in basically the idea 665 00:36:50,000 --> 00:36:54,520 Speaker 1: of trying to rehabilitate or reprogram robots that do wrong, uh, 666 00:36:54,600 --> 00:36:57,000 Speaker 1: the authors point out that there are probably going to be 667 00:36:57,040 --> 00:37:01,320 Speaker 1: a lot of difficulties in enforcing, say, the equivalent 668 00:37:01,360 --> 00:37:03,880 Speaker 1: of court orders against robots.
So one thing that is 669 00:37:03,880 --> 00:37:07,840 Speaker 1: a common remedy in legal cases against humans is 670 00:37:07,920 --> 00:37:10,319 Speaker 1: you might get a restraining order, you know, you need 671 00:37:10,360 --> 00:37:13,719 Speaker 1: to stay fifty feet away from somebody, right, fifty feet 672 00:37:13,719 --> 00:37:16,319 Speaker 1: away from the plaintiff or something like that, or you 673 00:37:16,360 --> 00:37:18,800 Speaker 1: need to not operate a vehicle, or, you know, something. 674 00:37:19,080 --> 00:37:21,440 Speaker 1: There will be cases where it's probably difficult to enforce 675 00:37:21,520 --> 00:37:23,799 Speaker 1: that same kind of thing on a robot, especially on 676 00:37:24,080 --> 00:37:29,759 Speaker 1: robots whose behavior is determined by a complex interaction of 677 00:37:29,920 --> 00:37:33,360 Speaker 1: rules that are not explicitly coded by humans. So, you know, 678 00:37:33,480 --> 00:37:36,440 Speaker 1: most AI these days is not going to be a 679 00:37:36,480 --> 00:37:39,719 Speaker 1: series of if-then statements written by humans, but it's 680 00:37:39,760 --> 00:37:42,799 Speaker 1: going to be determined by machine learning, which can to 681 00:37:43,040 --> 00:37:45,920 Speaker 1: some extent be sort of reverse engineered and somewhat 682 00:37:45,960 --> 00:37:48,600 Speaker 1: understood by humans. But the more complex it is, the 683 00:37:48,600 --> 00:37:50,920 Speaker 1: harder it is to do that. And so there might 684 00:37:50,920 --> 00:37:53,960 Speaker 1: be a lot of cases where, you know, you say, okay, 685 00:37:54,000 --> 00:37:56,560 Speaker 1: this robot needs to do X, it needs to obey, 686 00:37:56,640 --> 00:37:59,080 Speaker 1: you know, stay fifty feet away from the plaintiff or something, 687 00:37:59,520 --> 00:38:02,360 Speaker 1: but the person, whoever is in charge of the robot, 688 00:38:02,440 --> 00:38:04,239 Speaker 1: might say, I don't know how to make it do that. 689 00:38:05,480 --> 00:38:08,400 Speaker 1: Or the possibly more tragic or funnier example would be 690 00:38:08,440 --> 00:38:12,560 Speaker 1: that it discovers the equivalent of the drone with the 691 00:38:12,600 --> 00:38:14,799 Speaker 1: wormhole that we talked about in the last episode, right, 692 00:38:14,840 --> 00:38:17,640 Speaker 1: where the robot is told to keep fifty feet 693 00:38:17,680 --> 00:38:20,480 Speaker 1: of distance between you and the plaintiff. The robot obeys the 694 00:38:20,560 --> 00:38:23,560 Speaker 1: rule by lifting the plaintiff and throwing them fifty feet away. 695 00:38:24,920 --> 00:38:27,480 Speaker 1: So to read another section from Lemley and Casey here, 696 00:38:27,480 --> 00:38:30,560 Speaker 1: they write: To issue an effective injunction that causes a 697 00:38:30,640 --> 00:38:32,879 Speaker 1: robot to do what we want it to do and 698 00:38:32,960 --> 00:38:37,920 Speaker 1: nothing else requires both extreme foresight and extreme precision in 699 00:38:38,040 --> 00:38:41,080 Speaker 1: drafting it. If injunctions are to work at all, courts 700 00:38:41,120 --> 00:38:43,280 Speaker 1: will have to spend a lot more time thinking about 701 00:38:43,320 --> 00:38:46,920 Speaker 1: exactly what they want to happen and all the possible 702 00:38:46,920 --> 00:38:51,239 Speaker 1: circumstances that could arise. If past experience is any indication, 703 00:38:51,520 --> 00:38:54,200 Speaker 1: courts are unlikely to do it very well.
That's not 704 00:38:54,280 --> 00:38:57,600 Speaker 1: a knock on courts. Rather, the problem is twofold: words 705 00:38:57,600 --> 00:39:02,120 Speaker 1: are notoriously bad at conveying our intended meaning, and people 706 00:39:02,160 --> 00:39:06,080 Speaker 1: are notoriously bad at predicting the future. Coders, for their part, 707 00:39:06,120 --> 00:39:08,880 Speaker 1: aren't known for their deep understanding of the law, and 708 00:39:08,960 --> 00:39:12,080 Speaker 1: so we should expect errors in translation even if the 709 00:39:12,120 --> 00:39:15,439 Speaker 1: injunction is flawlessly written. And if we fall into any 710 00:39:15,480 --> 00:39:19,400 Speaker 1: of these traps, the consequences of drafting the injunction incompletely 711 00:39:19,800 --> 00:39:23,279 Speaker 1: may be quite severe. So I'm imagining you issue a court 712 00:39:23,400 --> 00:39:26,240 Speaker 1: order to a robot to do something or not do something. 713 00:39:26,480 --> 00:39:28,880 Speaker 1: You're kind of in the situation of, like, the monkey's 714 00:39:28,920 --> 00:39:32,280 Speaker 1: paw wish, you know, right? Like, oh, you shouldn't 715 00:39:32,280 --> 00:39:34,920 Speaker 1: have phrased it that way. Now you're in for real trouble. 716 00:39:36,000 --> 00:39:37,759 Speaker 1: Or what's the better example of that? There's some 717 00:39:37,800 --> 00:39:39,840 Speaker 1: movie we were just talking about recently with, like, the 718 00:39:40,480 --> 00:39:43,279 Speaker 1: bad genie who, when you phrase a wish wrong, 719 00:39:43,440 --> 00:39:47,040 Speaker 1: you know, works it out on you in a terrible way. Um, 720 00:39:47,200 --> 00:39:49,279 Speaker 1: I don't know, we were talking about Leprechaun or Wishmaster 721 00:39:49,280 --> 00:39:52,320 Speaker 1: or something. Does Leprechaun grant wishes? I don't remember 722 00:39:52,560 --> 00:39:56,759 Speaker 1: Leprechaun granting any wishes. What's he do, then? I think 723 00:39:56,760 --> 00:39:58,759 Speaker 1: the only one I've seen is Leprechaun in Space, so 724 00:39:58,760 --> 00:40:01,360 Speaker 1: I'm a little foggy on the logic. 725 00:40:02,200 --> 00:40:04,440 Speaker 1: I don't think he grants wishes. He just 726 00:40:04,600 --> 00:40:07,880 Speaker 1: like rides around on skateboards and punishes people. He just 727 00:40:07,960 --> 00:40:10,480 Speaker 1: attacks people who try to get his gold and stuff. Well, 728 00:40:10,520 --> 00:40:13,279 Speaker 1: but leprechauns in general are known for this sort of thing, though, 729 00:40:13,400 --> 00:40:17,520 Speaker 1: where, okay, if you're not precise enough, they'll 730 00:40:17,520 --> 00:40:20,279 Speaker 1: work something in there to cheat you out of 731 00:40:20,480 --> 00:40:23,279 Speaker 1: your prize. I'm trying to think, so, like, don't 732 00:40:23,280 --> 00:40:25,480 Speaker 1: come within fifty feet of the plaintiff, and so the 733 00:40:25,800 --> 00:40:28,240 Speaker 1: robot, I don't know, like, it builds a big yardstick 734 00:40:28,320 --> 00:40:30,960 Speaker 1: made out of human feet or something. Yeah, yeah, 735 00:40:31,080 --> 00:40:34,920 Speaker 1: has fifty-foot-long arms, again, to lift them into 736 00:40:34,960 --> 00:40:38,279 Speaker 1: the air. Something to that effect.
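(To make the drafting problem concrete, here is a minimal, purely illustrative sketch, not anything from Lemley and Casey's paper: a learned driving policy wrapped in an injunction check written more or less the way a court might literally phrase it. The state fields, the stand-in policy, and the fifty-foot figure are all hypothetical.)

```python
# Minimal illustrative sketch (not from Lemley and Casey): a learned policy
# wrapped in an "injunction" veto drafted the way a court might phrase it.
# The state fields, the stand-in policy, and the 50-foot figure are hypothetical.

from dataclasses import dataclass
import math
import random

@dataclass
class State:
    robot_xy: tuple       # robot position, in feet
    plaintiff_xy: tuple   # plaintiff position, in feet

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def learned_policy(state):
    """Stand-in for an opaque, machine-learned policy: we can query it for an
    action, but we can't easily rewrite the rules it learned."""
    return random.choice(["forward", "left", "right", "stop"])

def injunction_allows(state):
    """The court order as literally drafted: stay at least fifty feet from the
    plaintiff. Note everything it fails to say: nothing here forbids satisfying
    the letter of the order in an unforeseen way, such as moving the plaintiff."""
    return distance(state.robot_xy, state.plaintiff_xy) >= 50.0

def act(state):
    # The injunction is bolted on from the outside as a veto. If it is drafted
    # incompletely, this veto offers no protection against the loopholes.
    if not injunction_allows(state):
        return "stop"
    return learned_policy(state)

print(act(State(robot_xy=(0, 0), plaintiff_xy=(30, 0))))   # within 50 ft -> "stop"
print(act(State(robot_xy=(0, 0), plaintiff_xy=(100, 0))))  # far enough -> policy's choice
```

(The point of the sketch is that the check only forbids what the drafter thought to forbid; the "throw the plaintiff fifty feet" loophole lives entirely outside it.)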
Or say, say 737 00:40:38,320 --> 00:40:42,799 Speaker 1: it's, uh, for some reason, schools are just too dangerous, 738 00:40:43,239 --> 00:40:46,839 Speaker 1: and this self driving car is not permitted to go 739 00:40:47,000 --> 00:40:50,160 Speaker 1: within, um, you know, so many blocks 740 00:40:50,160 --> 00:40:52,480 Speaker 1: of an active school, and so it calls in a 741 00:40:52,520 --> 00:40:56,120 Speaker 1: bomb threat on that school every day in order to 742 00:40:56,239 --> 00:40:57,880 Speaker 1: get the kids out so that it can actually go 743 00:40:57,960 --> 00:41:00,359 Speaker 1: by. I don't know, something to that effect. Maybe, well, 744 00:41:00,400 --> 00:41:03,560 Speaker 1: that reminds me of a funny observation that, not 745 00:41:03,640 --> 00:41:07,759 Speaker 1: that this is lawful activity, but, uh, a funny observation 746 00:41:07,840 --> 00:41:11,600 Speaker 1: that the authors make towards their conclusion. They bring up 747 00:41:11,719 --> 00:41:18,760 Speaker 1: there are cases of crashes with autonomous vehicles where 748 00:41:18,920 --> 00:41:23,400 Speaker 1: the autonomous vehicle didn't crash into someone. The autonomous vehicle, 749 00:41:23,880 --> 00:41:28,200 Speaker 1: you could argue, caused a crash, but somebody else ran 750 00:41:28,480 --> 00:41:33,440 Speaker 1: into the autonomous vehicle because the autonomous vehicle did something 751 00:41:33,480 --> 00:41:38,400 Speaker 1: that is legal and presumably safe but unexpected. And examples 752 00:41:38,400 --> 00:41:41,719 Speaker 1: here would be driving the speed limit in certain areas 753 00:41:41,920 --> 00:41:45,359 Speaker 1: or coming to a complete stop at an intersection. And 754 00:41:45,400 --> 00:41:47,560 Speaker 1: this is another way that the authors are bringing up 755 00:41:47,600 --> 00:41:51,719 Speaker 1: the idea that, uh, examining robot logic is really going 756 00:41:51,760 --> 00:41:53,640 Speaker 1: to have to cause us to re-examine the way 757 00:41:53,760 --> 00:41:56,240 Speaker 1: humans interact with the law, because there are cases where 758 00:41:56,520 --> 00:42:00,480 Speaker 1: people cause problems that lead to harm by obeying 759 00:42:00,520 --> 00:42:03,239 Speaker 1: the rules. Oh yeah, like, I think of this all 760 00:42:03,280 --> 00:42:06,200 Speaker 1: the time, and I imagine most people do, when driving 761 00:42:06,239 --> 00:42:09,400 Speaker 1: for any long distance, because you have the speed limit 762 00:42:09,440 --> 00:42:13,120 Speaker 1: as it's posted, you have the speed that the majority 763 00:42:13,120 --> 00:42:17,799 Speaker 1: of people are driving, um, you know, you have that 764 00:42:17,840 --> 00:42:19,640 Speaker 1: sort of ten-miles-over zone. Then you have the 765 00:42:19,640 --> 00:42:22,480 Speaker 1: people who are driving exceedingly fast. Then you have that 766 00:42:22,560 --> 00:42:26,479 Speaker 1: minimum speed limit that virtually nobody is driving, forty miles 767 00:42:26,480 --> 00:42:29,759 Speaker 1: per hour on the interstate, but it's posted. Uh, and 768 00:42:29,840 --> 00:42:32,640 Speaker 1: therefore it would be legal to drive forty-one miles 769 00:42:32,760 --> 00:42:35,279 Speaker 1: per hour if you were a robot and weren't in 770 00:42:35,280 --> 00:42:38,640 Speaker 1: a particular hurry. And perhaps that's, you know, maximum efficiency 771 00:42:38,719 --> 00:42:42,319 Speaker 1: for your travel. Uh.
There's so many things 772 00:42:42,320 --> 00:42:45,040 Speaker 1: like that to think about, and I think we're probably 773 00:42:45,080 --> 00:42:48,360 Speaker 1: not even very good at guessing, until we encounter 774 00:42:48,440 --> 00:42:51,440 Speaker 1: them through robots, how many other situations there are like 775 00:42:51,520 --> 00:42:55,480 Speaker 1: this in the world, where you can technically be 776 00:42:55,680 --> 00:42:57,880 Speaker 1: within the bounds of the law, like you're doing what, 777 00:42:58,080 --> 00:43:00,759 Speaker 1: by the book, you're supposed to be doing, but actually it's 778 00:43:00,800 --> 00:43:04,319 Speaker 1: really dangerous to be doing it that way. So how 779 00:43:04,360 --> 00:43:06,880 Speaker 1: are you supposed to interrogate a robot's state of mind 780 00:43:06,960 --> 00:43:09,200 Speaker 1: when it comes to stuff like that? But so, anyway, 781 00:43:09,200 --> 00:43:12,080 Speaker 1: this leads to the authors talking about the difficulties in 782 00:43:12,080 --> 00:43:15,040 Speaker 1: robot state-of-mind evaluation, and they say, quote, 783 00:43:15,280 --> 00:43:18,400 Speaker 1: robots don't seem to be good targets for rules based 784 00:43:18,440 --> 00:43:20,680 Speaker 1: on moral blame or state of mind, but they are 785 00:43:20,719 --> 00:43:23,960 Speaker 1: good at data. So we might consider a legal standard 786 00:43:24,080 --> 00:43:28,320 Speaker 1: that bases liability on how safe the robot is compared 787 00:43:28,360 --> 00:43:31,160 Speaker 1: to others of its type. This would be a sort 788 00:43:31,160 --> 00:43:34,840 Speaker 1: of robotic reasonableness test that could take the form of 789 00:43:34,880 --> 00:43:37,800 Speaker 1: a carrot, such as a safe harbor for self driving 790 00:43:37,840 --> 00:43:42,440 Speaker 1: cars that are significantly safer than average or significantly safer 791 00:43:42,440 --> 00:43:45,880 Speaker 1: than human drivers. Or we could use a stick, holding 792 00:43:45,960 --> 00:43:49,439 Speaker 1: robots liable if they lagged behind their peers, or even 793 00:43:49,440 --> 00:43:51,960 Speaker 1: shutting down the worst ten percent of robots in a 794 00:43:52,040 --> 00:43:55,480 Speaker 1: category every year. So I'm not sure if I agree 795 00:43:55,520 --> 00:43:57,400 Speaker 1: with this, but this was an interesting idea to me. 796 00:43:57,480 --> 00:44:02,520 Speaker 1: So instead of, like, trying to interrogate the underlying logic 797 00:44:02,800 --> 00:44:07,760 Speaker 1: of a type of autonomous car, robot, or whatever, because 798 00:44:08,000 --> 00:44:11,040 Speaker 1: it's so difficult to try to understand the underlying logic, 799 00:44:11,400 --> 00:44:16,040 Speaker 1: what if you just compare its outcomes to other machines 800 00:44:16,200 --> 00:44:19,879 Speaker 1: of the same genre as it, or to humans? I mean, 801 00:44:20,040 --> 00:44:22,040 Speaker 1: you can imagine this working better in the case of 802 00:44:22,080 --> 00:44:24,880 Speaker 1: something like autonomous cars than you can in, you know, 803 00:44:24,960 --> 00:44:28,200 Speaker 1: other cases where the robot is essentially introducing a sort 804 00:44:28,200 --> 00:44:30,719 Speaker 1: of a new genre of agent into the world.
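(As a purely illustrative aside, here is roughly what that comparative standard could look like if you tried to write it down as a rule. The incident rates, model names, and thresholds below are invented for the example; they are not from Lemley and Casey's paper.)

```python
# Illustrative-only sketch of the "robotic reasonableness test" idea: judge a
# robot by its safety record relative to its peers and to humans, rather than
# by its state of mind. All data and thresholds here are made up.

from statistics import mean

# Hypothetical incidents per million miles for one category of robot.
fleet = {
    "model_a": 0.8,
    "model_b": 1.1,
    "model_c": 2.9,
    "model_d": 6.5,
}
HUMAN_BASELINE = 3.0  # hypothetical human-driver incident rate

def reasonableness_verdict(model, rates, human_rate, safe_margin=0.5, worst_fraction=0.1):
    """Carrot: a 'safe harbor' if the robot is significantly safer than both its
    peers and human drivers. Stick: presumptive liability if it lags its peers,
    and shutdown if it falls in the worst fraction of its category."""
    peer_avg = mean(rates.values())
    rate = rates[model]
    ranked = sorted(rates, key=rates.get, reverse=True)   # worst record first
    worst_n = max(1, int(len(ranked) * worst_fraction))
    if model in ranked[:worst_n]:
        return "shut down"
    if rate <= safe_margin * min(peer_avg, human_rate):
        return "safe harbor"
    if rate > peer_avg:
        return "presumptively liable"
    return "ordinary negligence analysis"

for m in fleet:
    print(m, "->", reasonableness_verdict(m, fleet, HUMAN_BASELINE))
```

(The comparison, of course, only makes sense when there is a meaningful peer group or human baseline to compare against, which is the point the hosts pick up next.)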
But 805 00:44:30,760 --> 00:44:33,880 Speaker 1: autonomous cars are in many ways going to be roughly 806 00:44:33,920 --> 00:44:38,680 Speaker 1: equivalent in outcomes to human drivers in regular cars, 807 00:44:39,000 --> 00:44:40,920 Speaker 1: and so would it make more sense to try to 808 00:44:41,040 --> 00:44:46,600 Speaker 1: understand the reasoning behind each autonomous vehicle's decision making when 809 00:44:46,640 --> 00:44:50,080 Speaker 1: it gets into an accident, or, uh, to compare its 810 00:44:50,160 --> 00:44:53,239 Speaker 1: behavior to, I don't know, some kind of aggregate or 811 00:44:53,400 --> 00:44:57,720 Speaker 1: standard of human driving or other autonomous vehicles? Or maybe 812 00:44:57,719 --> 00:45:00,120 Speaker 1: we just tell it, look, most humans 813 00:45:00,120 --> 00:45:02,680 Speaker 1: are, like, selfish bastards, so just go 814 00:45:02,760 --> 00:45:05,000 Speaker 1: do what you gotta do. Well, I mean, I would 815 00:45:05,000 --> 00:45:09,240 Speaker 1: say that there is a downside risk to not taking 816 00:45:09,280 --> 00:45:13,840 Speaker 1: this stuff seriously enough, which is, uh, something 817 00:45:13,920 --> 00:45:17,400 Speaker 1: like that, I mean, something like essentially letting robots go 818 00:45:17,560 --> 00:45:21,120 Speaker 1: hog wild. Because they could well be designed that way, and I'm not 819 00:45:21,160 --> 00:45:24,080 Speaker 1: saying that anybody would be, you know, maliciously going wahaha 820 00:45:24,200 --> 00:45:26,960 Speaker 1: and rubbing their hands together while they make it this way. 821 00:45:27,040 --> 00:45:30,399 Speaker 1: But, you know, you could imagine a situation where there 822 00:45:30,400 --> 00:45:33,400 Speaker 1: are more and more robots entering the world, where the, uh, 823 00:45:33,680 --> 00:45:38,560 Speaker 1: the corporate responsibility for them is so diffuse that nobody 824 00:45:38,600 --> 00:45:42,319 Speaker 1: can locate the one person who's responsible for the robot's behavior, 825 00:45:42,520 --> 00:45:45,680 Speaker 1: and thus nobody ever really makes the robot, you know, 826 00:45:46,440 --> 00:45:49,239 Speaker 1: behave morally at all. So robots just sort of, like, 827 00:45:49,560 --> 00:45:53,840 Speaker 1: become a new class of superhuman psychopaths that are immune 828 00:45:53,920 --> 00:45:57,600 Speaker 1: from all consequences. In fact, I would say that is 829 00:45:57,640 --> 00:46:00,880 Speaker 1: a robot apocalypse scenario I've never seen done in 830 00:46:00,880 --> 00:46:03,280 Speaker 1: a movie before. It's always, like, when the robots are terrible 831 00:46:03,320 --> 00:46:06,080 Speaker 1: to us, it's always, like, organized, it's always like that, 832 00:46:06,239 --> 00:46:09,240 Speaker 1: you know, okay, they decide humans are a cancer 833 00:46:09,360 --> 00:46:11,319 Speaker 1: or something, and so they're going to wipe us out. 834 00:46:11,600 --> 00:46:15,080 Speaker 1: What if, instead, the problem is just that robots, 835 00:46:15,160 --> 00:46:19,920 Speaker 1: sort of by corporate negligence and distributed responsibility for their 836 00:46:19,920 --> 00:46:23,480 Speaker 1: behavior among humans, just end up being ultimately 837 00:46:23,640 --> 00:46:27,240 Speaker 1: amoral, and we're flooded with these amoral critters running 838 00:46:27,239 --> 00:46:30,080 Speaker 1: around all over the place that are pretty smart and 839 00:46:30,120 --> 00:46:33,120 Speaker 1: really powerful.
I guess you do see some 840 00:46:33,239 --> 00:46:38,240 Speaker 1: shades of this, um, in some futuristic sci fi genres. 841 00:46:38,239 --> 00:46:41,800 Speaker 1: I'm particularly thinking of some of the models of the cyberpunk genre, 842 00:46:41,920 --> 00:46:47,160 Speaker 1: where the corporation model has been embraced 843 00:46:47,239 --> 00:46:52,160 Speaker 1: as the way of understanding the future of AIs, um, 844 00:46:52,239 --> 00:46:54,480 Speaker 1: but yeah, I think for the most 845 00:46:54,480 --> 00:46:58,120 Speaker 1: part this scenario hasn't been explored as much. 846 00:46:58,200 --> 00:46:59,560 Speaker 1: We tend to want to go for 847 00:46:59,640 --> 00:47:03,239 Speaker 1: the evil overlord or the out-of-control killbot rather 848 00:47:03,280 --> 00:47:07,000 Speaker 1: than this. Right, yeah, you want an identifiable villain, 849 00:47:07,080 --> 00:47:09,879 Speaker 1: just like they do in the courts. But yeah, sometimes, uh, 850 00:47:10,160 --> 00:47:14,000 Speaker 1: sometimes corporations or manufacturers can be kind of slippery in 851 00:47:14,160 --> 00:47:23,440 Speaker 1: saying, like, whose thing is this? So, I was 852 00:47:23,440 --> 00:47:25,840 Speaker 1: thinking about all this, about the idea of, you know, 853 00:47:25,880 --> 00:47:29,480 Speaker 1: particularly self driving cars being, like, the main example we 854 00:47:29,480 --> 00:47:32,080 Speaker 1: ruminate on with this sort of thing. Um, 855 00:47:32,239 --> 00:47:35,239 Speaker 1: I decided to look to the book Life three 856 00:47:35,239 --> 00:47:39,120 Speaker 1: point oh by Max Tegmark, um, which is a 857 00:47:39,200 --> 00:47:41,040 Speaker 1: really great book that came out a couple 858 00:47:41,080 --> 00:47:43,719 Speaker 1: of years back, and Max Tegmark is a Swedish 859 00:47:43,719 --> 00:47:47,920 Speaker 1: American physicist, cosmologist, and machine learning researcher. If you've been 860 00:47:47,920 --> 00:47:49,800 Speaker 1: listening to the show for a while, you might remember 861 00:47:49,800 --> 00:47:52,440 Speaker 1: that I briefly interviewed him, had, like, a mini interview 862 00:47:52,480 --> 00:47:56,719 Speaker 1: with him at the World Science Festival, uh, several years back. Yeah, 863 00:47:56,719 --> 00:47:59,840 Speaker 1: and I know I've referenced his book Our Mathematical Universe 864 00:48:00,120 --> 00:48:03,480 Speaker 1: in previous episodes. Yeah. So these are both 865 00:48:03,480 --> 00:48:06,919 Speaker 1: books intended for a wide audience, very readable. Life 866 00:48:06,920 --> 00:48:09,600 Speaker 1: three point oh does a fabulous job of 867 00:48:09,840 --> 00:48:14,120 Speaker 1: walking the reader through these various scenarios of, uh, in 868 00:48:14,200 --> 00:48:16,919 Speaker 1: many cases, of AI ascendancy and how it could work. 869 00:48:17,360 --> 00:48:21,000 Speaker 1: And he gets into this topic of, um, 870 00:48:21,120 --> 00:48:25,080 Speaker 1: legality and, um, AI and self driving cars. 871 00:48:25,320 --> 00:48:28,040 Speaker 1: Now, he does not make any allusions to Johnny Cab 872 00:48:28,080 --> 00:48:30,880 Speaker 1: in Total Recall, but I'm going to make allusions to 873 00:48:30,960 --> 00:48:32,839 Speaker 1: Johnny Cab in Total Recall as a way of sort 874 00:48:32,840 --> 00:48:36,879 Speaker 1: of putting a manic face on self driving cars.
How 875 00:48:36,920 --> 00:48:40,759 Speaker 1: did I get here? The door opened, you got in. 876 00:48:42,320 --> 00:48:46,200 Speaker 1: It's sound reasoning. So, um, imagine that you're in a 877 00:48:46,239 --> 00:48:49,840 Speaker 1: self driving Johnny Cab and it wrecks. So the basic 878 00:48:49,920 --> 00:48:52,920 Speaker 1: question you might ask is, are you responsible for this 879 00:48:53,000 --> 00:48:56,000 Speaker 1: wreck as the occupant? It seems ridiculous to think so, right? 880 00:48:56,040 --> 00:48:59,400 Speaker 1: You weren't driving it, you just told it where to go. Um, 881 00:48:59,520 --> 00:49:02,319 Speaker 1: are the owners of the Johnny Cab responsible? Now, this 882 00:49:02,360 --> 00:49:06,239 Speaker 1: seems more reasonable, right? Sure, but again, it runs into 883 00:49:06,280 --> 00:49:10,040 Speaker 1: a lot of the problems we were just raising there. Yeah, 884 00:49:10,040 --> 00:49:12,279 Speaker 1: but Tegmark points out that there is this other 885 00:49:12,320 --> 00:49:16,719 Speaker 1: option, that American legal scholar David Vladeck has 886 00:49:16,760 --> 00:49:19,560 Speaker 1: pointed out that perhaps it is the Johnny Cab itself 887 00:49:19,600 --> 00:49:23,120 Speaker 1: that should be responsible. Now, we've already been discussing 888 00:49:23,120 --> 00:49:24,759 Speaker 1: a lot of this, like, what does that mean? What 889 00:49:24,800 --> 00:49:27,279 Speaker 1: does it mean if a Johnny Cab, 890 00:49:27,360 --> 00:49:30,800 Speaker 1: a self driving vehicle, is responsible for the wreck that 891 00:49:30,960 --> 00:49:33,200 Speaker 1: it is in? You know, how do we even 892 00:49:33,320 --> 00:49:36,080 Speaker 1: begin to make sense of that statement? Do 893 00:49:36,120 --> 00:49:40,160 Speaker 1: you take the damages out of the Johnny Cab's bank account? Well, 894 00:49:40,840 --> 00:49:42,600 Speaker 1: that's the thing. We kind of end up getting 895 00:49:42,640 --> 00:49:46,480 Speaker 1: into that scenario, because if the Johnny Cab has responsibilities, 896 00:49:47,280 --> 00:49:50,080 Speaker 1: then, Tegmark writes, why not let it own 897 00:49:50,200 --> 00:49:53,319 Speaker 1: car insurance? Not only would this allow for it to 898 00:49:53,480 --> 00:49:57,360 Speaker 1: financially handle accidents, it would also potentially serve as a 899 00:49:57,400 --> 00:50:01,719 Speaker 1: design incentive and a purchasing incentive. So the 900 00:50:01,800 --> 00:50:04,720 Speaker 1: idea here is the better self driving cars with better 901 00:50:04,760 --> 00:50:09,120 Speaker 1: records will qualify for lower premiums, and the less reliable 902 00:50:09,160 --> 00:50:11,680 Speaker 1: models will have to pay higher premiums. So if the 903 00:50:11,760 --> 00:50:16,040 Speaker 1: Johnny Cab runs into enough stuff and explodes enough, then 904 00:50:16,120 --> 00:50:18,719 Speaker 1: that brand of Johnny Cabs simply won't be able to 905 00:50:18,760 --> 00:50:22,040 Speaker 1: take to the streets anymore. Oh, this is interesting, okay. 906 00:50:22,040 --> 00:50:24,000 Speaker 1: So, in other words, this is very much the 907 00:50:24,040 --> 00:50:27,640 Speaker 1: economic model that we were discussing earlier. So when Schwarzenegger 908 00:50:27,719 --> 00:50:29,880 Speaker 1: hops in and Johnny Cab says, where would you like 909 00:50:29,920 --> 00:50:33,160 Speaker 1: to go? And he says, drive, just drive anywhere?
And 910 00:50:33,239 --> 00:50:35,359 Speaker 1: he says, I don't know where that is. And so 911 00:50:35,640 --> 00:50:39,400 Speaker 1: his incentive to not just, like, blindly plow forward 912 00:50:39,520 --> 00:50:41,960 Speaker 1: is, how much would it cost if I ran into 913 00:50:42,000 --> 00:50:46,880 Speaker 1: something when I did that? Yeah, exactly. But Tegmark 914 00:50:47,080 --> 00:50:49,760 Speaker 1: points out that the implications of letting a self driving 915 00:50:49,800 --> 00:50:53,000 Speaker 1: car own car insurance ultimately go beyond this situation, 916 00:50:53,239 --> 00:50:55,920 Speaker 1: because how does the Johnny Cab pay for its insurance 917 00:50:55,960 --> 00:51:00,680 Speaker 1: policy that, again, it hypothetically owns in this scenario? Should 918 00:51:00,680 --> 00:51:03,200 Speaker 1: we let it own money in order to do this? 919 00:51:03,320 --> 00:51:05,360 Speaker 1: Does it have its own bank account, like you alluded 920 00:51:05,360 --> 00:51:09,759 Speaker 1: to earlier, especially if it's operating as an independent contractor 921 00:51:09,760 --> 00:51:13,960 Speaker 1: of sorts, perhaps paying back certain percentages or fees to 922 00:51:14,040 --> 00:51:16,560 Speaker 1: a greater cab company? Like, maybe that's how it would work. 923 00:51:16,960 --> 00:51:20,040 Speaker 1: And if it can own money, well, can it also 924 00:51:20,080 --> 00:51:23,200 Speaker 1: own property? Like, perhaps at the very least it rents 925 00:51:23,239 --> 00:51:28,360 Speaker 1: garage space, uh, but maybe it owns garage space for itself, um, 926 00:51:28,400 --> 00:51:31,040 Speaker 1: you know, or a maintenance facility or the tools that 927 00:51:31,120 --> 00:51:33,799 Speaker 1: work on it. Does it own those as well? Does 928 00:51:33,800 --> 00:51:36,759 Speaker 1: it own spare parts? Does it own the bottles of 929 00:51:36,800 --> 00:51:41,160 Speaker 1: water that go inside of itself for its customers? Does 930 00:51:41,200 --> 00:51:43,759 Speaker 1: it own the complimentary wet towels for your head that 931 00:51:43,840 --> 00:51:48,239 Speaker 1: it keeps on hand? Yeah. Um, I mean, if nothing else, 932 00:51:48,239 --> 00:51:50,160 Speaker 1: it seems like if it owned things, like, the more 933 00:51:50,200 --> 00:51:53,520 Speaker 1: things it owns, the more things that you could potentially, 934 00:51:54,120 --> 00:51:58,520 Speaker 1: um, invoke a penalty upon through the legal system. 935 00:51:58,600 --> 00:52:01,800 Speaker 1: And if they can own money and property and, again, 936 00:52:02,640 --> 00:52:07,040 Speaker 1: potentially themselves, then Tegmark takes it a step further. He writes, 937 00:52:07,280 --> 00:52:10,200 Speaker 1: if this is the case, quote, there's nothing legally stopping 938 00:52:10,239 --> 00:52:13,799 Speaker 1: smart computers from making money on the stock market and 939 00:52:13,920 --> 00:52:17,440 Speaker 1: using it to buy online services. Once the computer starts 940 00:52:17,440 --> 00:52:20,560 Speaker 1: paying humans to work for it, it can accomplish anything 941 00:52:20,600 --> 00:52:23,520 Speaker 1: that humans can do. I see.
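(Again as a purely illustrative aside, here is a minimal sketch of the insurance incentive Tegmark describes: premiums rise with the cab's claim record until an unreliable model is effectively priced off the road. All of the numbers and names below are invented for the example.)

```python
# Illustrative-only sketch of a self-driving cab "owning" its insurance policy:
# premiums scale with its claim record, and past a certain point the cab can no
# longer afford to operate. Every figure here is hypothetical.

BASE_PREMIUM = 200.0        # hypothetical monthly premium, in dollars
PER_CLAIM_SURCHARGE = 150.0 # hypothetical surcharge per claim in the last year
FARE_INCOME = 900.0         # hypothetical monthly fare revenue

def monthly_premium(claims_last_year):
    return BASE_PREMIUM + PER_CLAIM_SURCHARGE * claims_last_year

def can_keep_operating(claims_last_year):
    # If premiums eat the cab's entire income, that model effectively gets
    # priced off the road: the "economic" version of a penalty.
    return monthly_premium(claims_last_year) < FARE_INCOME

for claims in (0, 2, 5, 8):
    status = "still on the road" if can_keep_operating(claims) else "priced off the road"
    print(f"{claims} claims -> ${monthly_premium(claims):.0f}/mo, {status}")
```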
So you might say 942 00:52:23,560 --> 00:52:26,920 Speaker 1: that even if you're skeptical of an AI's ability to 943 00:52:27,080 --> 00:52:30,960 Speaker 1: have, say, the emotional and cultural intelligence to, uh, 944 00:52:31,200 --> 00:52:34,880 Speaker 1: write a popular screenplay or, you know, create a popular movie, 945 00:52:34,920 --> 00:52:37,319 Speaker 1: it just doesn't get humans well enough to do that, 946 00:52:37,640 --> 00:52:39,960 Speaker 1: it could at least, if it had its own economic 947 00:52:40,000 --> 00:52:44,000 Speaker 1: agency, pay humans to do that. Right, right, and, um, 948 00:52:44,040 --> 00:52:46,359 Speaker 1: elsewhere in the book, Tegmark gets into a lot 949 00:52:46,360 --> 00:52:49,440 Speaker 1: of this, especially the entertainment idea, presenting a scenario by 950 00:52:49,440 --> 00:52:53,680 Speaker 1: which machines like this could game the entertainment industry in 951 00:52:53,800 --> 00:52:57,240 Speaker 1: order to ascend to, you know, extreme financial power. 952 00:52:57,520 --> 00:52:59,120 Speaker 1: A lot of it is just, like, sort of playing 953 00:52:59,160 --> 00:53:03,680 Speaker 1: the algorithms, you know, like doing corporation stuff and then 954 00:53:03,920 --> 00:53:08,080 Speaker 1: hiring humans as necessary to bring that to fruition. 955 00:53:08,160 --> 00:53:10,880 Speaker 1: You know, I mean, would this be all that different 956 00:53:10,960 --> 00:53:13,160 Speaker 1: from any of our, like, I don't know, Disney or 957 00:53:13,200 --> 00:53:17,839 Speaker 1: comic book studios or whatever exists today? Yeah, yeah, exactly. Um, 958 00:53:18,040 --> 00:53:20,040 Speaker 1: so, you know, we already know the sort of prowess 959 00:53:20,080 --> 00:53:22,520 Speaker 1: that computers have when it comes to the stock market. 960 00:53:22,560 --> 00:53:25,360 Speaker 1: Tegmark, you know, points out that 961 00:53:25,400 --> 00:53:27,360 Speaker 1: we have examples of this in the world already 962 00:53:27,840 --> 00:53:30,200 Speaker 1: where we're using AI, and he writes that it could 963 00:53:30,280 --> 00:53:33,120 Speaker 1: lead to a situation where most of the economy is 964 00:53:33,200 --> 00:53:36,600 Speaker 1: owned and controlled by machines. And this, he warns, is 965 00:53:36,640 --> 00:53:38,759 Speaker 1: not that crazy, considering that we already live in a 966 00:53:38,800 --> 00:53:42,760 Speaker 1: world where non-human entities called corporations exert tremendous power 967 00:53:43,000 --> 00:53:45,719 Speaker 1: and hold tremendous wealth. I think there is a 968 00:53:45,840 --> 00:53:48,759 Speaker 1: large amount of overlap between the concept of a corporation and 969 00:53:48,800 --> 00:53:52,319 Speaker 1: the concept of an AI. Yeah, and, uh, then 970 00:53:52,320 --> 00:53:54,719 Speaker 1: there are steps beyond this as well, if machines 971 00:53:54,920 --> 00:53:56,839 Speaker 1: can do all of these things. So if 972 00:53:57,120 --> 00:54:00,400 Speaker 1: a machine can own property, if 973 00:54:00,440 --> 00:54:03,640 Speaker 1: it can potentially own itself, if it 974 00:54:03,680 --> 00:54:06,240 Speaker 1: can buy things, if it can invest in the stock market, 975 00:54:06,280 --> 00:54:09,480 Speaker 1: if it can accumulate financial power, if it can do 976 00:54:09,520 --> 00:54:12,200 Speaker 1: all these things, then should it also get the right 977 00:54:12,239 --> 00:54:16,440 Speaker 1: to vote as well?
You know, it's potentially paying taxes. 978 00:54:16,680 --> 00:54:19,680 Speaker 1: Does it get to vote in addition to that? And 979 00:54:19,719 --> 00:54:23,160 Speaker 1: then if not, why, and what becomes the caveat that 980 00:54:23,200 --> 00:54:25,880 Speaker 1: determines the right to vote in this scenario? Now, if 981 00:54:25,880 --> 00:54:28,680 Speaker 1: I understand you right, I think you're saying that Tegmark 982 00:54:28,760 --> 00:54:31,799 Speaker 1: is exploring these possibilities as stuff that he 983 00:54:31,840 --> 00:54:35,320 Speaker 1: thinks might not be as implausible as people would suspect, 984 00:54:35,480 --> 00:54:38,120 Speaker 1: rather than as stuff where he's like, here's my ideal world. 985 00:54:38,760 --> 00:54:41,560 Speaker 1: Right, right. He's saying, like, look, you know, this is 986 00:54:41,600 --> 00:54:44,120 Speaker 1: already where we are. We know what AI can do, 987 00:54:44,320 --> 00:54:47,120 Speaker 1: and we can easily extrapolate where it might go. These 988 00:54:47,160 --> 00:54:49,760 Speaker 1: are the scenarios we should potentially be prepared 989 00:54:49,800 --> 00:54:52,960 Speaker 1: for, in much the same way that nobody really, 990 00:54:53,040 --> 00:54:56,600 Speaker 1: at an intuitive level, believes that a corporation is a person, 991 00:54:56,880 --> 00:54:59,640 Speaker 1: like a human being is a person. Uh, 992 00:54:59,680 --> 00:55:02,200 Speaker 1: you know, it's at least done well enough at convincing 993 00:55:02,239 --> 00:55:04,680 Speaker 1: the courts that it is a person. So would you 994 00:55:04,719 --> 00:55:06,960 Speaker 1: not be able to expect the same coming out of 995 00:55:07,000 --> 00:55:11,880 Speaker 1: machines that were sophisticated enough? Right. And convincing the courts, uh, 996 00:55:12,040 --> 00:55:14,440 Speaker 1: I'm glad you brought that up, because that's another 997 00:55:14,520 --> 00:55:17,719 Speaker 1: area that Tegmark gets into. So what does it mean 998 00:55:17,880 --> 00:55:22,879 Speaker 1: when judges have to potentially judge AIs? Um, would these 999 00:55:22,920 --> 00:55:26,279 Speaker 1: be specialized judges with technical knowledge and understanding of the 1000 00:55:26,520 --> 00:55:29,600 Speaker 1: complex systems involved, uh, you know, or is it going 1001 00:55:29,640 --> 00:55:32,000 Speaker 1: to be a human judge judging a machine as if 1002 00:55:32,000 --> 00:55:35,200 Speaker 1: it were a human? Um, you know, both of these 1003 00:55:35,239 --> 00:55:39,320 Speaker 1: are possibilities. But then here's another idea that Tegmark discusses 1004 00:55:39,320 --> 00:55:43,440 Speaker 1: at length: what if we use robo judges? Um, and 1005 00:55:43,520 --> 00:55:46,600 Speaker 1: this ultimately goes beyond the idea of using robo judges 1006 00:55:46,640 --> 00:55:48,719 Speaker 1: to judge the robots, but potentially using them to 1007 00:55:48,800 --> 00:55:52,799 Speaker 1: judge humans as well. Um, because while human judges have 1008 00:55:52,840 --> 00:55:56,840 Speaker 1: limited ability to understand the technical knowledge of cases, robo judges, 1009 00:55:57,120 --> 00:56:00,320 Speaker 1: Tegmark points out, would in theory have unlimited 1010 00:56:00,400 --> 00:56:03,520 Speaker 1: learning and memory capacity.
They could also be copied, so 1011 00:56:03,560 --> 00:56:06,240 Speaker 1: there would be no staffing shortages. You need two judges today? 1012 00:56:06,239 --> 00:56:10,040 Speaker 1: We'll just copy and paste. Right, that's a simplification, but, 1013 00:56:10,040 --> 00:56:13,040 Speaker 1: you know, essentially, once you have one, you can 1014 00:56:13,080 --> 00:56:16,600 Speaker 1: have many. Uh, this way justice could be cheaper and 1015 00:56:16,840 --> 00:56:20,320 Speaker 1: just maybe a little more just by removing the human equation, 1016 00:56:20,719 --> 00:56:24,040 Speaker 1: or at least so the machines would argue, right? But 1017 00:56:24,080 --> 00:56:25,839 Speaker 1: then the other side of the thing is, we've 1018 00:56:25,880 --> 00:56:30,440 Speaker 1: already discussed how human-created AI is susceptible 1019 00:56:30,440 --> 00:56:33,840 Speaker 1: to bias, so we could potentially, you know, create, we 1020 00:56:33,880 --> 00:56:36,960 Speaker 1: could create a robo judge, but if we're not careful, 1021 00:56:37,000 --> 00:56:39,000 Speaker 1: it could be bugged, it could be hacked, it could 1022 00:56:39,000 --> 00:56:42,000 Speaker 1: be otherwise compromised, where it just might have these various 1023 00:56:42,040 --> 00:56:44,879 Speaker 1: biases that it is, um, using when 1024 00:56:44,920 --> 00:56:48,480 Speaker 1: it's judging humans or machines. And then you'd have to 1025 00:56:48,520 --> 00:56:51,319 Speaker 1: have public trust in such a system as well. So 1026 00:56:51,600 --> 00:56:53,239 Speaker 1: we run into a lot of the same problems we 1027 00:56:53,320 --> 00:56:56,360 Speaker 1: run into when we're talking about trusting the machine to 1028 00:56:56,480 --> 00:57:01,200 Speaker 1: drive us across town. Yeah. Like, with a robot judge, 1029 00:57:01,320 --> 00:57:04,400 Speaker 1: even if, now, I'm certainly not granting this, because I 1030 00:57:04,640 --> 00:57:06,839 Speaker 1: don't necessarily believe this is the case, but even if 1031 00:57:06,880 --> 00:57:10,120 Speaker 1: it were true that a robot judge would be better 1032 00:57:10,360 --> 00:57:13,080 Speaker 1: at judging cases than a human, and, like, more fair 1033 00:57:13,120 --> 00:57:15,920 Speaker 1: and more just, you could run into problems with public 1034 00:57:15,920 --> 00:57:18,880 Speaker 1: trust in those kinds of judges because, for example, they 1035 00:57:18,920 --> 00:57:23,160 Speaker 1: make the calculations explicit, right? The same way we talked about, 1036 00:57:23,160 --> 00:57:26,960 Speaker 1: like, placing a certain value on a human life. Uh, 1037 00:57:27,000 --> 00:57:29,160 Speaker 1: it's something that we all sort of do, but we 1038 00:57:29,240 --> 00:57:31,360 Speaker 1: don't like to think about it or acknowledge we do it. 1039 00:57:31,400 --> 00:57:33,520 Speaker 1: We just do it at an intuitive level that's sort 1040 00:57:33,560 --> 00:57:36,560 Speaker 1: of hidden in the dark recesses of the mind, and 1041 00:57:36,560 --> 00:57:38,920 Speaker 1: don't think about it. A machine would have 1042 00:57:38,960 --> 00:57:41,400 Speaker 1: to, like, put a number on that, and for 1043 00:57:41,520 --> 00:57:44,480 Speaker 1: public transparency reasons, that number would probably need to be 1044 00:57:44,480 --> 00:57:48,800 Speaker 1: publicly accessible.
Yeah, another area, and this 1045 00:57:48,840 --> 00:57:51,800 Speaker 1: is another topic in robotics that, you know, we could 1046 00:57:51,840 --> 00:57:55,200 Speaker 1: easily discuss at extreme length, but there's robotic 1047 00:57:55,240 --> 00:57:58,120 Speaker 1: surgery to consider. You know, while we continue to make 1048 00:57:58,160 --> 00:58:01,680 Speaker 1: great strides in robotic surgery, and in some cases the 1049 00:58:01,760 --> 00:58:05,760 Speaker 1: robotic surgery route is indisputably the safest route, there remains 1050 00:58:05,760 --> 00:58:09,760 Speaker 1: a lot of discussion regarding, um, you know, how robot 1051 00:58:09,800 --> 00:58:13,800 Speaker 1: surgery is, um, progressing, where it's headed, and how 1052 00:58:13,880 --> 00:58:19,360 Speaker 1: malpractice potentially factors into everything. Um, now, despite the 1053 00:58:19,400 --> 00:58:21,560 Speaker 1: advances that we've seen, we're not quite at the medical 1054 00:58:21,680 --> 00:58:25,240 Speaker 1: droid level, you know, like the autonomous, uh, surgical bot. 1055 00:58:25,840 --> 00:58:28,200 Speaker 1: But as reported by Denise Grady in the New York 1056 00:58:28,200 --> 00:58:32,200 Speaker 1: Times just last year, AI coupled with new imaging techniques 1057 00:58:32,200 --> 00:58:35,360 Speaker 1: is already showing promise as a means of diagnosing tumors 1058 00:58:35,600 --> 00:58:40,640 Speaker 1: as accurately as human physicians, but at far greater speed. Um, 1059 00:58:40,680 --> 00:58:45,120 Speaker 1: so it's interesting to think about these advancements, but 1060 00:58:45,160 --> 00:58:48,480 Speaker 1: at the same time realize that, particularly in AI 1061 00:58:48,640 --> 00:58:52,040 Speaker 1: and medicine, 1062 00:58:52,080 --> 00:58:55,600 Speaker 1: we're talking about AI-assisted medicine or AI- 1063 00:58:55,640 --> 00:59:00,480 Speaker 1: assisted surgery. So the human-AI relationship is in these 1064 00:59:00,560 --> 00:59:04,320 Speaker 1: cases not one of replacement, but of cooperation, at least 1065 00:59:05,000 --> 00:59:08,400 Speaker 1: for the near term. Yeah, yeah, I see that, 1066 00:59:08,440 --> 00:59:11,400 Speaker 1: because, I mean, there are many reasons for that, but 1067 00:59:11,480 --> 00:59:13,560 Speaker 1: one of the reasons that strikes me 1068 00:59:13,640 --> 00:59:16,960 Speaker 1: is it comes back to a perhaps sometimes irrational desire 1069 00:59:17,120 --> 00:59:20,080 Speaker 1: to inflict punishment on a person who has done wrong, 1070 00:59:20,160 --> 00:59:22,360 Speaker 1: even if it doesn't, like, help the person who has 1071 00:59:22,400 --> 00:59:26,360 Speaker 1: been harmed in the first place. Um.
There are 1072 00:59:26,400 --> 00:59:29,120 Speaker 1: certain just, like, intuitions we have, and I think one 1073 00:59:29,120 --> 00:59:32,760 Speaker 1: of them is we feel more confident if there 1074 00:59:32,840 --> 00:59:36,880 Speaker 1: is somebody in the loop who would suffer from the 1075 00:59:36,960 --> 00:59:40,320 Speaker 1: consequences of failure. You know, like, it 1076 00:59:40,360 --> 00:59:43,680 Speaker 1: doesn't just help that, like, oh no, I assure 1077 00:59:43,680 --> 00:59:47,560 Speaker 1: you, the surgical robot has, you know, strong incentives within 1078 00:59:47,600 --> 00:59:50,840 Speaker 1: its programming not to fail, not to botch this surgery 1079 00:59:50,880 --> 00:59:52,720 Speaker 1: and, you know, remove one of your 1080 00:59:52,800 --> 00:59:56,200 Speaker 1: vital organs. Yeah. Like, on some level, 1081 00:59:56,240 --> 00:59:59,360 Speaker 1: we want that person to know their career's on the line, 1082 00:59:59,360 --> 01:00:01,240 Speaker 1: or their reputation is on the line. You know, 1083 01:00:01,600 --> 01:00:06,120 Speaker 1: I think most people would feel better going under surgery 1084 01:00:06,240 --> 01:00:09,880 Speaker 1: with the knowledge that if the surgeon were to do 1085 01:00:09,960 --> 01:00:12,120 Speaker 1: something bad to you, there would be consequences. It's not just enough to know 1086 01:00:12,200 --> 01:00:14,760 Speaker 1: that the surgeon is going to try really hard 1087 01:00:14,840 --> 01:00:17,680 Speaker 1: not to do something bad to you. You also want 1088 01:00:17,760 --> 01:00:20,960 Speaker 1: the, like, second-order guarantee that, like, if the surgeon 1089 01:00:21,080 --> 01:00:23,320 Speaker 1: were to screw up and take out one of 1090 01:00:23,320 --> 01:00:26,800 Speaker 1: your vital organs, something bad would happen to them and 1091 01:00:26,880 --> 01:00:30,440 Speaker 1: they would suffer. But with a robot, they wouldn't suffer. 1092 01:00:30,560 --> 01:00:34,320 Speaker 1: It's just like, oh, whoops. I wonder if we end 1093 01:00:34,400 --> 01:00:37,200 Speaker 1: up reaching a point with this, in this discussion where, 1094 01:00:37,240 --> 01:00:40,320 Speaker 1: you know, we're talking about robots hiring people, do we 1095 01:00:40,400 --> 01:00:43,920 Speaker 1: end up in a position where AIs hire 1096 01:00:44,000 --> 01:00:48,120 Speaker 1: humans not so much because they need human, um, expertise 1097 01:00:48,600 --> 01:00:53,360 Speaker 1: or human skills or human senses, but the ability to feel pain? Yeah, 1098 01:00:53,400 --> 01:00:56,439 Speaker 1: and to be culpable. Like, they need somebody that will, 1099 01:00:56,600 --> 01:01:01,240 Speaker 1: like, essentially AIs hiring humans to be scapegoats in the 1100 01:01:01,280 --> 01:01:04,640 Speaker 1: system or in their 1101 01:01:04,680 --> 01:01:08,680 Speaker 1: particular job. Uh, so they're like, yeah, we need a 1102 01:01:08,760 --> 01:01:10,400 Speaker 1: human in the loop. Not because I need a human 1103 01:01:10,440 --> 01:01:12,400 Speaker 1: in the loop, I can do this by myself, but 1104 01:01:12,520 --> 01:01:15,320 Speaker 1: if something goes wrong, you know, there's always 1105 01:01:15,320 --> 01:01:17,800 Speaker 1: a certain chance that something will happen, I need a 1106 01:01:17,880 --> 01:01:21,080 Speaker 1: human there that will bear the blame.
Every robot essentially 1107 01:01:21,080 --> 01:01:24,480 Speaker 1: needs a human co-pilot, even in cases where robots 1108 01:01:25,320 --> 01:01:29,160 Speaker 1: far outperform the humans, just because the human co-pilot has 1109 01:01:29,200 --> 01:01:33,520 Speaker 1: to be there to accept responsibility for failure. Oh yeah. 1110 01:01:33,640 --> 01:01:35,560 Speaker 1: In the first episode, we talked about the idea of 1111 01:01:35,560 --> 01:01:39,680 Speaker 1: there being, like, a punchable, um, plate on a robot, 1112 01:01:40,160 --> 01:01:42,240 Speaker 1: um, for when we feel like we 1113 01:01:42,280 --> 01:01:44,560 Speaker 1: need to punish it. It's like that, except instead of 1114 01:01:44,600 --> 01:01:46,760 Speaker 1: a specialized plate on the robot itself, it's just a 1115 01:01:46,800 --> 01:01:51,480 Speaker 1: person that the robot hired. A whipping boy. Oh, this 1116 01:01:51,560 --> 01:01:56,360 Speaker 1: is so horrible and so perversely plausible. I can 1117 01:01:56,480 --> 01:01:59,400 Speaker 1: kind of see it. It's like, in my lifetime, 1118 01:01:59,440 --> 01:02:04,400 Speaker 1: I can see it. Well, thanks for the nightmares, Rob. Well, no, 1119 01:02:04,480 --> 01:02:06,800 Speaker 1: I think we've had plenty of potential nightmares discussed here. 1120 01:02:06,800 --> 01:02:08,600 Speaker 1: But I mean, we shouldn't just focus on the nightmares. 1121 01:02:08,640 --> 01:02:12,400 Speaker 1: I mean, again, to be clear, um, you know, so 1122 01:02:12,480 --> 01:02:14,840 Speaker 1: the idea of self driving cars, the idea of robot 1123 01:02:14,840 --> 01:02:18,760 Speaker 1: assisted surgery, I mean, we're ultimately talking about the aim 1124 01:02:19,080 --> 01:02:23,920 Speaker 1: of creating safer practices, of saving human lives. So, uh, 1125 01:02:23,960 --> 01:02:26,919 Speaker 1: you know, it's not all nightmares and, um, 1126 01:02:27,120 --> 01:02:31,160 Speaker 1: robot hellscapes. But we have to be realistic about 1127 01:02:31,600 --> 01:02:37,200 Speaker 1: the very complex, um, scenarios and tasks that we're building 1128 01:02:37,240 --> 01:02:40,680 Speaker 1: things around and unleashing machine intelligence upon. Yeah. I mean, 1129 01:02:40,720 --> 01:02:43,440 Speaker 1: I made this clear in, uh, the previous episode. 1130 01:02:43,560 --> 01:02:46,280 Speaker 1: I'm not, like, down on things like autonomous vehicles. I mean, 1131 01:02:46,360 --> 01:02:50,200 Speaker 1: ultimately I think autonomous vehicles are probably a good thing. Um, 1132 01:02:50,680 --> 01:02:53,200 Speaker 1: but I do think it's really important for people to 1133 01:02:54,000 --> 01:03:00,280 Speaker 1: start paying attention to these, uh, these unbelievably complicated philosophical, moral, 1134 01:03:00,360 --> 01:03:05,200 Speaker 1: and legal questions that will inevitably arise as more independent 1135 01:03:05,240 --> 01:03:09,080 Speaker 1: and intelligent agents infiltrate our world. All right. Well, 1136 01:03:09,120 --> 01:03:10,880 Speaker 1: on that note, we're gonna go ahead and close it out. 1137 01:03:11,800 --> 01:03:14,000 Speaker 1: But if you would like to listen to other episodes 1138 01:03:14,080 --> 01:03:16,440 Speaker 1: of Stuff to Blow Your Mind, you know where to 1139 01:03:16,520 --> 01:03:19,360 Speaker 1: find them. You can find our core episodes on Tuesdays 1140 01:03:19,400 --> 01:03:22,480 Speaker 1: and Thursdays in the Stuff to Blow Your Mind podcast feed.
1141 01:03:23,200 --> 01:03:26,240 Speaker 1: On Mondays, we tend to do listener mail. On Wednesdays, 1142 01:03:26,640 --> 01:03:29,640 Speaker 1: we tend to bust out an artifact shorty episode, and 1143 01:03:29,720 --> 01:03:31,720 Speaker 1: on Fridays we do a little Weird House Cinema, where 1144 01:03:31,720 --> 01:03:33,400 Speaker 1: we don't want to talk about science so much as 1145 01:03:33,440 --> 01:03:36,440 Speaker 1: we just talk about one weird movie or another, and 1146 01:03:36,480 --> 01:03:39,200 Speaker 1: then we have a little rerun on the weekend. Huge thanks, 1147 01:03:39,240 --> 01:03:42,760 Speaker 1: as always, to our excellent audio producer Seth Nicholas Johnson. 1148 01:03:43,120 --> 01:03:44,760 Speaker 1: If you would like to get in touch with us 1149 01:03:44,800 --> 01:03:47,360 Speaker 1: with feedback on this episode or any other, to suggest 1150 01:03:47,440 --> 01:03:49,400 Speaker 1: a topic for the future, or just to say hello, 1151 01:03:49,880 --> 01:03:52,520 Speaker 1: you can email us at contact at stuff to Blow 1152 01:03:52,600 --> 01:04:02,480 Speaker 1: your Mind dot com. Stuff to Blow Your Mind is 1153 01:04:02,520 --> 01:04:05,200 Speaker 1: a production of iHeartRadio. For more podcasts from 1154 01:04:05,280 --> 01:04:08,200 Speaker 1: iHeartRadio, visit the iHeartRadio app, Apple Podcasts, 1155 01:04:08,280 --> 01:04:10,040 Speaker 1: or wherever you listen to your favorite shows.