Hey, welcome to Stuff to Blow Your Mind. My name is Robert Lamb and I'm Joe McCormick, and it's Saturday, time for an episode from the Vault. This is the episode Punish the Machine, Part Two. It's all about machine moral agency, legal agency and culpability, or what we make of this emergent world when machines are acting in an increasingly autonomous way. Let us delay no further.

Welcome to Stuff to Blow Your Mind, a production of iHeartRadio.

Hey, welcome to Stuff to Blow Your Mind. My name is Robert Lamb and I'm Joe McCormick, and we're back for part two of our talk about punishing the robot. We are back here to tell the robot he's been very bad.

Now, in the last episode, we talked about the idea of legal agency and culpability for robots and other intelligent machines. For a quick refresher on some of the stuff we went over: we talked about the idea that as robots and AI become more sophisticated, and thus in some ways or in some cases more independent and unpredictable, and as they integrate more and more into the wild of human society, there are just inevitably going to be situations where AI and robots do wrong and cause harm to people. Now, of course, when a human does wrong and causes harm to another human, we have a legal system through which the victim can seek various kinds of remedies. And we talked in the last episode about the idea of remedies, the simple version being that a remedy is what you get when you win in court. That can be things like monetary awards (you know, I ran into your car with my car, so I pay you money), or it can be punishment, or it can be court orders commanding or restricting the behavior of the perpetrator.
And so we discussed the idea that as robots become more unpredictable and more like human agents, more sort of independent, and more integrated into society, it might make sense to have some kind of system of legal remedies for when robots cause harm or commit crimes. But also, as we talked about last time, this is much easier said than done. It's going to present tons of new problems, because our legal system is in many ways not equipped to deal with defendants and situations of this kind, and this may cause us to ask questions about how we already think about culpability and blame and punishment in the legal system. And so in the last episode we talked about one big legal paper that we're going to continue to explore in this one. It's by Mark A. Lemley and Bryan Casey in the University of Chicago Law Review from 2019, called Remedies for Robots. So I'll be referring back to that one a good bit throughout this episode, too.

Now, I think when we left off last time, we had mainly been talking about trying to categorize the different sorts of harm that could be done by robots or AI, by intelligent machines. We talked about some things like unavoidable harms and deliberate least-cost harms. These are going to be unavoidable parts of having something like autonomous vehicles, right? If you have cars driving around on the road, even if they're really, really good at minimizing harm, there are still going to be some cases where there's just no way harm could be avoided, because they're cars. Another would be defect-driven harms. That's pretty straightforward; that's just where the machine malfunctions or breaks in some way. Another would be misuse harms. That's where the machine is used in a way that is harmful, and in those cases it can usually be pretty clear who's at fault: it's the person who misused the machine. But then there are a couple of other categories where things get really tricky, which are unforeseen harms and systemic harms.
And in the case of unforeseen harms, one example we talked about in the last episode was the drone that invented a wormhole. You know, people were trying to train a drone, an autonomous flying vehicle, to move towards the center of a circular area. But the drone started doing a thing where, when it got sufficiently far away from the center of the circle, it would just fly out of the circle altogether. It seems kind of weird at first, like, okay, why would it be doing that? But then what the researchers realized was that whenever it did that, they would turn it off and then move it back into the circle to start it over again. So from the machine learning point of view of the drone itself, it had discovered something like a space-time warp. It was doing this thing that made no sense from a human perspective, but it was actually following its programming exactly.

Now for an example, sort of a thought experiment, of how this could become lethal. There's an example that is stuck in my head. I can't recall where I heard this or who presented the idea. I kind of had it in my head that it came from Max Tegmark, but I did some searching around in my notes and in one of his books, and I couldn't find it. Perhaps you can help refresh me. Maybe you remember this one: the idea of the AI that is deciding how much oxygen needs to be in a train station at any given time.

Oh, this sounds familiar. I don't know the answer, but a lot of these thought experiments tend to trace back to Nick Bostrom, so I wouldn't be surprised if it's in there. But go ahead.

Right, okay. As I remember it,
the way it works is you have this AI that's in charge of making sure there's enough oxygen in the train station for when humans are there, and it seems to have learned this fine. When humans are there to get on the train, everything goes well, everybody's breathing fine. And then one day the train arrives a little late, or it leaves a little late, whatever it is, and there's not enough oxygen and people die. And then it turns out that the AI was not basing its decision on when people were actually there; it was basing it on a clock in the train station. And I may be mangling this horribly, but it's another way of illustrating the point that machine learning could end up latching onto shortcuts or heuristic devices that would just seem completely insane to a quote unquote logical human mind, but might make sense within the framework of the AI. Right, they worked in the training cases, and because it doesn't have common sense, it doesn't understand why they wouldn't work in another case.

There was actually a real-world case that we talked about in part one, where there was an attempt to do some machine learning on what risk factors would make a pneumonia case admitted to the hospital have a higher or lower chance of survival. And one thing the machine learning algorithm determined was that you were better off when you got pneumonia if you had asthma. But that isn't true. The reason for the correlation is that if you have asthma, you're a higher-risk case for pneumonia, so you got more intensive treatment in the hospital and thus had better outcomes in the data set that the algorithm was trained on. But the algorithm came up with this completely backwards conclusion, a failure to understand the difference between correlation and causation.
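To make that failure mode a little more concrete, here is a minimal sketch of how a model trained on confounded data can learn that asthma looks "protective." This is my own illustration, not anything from the episode or from Lemley and Casey; the variable names, probabilities, and the choice of a scikit-learn logistic regression are all hypothetical.

```python
# Sketch of a spurious correlation: a hidden confounder (intensive treatment)
# makes asthma look protective to a model that never sees the treatment.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
asthma = rng.random(n) < 0.15                      # 15% of patients have asthma
# In the historical data, asthma patients were flagged high-risk and got ICU care.
intensive_care = asthma | (rng.random(n) < 0.10)
# "True" outcome model: asthma itself raises risk, intensive care lowers it a lot.
p_death = 0.20 + 0.10 * asthma - 0.18 * intensive_care
died = rng.random(n) < p_death

# The naive model only sees the asthma flag, not the treatment the patient got.
model = LogisticRegression().fit(asthma.reshape(-1, 1), died)
print("learned asthma coefficient:", model.coef_[0][0])  # negative: "asthma is protective"
```

In this toy data set, asthma patients really do die less often, but only because they all received intensive care; a model that can't see the treatment learns the backwards rule.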
In the training data, it made it look like asthma was a superpower. Now, of course, if you take that kind of shortsighted algorithm and you make it god, then it will say, oh, I've just got to give everybody asthma, so we will have a better chance of surviving.

The point is, it can be hard to imagine in advance all the cases like this that would arise when you've got a world full of robots and AIs running around in it that are trained on machine learning. Basically, there are just a number of lesser, sort of soft Skynets that you couldn't possibly predict. You know, the Skynet scenario is sort of: robots decide they want to end all war, and humans cause all war, therefore end all humans, that sort of thing. But there are so many different lesser versions of it. It could also be destructive or annoying, or just get in the way of effectively using AI for whatever we turn to it for.

Yeah. To come to an example that is definitely used by Nick Bostrom, the paperclip maximizer: a robot that is designed to make as many paperclips as it can, and it looks at your body and says, hey, that's full of matter, those could be paperclips. Yeah, that would be quite an apocalypse.

Now, before we get back into the main subject and talking about this Lemley and Casey paper, with robots as offenders, there was one interesting thing I came across. It was just a brief footnote in their paper, but it's about the question of what happens if the robot is the plaintiff in a case. They said it is possible to imagine a robot as a plaintiff in a court case, because of course robots can be injured by humans. And they cite a bunch of examples of news stories of humans just intentionally torturing and being cruel to robots.
They cite one news article from 2018 about people just aggressively kicking food delivery robots, and then they share another story. I actually remember this one from the news, about a Silicon Valley security robot that was just violently attacked by a drunk man in a parking garage.

I don't remember this one, but I can imagine how it went down.

Yeah, exactly. So they say that in a case like this, it's actually pretty straightforward as a property crime. I mean, unless we start getting into a scenario where we're really seeing robots as human beings with their own consciousness and interests and all that, the attacks against robots are really probably just property crimes against the owner of the robot. It's like attacking somebody's computer or their car or something.

Potentially. But we'll get into some stuff a little later that I think shows some other directions that could go in as well, especially considering the awesome possibility of robots owning themselves.

Yeah, and that's obviously a very different world, where you get into the idea of whether a robot actually has rights, which is sort of beyond the horizon of what's explored in this paper itself. This paper is more focused on the kinds of robots that you can practically imagine within the next few decades. And in those cases, it seems like all of the really thorny stuff would probably be in robots as offenders rather than robots as victims of crimes.

Right. But to your point, the initial crimes against robots that we can imagine would be stuff like drunk people pushing them over, things like that.

Yeah, or just a human driver in a human-operated vehicle hitting an autonomous vehicle.

Right. As I mentioned in the last episode, this is a very big paper, and we're not going to have time to get into every avenue they go down in it.
But I just wanted to go through and mention some ideas they discuss that stuck out to me as interesting. And one thing that really fascinated me was that the idea of robots as possible agents in a legal context brings to the fore a philosophical argument that has existed in the realm of substantive law for a while. I'll try not to be too dry about this, but I think it actually does get to some really interesting philosophical territory. This is the distinction between what Lemley and Casey call the normative versus economic interpretations of substantive law. Again, it's a complicated philosophical and legal distinction; I'll do my best to sum it up simply.

So the normative perspective on substantive law says that the law is a prohibition against doing something bad. If something is against the law, that means you shouldn't do it, and we would stop the offender from doing the thing that's against the law if we could. But since we usually can't stop them from doing it, often because it already happened, the remedy that exists, maybe paying damages to the victim or something like that, is an attempt to right the wrong, in other words, to do the next best thing to undoing the harm in the first place.

So basically getting into the idea of negative reinforcement. Somebody or something did something bad; we couldn't stop them from doing it, but we can try to give them a stimulus that would make them not do it again, be that economic or otherwise.

Well, yes, but I think what you're saying could actually apply to both of these conditions I'm going to talk about. So I think maybe the distinction comes in over whether there is such a thing as an inherent prohibition.
So the thing that's operative in the normative view is that the thing that's against the law is a thing that should not be done, and thus the remedy is an attempt to fix the fact that it was done in the first place. The economic view is the alternative here, and the way they sum that up is: there is no such thing as forbidden conduct. Rather, a substantive law tells you what the cost of the conduct is. Does that distinction make any more sense?

Yes. So basically, the first version is that doing crimes is bad; the second one is that doing crimes is expensive. The first is that crime should not be done, and the second is that crimes can be done if you can afford it.

Yes, exactly. In Lemley and Casey's words, quote, damages on this view, the economic view, are simply a cost of doing business, one we want defendants to internalize, but not necessarily to avoid the conduct altogether. Now, you might look at this and think, oh, okay, so the economic view is just a psychopathic way of looking at things. And in a certain sense you could look at it that way: if you're calculating the economic cost of murder, then yeah, okay, that's evil, that's psychopathic. But there are actually all kinds of cases where, when we think about it, the economic view makes more sense of the way we actually behave. And they use the example of stopping at a traffic light. So to read from Lemley and Casey here, quote: Under the normative view, a red light stands as a prohibition against traveling through an intersection, with the remedy being a ticket or a fine against those who are caught breaking the prohibition. We would stop you from running the red light if we could, but because policing every intersection in the country would be impossible, we instead punish those we do catch in hopes of deterring others.
So in this first case, running a red light is bad, you should not do it, and the cost of doing it, the punishment you face for doing it, is an attempt to right that wrong. But then they say: under the economic view, however, an absolute prohibition against running red lights was never the intention. Rather, the red light merely signals a consequence for those who do in fact choose to travel through the intersection. As in the first instance, the remedy available is a fine or a ticket, but under this view, the choice of whether or not to violate the law depends on the willingness of the lawbreaker to accept the penalty.

So in the case of a red light, that might make more sense if you're sitting at a red light and you look around and there are no other cars anywhere near you, and you've got a clear view of the entire intersection and the red light's not changing, and you think maybe it's broken, and you're just like, okay, I'm just going to drive through.

Well, if you reach that point where you think it's broken, I feel like that's a slightly different case. But if you're just like, nobody's watching, I'm going to do it, and the light isn't taking an absurd amount of time or longer than you're accustomed to...

Yeah, I don't know how the belief that the light is broken would factor into that. But one thing that I think is clear is that in many cases there are people, especially companies and corporations, that operate on the economic view, and it's something that I think people generally look at and say, okay, that's kind of grimy. Like a company that says, okay, there is a fine for not obeying this environmental regulation, and we're going to make more money by violating the regulation than we would pay in the fine, so we're just going to pay it. Yeah, you hear about that.
With factories, for instance, there will be some situation where the fine is not significant enough to really be a deterrent. For them, breaking the mandate and occasionally being called on it is just the cost of doing business.

Right. So there's a funny way to describe this point of view that the authors bring up here, which they call the bad man theory. This comes from Justice Oliver Wendell Holmes, who was a US Supreme Court justice, talking about the economic view of substantive law. Holmes wrote, quote, if you want to know the law and nothing else, you must look at it as a bad man who cares only for the material consequences which such knowledge enables him to predict, not as a good one who finds his reasons for conduct, whether inside the law or outside of it, in the vaguer sanctions of conscience. And so they write, the measure of the substantive law, in other words, is not to be mixed up with moral qualms, but is simply coextensive with its remedy, no more and no less. It just is what the remedy is. It's the cost of doing business.

Now, of course, there are plenty of legal scholars and philosophers who would dispute how Holmes thinks of this. But the interesting question is: how does this apply to robots? If you're programming a robot to behave well, you actually don't get to just jump over this distinction the way humans do when they think about their own moral conduct. Like, when you're trying to figure out a good way to be a good person, you're not sitting around thinking, well, am I going by the normative view of morality or the economic view of morality? You just act in whatever way seems to you the right way to act.
But if you're trying to program a robot to behave well, you have to make a choice whether to embrace the normative view or the economic view. Does a robot view a red light, say, as a firm prohibition against forward movement, where driving through a red light is just a bad thing and you shouldn't do it? Or does it just view it as a substantial discouragement against forward motion that has a certain cost, and if you were to overcome that cost, then you drive on through?

Yeah, this is a great question, because I feel like with humans we're probably mixing and matching all the time, perhaps even for the same law-breaking behavior. We may do both on one thing, and we do one on another thing, and then the other one on still a third thing. But with the robot, it seems like you're going to deal more or less with an absolute direction: either the law is to be obeyed, or the law is to be taken into your cost analysis.

Well, yeah. So they talk about how the normative view is actually very much like Isaac Asimov's Laws of Robotics, inviolable rules, and the Asimov stories do a very good job of demonstrating why inviolable rules are really difficult to implement in the real world. Asimov explored this brilliantly. And along these lines, the authors argue that there are major reasons to think it will just not make any practical sense to program robots with a normative view of legal remedies; that probably, when people make AIs and robots that have to take these kinds of things into account, they're almost definitely going to program them according to the economic view. They say that, quote, encoding the rule don't run a red light as an absolute prohibition, for example, might sometimes conflict with the more compelling goal of not letting your driver die by being hit by an oncoming truck.
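Here is a toy contrast between those two ways of encoding the rule, my own illustration rather than anything from Lemley and Casey; the function names, probabilities, and dollar figures are all hypothetical.

```python
# Normative view: the rule is inviolable. Economic view: the rule is a price.

def normative_decision(light_is_red: bool) -> str:
    # An absolute prohibition: never proceed on red, whatever else is at stake.
    return "stop" if light_is_red else "go"

def economic_decision(light_is_red: bool,
                      expected_fine: float,
                      p_caught: float,
                      cost_of_waiting: float) -> str:
    # Running the light is just conduct with an expected cost attached.
    expected_penalty = p_caught * expected_fine if light_is_red else 0.0
    return "go" if cost_of_waiting > expected_penalty else "stop"

# Empty intersection at 3 a.m.: the expected penalty is tiny.
print(normative_decision(True))                                    # stop
print(economic_decision(True, expected_fine=200.0, p_caught=0.01,
                        cost_of_waiting=5.0))                      # go

# Oncoming truck: the "cost of waiting" is catastrophic, so the economic
# agent runs the light, while the normative agent still refuses.
print(economic_decision(True, expected_fine=200.0, p_caught=1.0,
                        cost_of_waiting=1_000_000.0))              # go
```

The last case is the point of the quote above: a hard-coded prohibition can't trade off against a more compelling goal, while a cost-based rule can.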
So the robots are probably going to have to be economically motivated to an extent like this. But then they talk about how this gets very complicated, because robots will calculate the risks of reward and punishment with different biases than humans, or maybe even without the biases that humans have that the legal system relies on in order to keep us obedient. Humans are usually highly motivated by certain types of punishments; humans really don't want to spend a month in jail, you know, most of the time. And you can't just rely on a robot to be incredibly motivated by something like this, first of all because it wouldn't even make sense to send the robot itself to jail. So you need some kind of organized system for making a robot understand the cost of bad behavior, in a systematized way that makes sense to the robot as a demotivating incentive.

Yeah. Like, shame comes to mind as another aspect of all this. How do you shame a robot? You'd have to program a robot to feel shame at being made to give a public apology or something.

Yeah. So they argue that it really only makes sense for robots to look at legal remedies in an economic way, and then they write, quote, it thus appears that Justice Holmes' archetypal bad man will finally be brought to corporeal form, though ironically not as a man at all. And if Justice Holmes' metaphorical subject is truly morally impoverished and analytically deficient, as some accuse, it will have significant ramifications for robots.

But yeah, thinking about these incentives, it gets more and more difficult the more you try to imagine the particulars. Humans have pre-existing motivations that can just be assumed: in most cases, humans don't want to pay out money, humans don't want to go to jail. How would these costs be instantiated as motivating for robots?
You would basically have to force some humans, I guess meaning the programmers or creators of the robots, to instill those costs as motivating on the robot. But that's not always going to be easy to do. Because, okay, imagine a robot does violate one of these norms and it causes harm to somebody, and as a result the court says, okay, someone has been harmed by this negligent or failed autonomous vehicle, and now there must be a payout. Who actually pays? Where is the pain of the punishment located? A bunch of complications to this problem arise; it gets way more complicated than just the programmer or the owner, especially because in this age of artificial intelligence there is a kind of distributed responsibility across many parties. The authors write, quote, robots are composed of many complex components, learning from their interactions with thousands, millions, or even billions of data points, and they are often designed, operated, leased, or owned by different companies. Which party is to internalize these costs? The one that designed the robot or AI in the first place, and that might even be multiple companies? The one that collected and curated the data set used to train its algorithm in unpredictable ways? The users who bought the robot and deployed it in the field?

And then it gets even more complicated than that, because the authors start going into tons of ways we can predict now that it's unlikely these costs will be internalized in commercially produced robots in ways that are socially optimal. Because if you're asking a corporation that makes robots to take into account some type of economic disincentive against the robot behaving badly, other economic incentives are going to be competing with those disincentives. So the authors write: for instance, if I make it clear that my car will kill its driver rather than run over a pedestrian if the issue arises, people might not buy my car.
The economic costs of lost sales may swamp the costs of liability from a contrary choice. In the other direction, car companies could run into PR problems if their cars run over kids. Put simply, it is aggregate profits, not just profits related to legal sanctions, that will drive robot decision making.

And then there are still a million other things to consider. One thing they talk about is the idea that even within corporations that produce robots and AI, the parts of those corporations don't all understand what the other parts are doing. They say workers within these corporations are likely to be siloed in ways that interfere with effective cost internalization. Quote: machine learning is a specialized programming skill, and programmers aren't economists. And then they talk about why, in many cases, it's going to be really difficult to answer the question of why an AI did what it did. Can you even determine that the AI was acting in a way that wasn't reasonable? How could you ever fundamentally examine the state of mind of the AI well enough to prove that the decision it made wasn't the most reasonable one from its own perspective?

But then another thing they raise, I think, is a really interesting point, and this gets into one of the things we talked about in the last episode, where thinking about culpability for AI and robots is actually going to force us to re-examine our ideas of culpability and blame when it comes to human decision making. They talk about the idea that, quote, the sheer rationality of robot decision making may itself provoke the ire of humans. Now, how would that be? It seems like we would say, okay, well, we want robots to be as rational as possible; we don't want them to be irrational. But it is often only by carelessly putting costs and risks out of mind that we are able to go about our lives.
For example, people drive cars, and no matter how safe a driver you are, driving a car comes with the unavoidable risk that you will harm someone. They write, quote: Any economist will tell you that the optimal number of deaths from many socially beneficial activities is more than zero. Were it otherwise, our cars would never go more than five miles per hour. Indeed, we would rarely leave our homes at all. Even today, we deal with those costs in remedies law unevenly. The effective statistical price of a human life in court decisions is all over the map. The calculation is generally done ad hoc and after the fact. That allows us to avoid explicitly discussing politically fraught concepts that can lead to accusations of trading lives for cash. And it may work acceptably for humans because we have instinctive reactions against injuring others that make deterrence less important. But in many instances robots will need to quantify the value we put on a life if they are to modify their behavior at all. Accordingly, the companies that make robots will have to figure out how much they value human life, and they will have to write it down in the algorithm for all to see, at least after extensive discovery. That's referring to what the courts will find out by looking into how these algorithms are created.

And I think this is a fantastic point. In order for a robot to make ethical decisions about living in the real world, it's going to have to do things like put a price tag on what kind of risk to human life is acceptable in order for it to do anything. And that seems monstrous to us. It does not seem reasonable for any percentage chance of harming a human, of killing somebody, to be an acceptable risk of your day-to-day activities.
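As a deliberately uncomfortable sketch of what "writing it down in the algorithm" could look like, here is a tiny route-choice calculation. The value-of-statistical-life figure, the time value, and the risk numbers are hypothetical illustrations, not figures from the paper.

```python
# Pricing risk to life so that a route can be chosen at all.
VALUE_OF_STATISTICAL_LIFE = 10_000_000.0  # dollars; an assumed, hypothetical figure
VALUE_OF_AN_HOUR = 30.0                   # dollars of passenger time, also assumed

def route_cost(fatality_risk: float, travel_hours: float) -> float:
    # Expected cost = priced-in risk to life + the passengers' time.
    return fatality_risk * VALUE_OF_STATISTICAL_LIFE + travel_hours * VALUE_OF_AN_HOUR

slow_route = route_cost(fatality_risk=1e-7, travel_hours=1.0)   # 31.0
fast_route = route_cost(fatality_risk=5e-7, travel_hours=0.5)   # 20.0
print("take fast route" if fast_route < slow_route else "take slow route")
```

The system accepts a slightly higher chance of killing someone in exchange for saving half an hour, which is exactly the kind of trade-off the quote says humans make constantly but can't bear to see written out.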
And yet it actually already is that way. It always is that way whenever we do anything, but we just have to put it out of mind; we can't think about it.

Yeah, I mean, what's the alternative, right? Programming a monstrous self-delusion into the self-driving car where it says, I will not get into a wreck on my next route, because that cannot happen to me, it has never happened to me before, it will never happen? These sorts of ridiculous, not even statements that we make in our minds, just kind of assumptions: that's the kind of thing that happens to other drivers, and it's not going to happen to me, even though we've all seen the statistics before.

Yeah, exactly. I think this is a really good point. So in this case, the robot wouldn't even necessarily be doing something evil. In fact, you could argue there could be cases where the robot is behaving in a way that is far safer, far less risky, than the average human doing the same thing. But the very fact of its clearly coded rationality reveals something that is already true about human societies, which we can't really bear to look at or think about.

So another thing the authors explore that I think is really interesting is the idea of directly punishing the robot itself, and how that possibility might make us rethink the idea of punishing humans. Now, of course, whether or not it actually serves as any kind of deterrent, whether or not it actually rationally reduces harm, it may just be unavoidable that humans sometimes feel they want to inflict direct harm on a perpetrator as punishment for the crime they're alleged to have committed, and that may well translate to robots themselves. I mean, you can imagine; we've all, I think, raged against an inanimate object before.
We wanted to kick a printer or something like that. And we talked in the last episode about some of the psychological research on how people mindlessly apply social rules to robots. The authors here write: certainly people punch or smash inanimate objects all the time. Juries might similarly want to punish a robot, not to create optimal cost internalization, but because it makes the jury and the victim feel better. The authors write later, towards their conclusion, about the idea of directly punishing robots, that, quote, this seems socially wasteful. Punishing robots not to make them behave better, but just to punish them, is kind of like kicking a puppy that can't understand why it's being hurt. The same might be true of punishing people to make us feel better, but with robots, the punishment is stripped of any pretense that it is sending a message to make the robot understand the wrongness of its actions.

Now, I'm pretty sympathetic personally to the point of view that a lot of the punishment that happens in the world is not actually a rational way to reduce harm; if it serves any purpose, it is the purpose of the emotional satisfaction of people who feel they've been wronged, or of people who want to demonstrate moral opprobrium toward the offender. But I understand that in some cases you could imagine that punishing somebody serves as an object example that deters behavior in the future. And to the extent that that is ever the case, if it is the case, could punishing a robot serve that role? Could actually punching a robot, or somehow otherwise punishing a robot, serve as a kind of object example that deters behavior in humans, say, the humans who will program the robots of the future? It's a strange kind of symbolism to imagine.
Yeah, I mean, when you 579 00:32:13,840 --> 00:32:17,640 Speaker 1: start thinking about, you know, the ways to punish robots, 580 00:32:17,640 --> 00:32:19,720 Speaker 1: I mean you think of some of the more ridiculous 581 00:32:19,720 --> 00:32:22,680 Speaker 1: examples that have been brought up in sci fi and 582 00:32:22,720 --> 00:32:27,080 Speaker 1: sci fi comedy, like robot hells and so forth. Um, 583 00:32:27,120 --> 00:32:30,400 Speaker 1: And or just the idea of even destroying or deleting 584 00:32:30,880 --> 00:32:35,320 Speaker 1: a robot that is faulty or misbehaving. Um. But but maybe, 585 00:32:35,440 --> 00:32:37,440 Speaker 1: you know, maybe it ends up being something more like 586 00:32:37,480 --> 00:32:41,040 Speaker 1: I think of game systems right where say, if you 587 00:32:41,520 --> 00:32:45,680 Speaker 1: accumulate too many of uh say madness points, your I 588 00:32:45,680 --> 00:32:47,880 Speaker 1: don't know, your movement is cut in half, that sort 589 00:32:47,880 --> 00:32:50,880 Speaker 1: of thing, and then that has a ramification on how 590 00:32:50,960 --> 00:32:52,880 Speaker 1: you play the game and to what extent you can 591 00:32:52,880 --> 00:32:56,000 Speaker 1: play the game well. And therefore, like playing into the 592 00:32:56,040 --> 00:32:59,680 Speaker 1: economic model, you know, it could it could have sort 593 00:32:59,720 --> 00:33:02,960 Speaker 1: of artificially constructed but very real consequences on how 594 00:33:03,040 --> 00:33:06,720 Speaker 1: well a system could behave, you know. But then again 595 00:33:06,880 --> 00:33:09,640 Speaker 1: you can imagine ways that an AI might find ways 596 00:33:09,680 --> 00:33:12,440 Speaker 1: to to circumvent that and say, well, if I play 597 00:33:12,520 --> 00:33:15,160 Speaker 1: the game a certain way where I don't need to 598 00:33:15,560 --> 00:33:18,120 Speaker 1: move at normal speed, I can just move at half 599 00:33:18,160 --> 00:33:21,280 Speaker 1: speed but have the benefit of getting to break these rules. 600 00:33:21,800 --> 00:33:23,560 Speaker 1: Then who knows, you know, it just I feel like 601 00:33:24,080 --> 00:33:28,720 Speaker 1: it seems an inescapable maze. Yeah, well that's that's 602 00:33:28,720 --> 00:33:32,240 Speaker 1: interesting because that is edging toward another thing that the 603 00:33:32,240 --> 00:33:34,440 Speaker 1: authors actually talked about here, which is the idea of 604 00:33:34,480 --> 00:33:38,640 Speaker 1: a robot death penalty. Uh. And this is funny because 605 00:33:38,760 --> 00:33:42,440 Speaker 1: I again because personally, you know, I see a lot 606 00:33:42,480 --> 00:33:45,520 Speaker 1: of flaws in in applying a death penalty to humans. 607 00:33:45,600 --> 00:33:49,520 Speaker 1: I think that is a very flawed, uh judicial remedy. 608 00:33:49,640 --> 00:33:53,800 Speaker 1: But I can understand a death penalty for robots, Like 609 00:33:54,160 --> 00:33:57,600 Speaker 1: you know, robots don't have the same rights as human defendants. 610 00:33:57,840 --> 00:34:01,480 Speaker 1: If a robot is malfunctioning or behaving in a way 611 00:34:01,520 --> 00:34:05,120 Speaker 1: that is so dangerous as to suggest it is likely 612 00:34:05,160 --> 00:34:08,840 Speaker 1: in the future to continue to endanger human lives to 613 00:34:08,920 --> 00:34:13,000 Speaker 1: an unacceptable extent, then yeah, it seems to me reasonable 614 00:34:13,040 --> 00:34:16,759 Speaker 1: that you should just turn off that robot permanently.
Okay, 615 00:34:16,800 --> 00:34:19,000 Speaker 1: But but then again, and then it raises the question, well, 616 00:34:19,000 --> 00:34:21,880 Speaker 1: what about what what led us to this malfunction? Is 617 00:34:21,880 --> 00:34:24,560 Speaker 1: there something in the system itself that needs to be 618 00:34:24,600 --> 00:34:28,000 Speaker 1: remedied in order to prevent that from happening again? That's 619 00:34:28,040 --> 00:34:30,920 Speaker 1: a very good point, and the authors bring up exactly 620 00:34:30,960 --> 00:34:34,319 Speaker 1: this concern. Yeah, so they say, well, then again, so 621 00:34:34,360 --> 00:34:36,959 Speaker 1: a robot might not have human rights where you would 622 00:34:36,960 --> 00:34:39,799 Speaker 1: be concerned about the death penalty for the robot's own good, 623 00:34:40,239 --> 00:34:42,840 Speaker 1: but you might be concerned about what you are failing 624 00:34:42,880 --> 00:34:45,360 Speaker 1: to be able to learn from. Allowing the robot to 625 00:34:45,400 --> 00:34:48,800 Speaker 1: continue to operate like that that could help you refine 626 00:34:49,000 --> 00:34:51,239 Speaker 1: AI in the future. Maybe not letting it continue to 627 00:34:51,280 --> 00:34:53,680 Speaker 1: operate in the wild, but I don't know, keeping it 628 00:34:53,920 --> 00:34:56,880 Speaker 1: operative in some sense, because, like, whatever it's doing is 629 00:34:56,920 --> 00:35:00,000 Speaker 1: something we need to understand better. A robot prison 630 00:35:00,000 --> 00:35:04,200 Speaker 1: instead of a robot death penalty. Um. And of course 631 00:35:04,280 --> 00:35:07,640 Speaker 1: the human comparison to be made is equally as 632 00:35:07,640 --> 00:35:11,960 Speaker 1: frustrating, because you end up with scenarios where you'll have, um, 633 00:35:12,080 --> 00:35:15,919 Speaker 1: a society that's very pro death penalty. But then when 634 00:35:15,920 --> 00:35:19,000 Speaker 1: it comes to doing the same sort of backwork and saying, well, 635 00:35:19,040 --> 00:35:21,279 Speaker 1: what led to this case, what were some of the 636 00:35:21,360 --> 00:35:25,719 Speaker 1: systematic problems, uh, cultural problems, societal problems, I don't know, 637 00:35:25,800 --> 00:35:28,040 Speaker 1: you know, well, whatever it is that that led to 638 00:35:28,080 --> 00:35:30,480 Speaker 1: this case that needed to be remedied with death, should 639 00:35:30,480 --> 00:35:33,200 Speaker 1: we correct those problems too? And in some cases the 640 00:35:33,200 --> 00:35:35,040 Speaker 1: answer seems to be, oh, no, we're not doing that. 641 00:35:35,440 --> 00:35:38,400 Speaker 1: We'll just we'll just do the death penalty as is necessary, 642 00:35:38,600 --> 00:35:40,880 Speaker 1: even though it doesn't actually prevent us from reaching this 643 00:35:41,080 --> 00:35:43,719 Speaker 1: this point over and over again. I mean, I feel 644 00:35:43,719 --> 00:35:45,799 Speaker 1: like it's one of the most common features of the 645 00:35:45,840 --> 00:35:49,359 Speaker 1: tough on crime mentality that it is resistant to the 646 00:35:49,440 --> 00:35:53,120 Speaker 1: idea of understanding why, of what led a person to commit 647 00:35:53,120 --> 00:35:56,160 Speaker 1: a crime.
I mean, you've heard I'm trying to think 648 00:35:56,200 --> 00:35:57,879 Speaker 1: of an example of somebody, but I mean you've heard 649 00:35:57,920 --> 00:36:00,640 Speaker 1: the person say, oh, uh, you know, you're just gonna 650 00:36:00,640 --> 00:36:02,880 Speaker 1: give some sob story about what happened when he was 651 00:36:02,920 --> 00:36:05,839 Speaker 1: a child or something like that. Yeah, yeah, yeah, yeah, 652 00:36:05,840 --> 00:36:09,479 Speaker 1: I've definitely encountered that that that counter argument before. Yeah, 653 00:36:09,600 --> 00:36:11,560 Speaker 1: but yeah, I mean, I think we're probably on the 654 00:36:11,600 --> 00:36:14,640 Speaker 1: same page that it really probably is very useful to 655 00:36:14,680 --> 00:36:17,640 Speaker 1: try to understand what are the common underlying conditions that 656 00:36:17,719 --> 00:36:21,640 Speaker 1: you can detect when people do something bad. And of course, 657 00:36:21,680 --> 00:36:23,920 Speaker 1: the same thing would be true of robots, right, And 658 00:36:23,960 --> 00:36:27,520 Speaker 1: it seems like with robots there would potentially be room 659 00:36:27,560 --> 00:36:31,719 Speaker 1: for true rehabilitation with with these things. If not, 660 00:36:32,040 --> 00:36:33,600 Speaker 1: I mean, certainly you could look at it in a 661 00:36:33,640 --> 00:36:37,160 Speaker 1: software hardware scenario where, like, okay, something's wrong 662 00:36:37,200 --> 00:36:39,560 Speaker 1: with the software? Well, delete that, put in some some 663 00:36:39,640 --> 00:36:44,160 Speaker 1: healthy software, um, but keep the hardware. Uh. You know 664 00:36:44,280 --> 00:36:47,480 Speaker 1: that's in a way, that's rehabilitation right there. It's a 665 00:36:47,520 --> 00:36:50,600 Speaker 1: sort of rehabilitation that's not possible with humans. We can't 666 00:36:50,960 --> 00:36:53,439 Speaker 1: wipe somebody's mental state and replace it with a new, 667 00:36:53,520 --> 00:36:56,120 Speaker 1: factory clean mental state. You know, we can't go back 668 00:36:56,160 --> 00:37:01,640 Speaker 1: and edit someone's memories and traumas and what have you. Uh, 669 00:37:01,880 --> 00:37:04,319 Speaker 1: But with machines, it seems like we would have more 670 00:37:04,400 --> 00:37:08,520 Speaker 1: ability to do something of that nature. Yeah. Though, this 671 00:37:08,560 --> 00:37:10,200 Speaker 1: is another thing that comes up, and I mean, of 672 00:37:10,239 --> 00:37:12,799 Speaker 1: course it probably would be useful to try to learn 673 00:37:12,960 --> 00:37:16,480 Speaker 1: from failed AI in order to better perfect AI and robots. 674 00:37:16,520 --> 00:37:19,560 Speaker 1: But on the other hand, in in basically the idea 675 00:37:19,560 --> 00:37:24,080 Speaker 1: of trying to rehabilitate or reprogram robots that do wrong, uh, 676 00:37:24,160 --> 00:37:26,600 Speaker 1: the authors point out that there are probably going to be 677 00:37:26,640 --> 00:37:31,000 Speaker 1: a lot of difficulties in enforcing, say, the equivalent of 678 00:37:31,080 --> 00:37:33,560 Speaker 1: court orders against robots.
So one thing that is a 679 00:37:33,560 --> 00:37:37,520 Speaker 1: common remedy in in legal cases against humans is you 680 00:37:37,600 --> 00:37:40,000 Speaker 1: might get a restraining order, you know, you need to 681 00:37:40,000 --> 00:37:43,279 Speaker 1: stay fifty feet away from somebody right uh, fifty feet 682 00:37:43,320 --> 00:37:45,880 Speaker 1: away from the plaintiff or something like that, or you 683 00:37:45,920 --> 00:37:48,360 Speaker 1: need to not operate a vehicle or you know something. 684 00:37:48,640 --> 00:37:51,000 Speaker 1: There will be cases where it's probably difficult to enforce 685 00:37:51,080 --> 00:37:53,400 Speaker 1: that same kind of thing on a robot, especially on 686 00:37:53,640 --> 00:37:59,319 Speaker 1: robots whose behavior is determined by a complex interaction of 687 00:37:59,480 --> 00:38:02,920 Speaker 1: rules that are not explicitly coded by humans. So, you know, 688 00:38:03,040 --> 00:38:06,040 Speaker 1: most AI these days is not going to be a 689 00:38:06,080 --> 00:38:09,319 Speaker 1: series of if then statements written by humans, but it's 690 00:38:09,320 --> 00:38:12,400 Speaker 1: going to be determined by machine learning, which can to 691 00:38:12,600 --> 00:38:15,480 Speaker 1: some extent be sort of reverse engineered and and somewhat 692 00:38:15,520 --> 00:38:18,160 Speaker 1: understood by humans. But the more complex it is, the 693 00:38:18,200 --> 00:38:20,480 Speaker 1: harder it is to do that. And so there might 694 00:38:20,480 --> 00:38:23,520 Speaker 1: be a lot of cases where you know, you say, okay, 695 00:38:23,560 --> 00:38:26,120 Speaker 1: this robot needs to do X, it needs to obey, 696 00:38:26,200 --> 00:38:28,640 Speaker 1: you know, stay fifty feet away from the plaintiff or something, 697 00:38:29,120 --> 00:38:31,440 Speaker 1: but the person you know, whoever is in charge of 698 00:38:31,480 --> 00:38:33,279 Speaker 1: the robot might say, I don't know how to make 699 00:38:33,320 --> 00:38:37,120 Speaker 1: it do that. Or the possibly more tragic or funnier 700 00:38:37,160 --> 00:38:40,640 Speaker 1: example would be that it discovers the equivalent of the 701 00:38:41,560 --> 00:38:43,480 Speaker 1: drone with the wormhole that we talked about in the 702 00:38:43,560 --> 00:38:46,360 Speaker 1: last episode, right, where the robot is told to 703 00:38:46,440 --> 00:38:48,960 Speaker 1: keep fifty feet of distance between you and the plaintiff. 704 00:38:49,239 --> 00:38:51,760 Speaker 1: The robot obeys the rule by lifting the plaintiff and throwing 705 00:38:51,760 --> 00:38:56,040 Speaker 1: them fifty feet away. So, to read another section from 706 00:38:56,160 --> 00:38:58,840 Speaker 1: Lemley and Casey here, they write: To issue an effective 707 00:38:58,840 --> 00:39:01,279 Speaker 1: injunction that causes a robot to do what we 708 00:39:01,400 --> 00:39:04,680 Speaker 1: want it to do and nothing else requires both extreme 709 00:39:04,840 --> 00:39:09,560 Speaker 1: foresight and extreme precision in drafting it. If injunctions are 710 00:39:09,600 --> 00:39:11,440 Speaker 1: to work at all, courts will have to spend a 711 00:39:11,480 --> 00:39:14,640 Speaker 1: lot more time thinking about exactly what they want to 712 00:39:14,719 --> 00:39:18,879 Speaker 1: happen and all the possible circumstances that could arise.
If 713 00:39:18,920 --> 00:39:22,640 Speaker 1: past experience is any indication, courts are unlikely to do it 714 00:39:22,719 --> 00:39:25,360 Speaker 1: very well. That's not a knock on courts. Rather, the 715 00:39:25,400 --> 00:39:29,359 Speaker 1: problem is twofold: words are notoriously bad at conveying our 716 00:39:29,400 --> 00:39:35,120 Speaker 1: intended meaning, and people are notoriously bad at predicting the future. Coders, 717 00:39:35,160 --> 00:39:37,600 Speaker 1: for their part, aren't known for their deep understanding of 718 00:39:37,640 --> 00:39:40,719 Speaker 1: the law, and so we should expect errors in translation 719 00:39:40,960 --> 00:39:44,239 Speaker 1: even if the injunction is flawlessly written. And if we 720 00:39:44,320 --> 00:39:47,360 Speaker 1: fall into any of these traps, the consequences of drafting 721 00:39:47,360 --> 00:39:52,120 Speaker 1: the injunction incompletely may be quite severe. So I'm imagining you 722 00:39:52,200 --> 00:39:54,960 Speaker 1: issue a court order to a robot to do something 723 00:39:55,080 --> 00:39:57,280 Speaker 1: or not do something. You're kind of in the situation 724 00:39:57,320 --> 00:40:00,960 Speaker 1: of like the monkey's paw wish, you know, right, like, oh, 725 00:40:01,400 --> 00:40:03,520 Speaker 1: you shouldn't have phrased it that way, Now you're in 726 00:40:03,560 --> 00:40:06,799 Speaker 1: for real trouble. Or what's the better example of that 727 00:40:06,840 --> 00:40:09,000 Speaker 1: isn't There's some movie we were just talking about recently 728 00:40:09,040 --> 00:40:11,759 Speaker 1: with like the Bad Genie who when you phrase a 729 00:40:11,800 --> 00:40:14,319 Speaker 1: wish wrong, does you know works it out on you 730 00:40:14,360 --> 00:40:17,480 Speaker 1: in a terrible way. Um, I don't know. We were 731 00:40:17,520 --> 00:40:20,560 Speaker 1: talking about Leprechaun or Wishmaster or something. Does Leprechaun 732 00:40:20,640 --> 00:40:24,359 Speaker 1: grant wishes? I don't remember Leprechaun granting any wishes. What's 733 00:40:24,360 --> 00:40:27,120 Speaker 1: he do? Then? I think the only one I've seen 734 00:40:27,200 --> 00:40:29,560 Speaker 1: is Leprechaun in Space, so I'm a little foggy on 735 00:40:29,600 --> 00:40:33,080 Speaker 1: the the logic. I don't think he grants wishes. He 736 00:40:33,200 --> 00:40:36,920 Speaker 1: just he just like rides around on skateboards and punishes people. 737 00:40:37,040 --> 00:40:39,360 Speaker 1: He just attacks people who try to get his gold 738 00:40:39,360 --> 00:40:42,000 Speaker 1: and stuff. Well, leprechauns in general are known for this 739 00:40:42,080 --> 00:40:45,279 Speaker 1: sort of thing, though, are they. Yeah, Okay, if you're 740 00:40:45,280 --> 00:40:49,160 Speaker 1: not precise enough, they'll work something in there to cheat 741 00:40:49,200 --> 00:40:52,560 Speaker 1: you out of your your your prize. I'm trying to think, so, like, 742 00:40:52,600 --> 00:40:54,960 Speaker 1: don't come within fifty feet of the plaintiff. And so 743 00:40:55,040 --> 00:40:57,440 Speaker 1: the robot I don't know, like it builds a big 744 00:40:57,560 --> 00:41:00,520 Speaker 1: yard stick made out of human feet or something. Yeah, yeah, 745 00:41:00,560 --> 00:41:04,319 Speaker 1: it has fifty foot long arms and lifts them 746 00:41:04,320 --> 00:41:07,319 Speaker 1: into the air.
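As a side illustration of the drafting problem described in that Lemley and Casey passage, here is a minimal, hypothetical sketch in Python. Every name and number in it is invented for the example, not taken from the paper; the point is only that a literally worded constraint says nothing about how the machine satisfies it.

```python
from dataclasses import dataclass

@dataclass
class Position:
    x: float
    y: float

def distance(a: Position, b: Position) -> float:
    # Straight-line distance in feet.
    return ((a.x - b.x) ** 2 + (a.y - b.y) ** 2) ** 0.5

def injunction_satisfied(robot: Position, plaintiff: Position) -> bool:
    # The court order, read literally: "keep at least fifty feet of distance."
    return distance(robot, plaintiff) >= 50.0

# Hypothetical scenario: the robot starts ten feet from the plaintiff.
robot = Position(0.0, 0.0)
plaintiff = Position(10.0, 0.0)
print(injunction_satisfied(robot, plaintiff))  # False -- in violation

# The order never says who has to move. Relocating the plaintiff
# satisfies the same predicate just as well as backing away does.
plaintiff_after_throw = Position(60.0, 0.0)
print(injunction_satisfied(robot, plaintiff_after_throw))  # True -- rule technically obeyed
```

The predicate comes out true either way; nothing in it distinguishes the robot moving itself from the robot moving the plaintiff, which is the monkey's paw problem in miniature.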
Something to that effect, or say the 747 00:41:07,640 --> 00:41:12,400 Speaker 1: say it's uh, for some reason, schools are just too dangerous, 748 00:41:12,800 --> 00:41:16,400 Speaker 1: and this self driving car is not permitted to go 749 00:41:16,600 --> 00:41:19,720 Speaker 1: within um, you know some you know, so many blocks 750 00:41:19,719 --> 00:41:22,040 Speaker 1: of an active school, and so it calls in a 751 00:41:22,080 --> 00:41:25,680 Speaker 1: bomb threat on that school every day in order to 752 00:41:25,840 --> 00:41:27,480 Speaker 1: get the kids out so that it can actually go 753 00:41:27,520 --> 00:41:29,920 Speaker 1: by I don't know, something to that effect. Maybe, Well, 754 00:41:29,960 --> 00:41:33,160 Speaker 1: that reminds me of a funny observation that uh, not 755 00:41:33,200 --> 00:41:37,359 Speaker 1: that this is lawful activity, but uh, a funny observation 756 00:41:37,400 --> 00:41:41,200 Speaker 1: that the authors make towards their conclusion. They bring up 757 00:41:41,280 --> 00:41:48,320 Speaker 1: there are cases of of crashes with autonomous vehicles where 758 00:41:48,520 --> 00:41:52,960 Speaker 1: the autonomous vehicle didn't crash into someone. The autonomous vehicle, 759 00:41:53,480 --> 00:41:57,759 Speaker 1: you could argue, caused a crash, but somebody else ran 760 00:41:58,080 --> 00:42:03,000 Speaker 1: into the autonomous vehicle because the autonomous vehicle did something 761 00:42:03,040 --> 00:42:07,960 Speaker 1: that is legal and presumably safe, but unexpected. And examples 762 00:42:08,000 --> 00:42:11,360 Speaker 1: here would be driving the speed limit in certain areas 763 00:42:11,480 --> 00:42:14,920 Speaker 1: or coming to a complete stop at an intersection. And 764 00:42:14,960 --> 00:42:17,160 Speaker 1: this is another way that the authors are bringing up 765 00:42:17,160 --> 00:42:21,279 Speaker 1: the idea that, uh, examining robot logic is really going 766 00:42:21,320 --> 00:42:23,239 Speaker 1: to have to cause us to re examine the way 767 00:42:23,320 --> 00:42:25,799 Speaker 1: humans interact with the law, because there are cases where 768 00:42:26,080 --> 00:42:30,800 Speaker 1: people cause problems that lead to harm by obeying the rules. 769 00:42:31,200 --> 00:42:33,120 Speaker 1: Oh yeah, Like I think of this all the time, 770 00:42:33,160 --> 00:42:36,200 Speaker 1: and imagine most people do when when driving for any 771 00:42:36,320 --> 00:42:39,840 Speaker 1: long distance. Because you have the speed limit as it's posted, 772 00:42:40,360 --> 00:42:45,840 Speaker 1: you have the speed that the majority of people are driving, um, 773 00:42:46,560 --> 00:42:48,720 Speaker 1: you know, you have that sort of ten mile over zone. 774 00:42:48,760 --> 00:42:51,280 Speaker 1: Then you have the people who are driving exceedingly fast. 775 00:42:51,560 --> 00:42:54,760 Speaker 1: Then you have that minimum speed limit that virtually nobody 776 00:42:54,840 --> 00:42:57,279 Speaker 1: is driving, forty miles per hour on the interstate, but 777 00:42:57,719 --> 00:43:01,320 Speaker 1: it's posted. Uh, and therefore it would be legal to drive 778 00:43:01,480 --> 00:43:04,040 Speaker 1: forty-one miles per hour if you were a robot and 779 00:43:04,480 --> 00:43:07,040 Speaker 1: weren't in a particular hurry, and perhaps that's, you know, 780 00:43:07,120 --> 00:43:10,879 Speaker 1: maximum efficiency for your travel. Uh yeah, there's so many, 781 00:43:11,320 --> 00:43:13,360 Speaker 1: so many things like that to think about.
And I 782 00:43:13,680 --> 00:43:16,800 Speaker 1: think we're probably not even very good at at guessing, 783 00:43:16,960 --> 00:43:20,279 Speaker 1: until we encounter them through robots, how many other situations 784 00:43:20,320 --> 00:43:24,000 Speaker 1: there are like this in the world where where you 785 00:43:24,040 --> 00:43:26,759 Speaker 1: can technically be within the bounds of the law, like 786 00:43:26,800 --> 00:43:29,480 Speaker 1: you're doing what by the book you're supposed to be doing, 787 00:43:29,480 --> 00:43:32,440 Speaker 1: but actually it's really dangerous to be doing it that way. 788 00:43:33,600 --> 00:43:36,040 Speaker 1: So how are you supposed to interrogate a robot's state 789 00:43:36,040 --> 00:43:37,920 Speaker 1: of mind when it comes to stuff like that? 790 00:43:37,920 --> 00:43:40,440 Speaker 1: But so anyway, this leads to the authors talking about 791 00:43:40,560 --> 00:43:43,680 Speaker 1: the difficulties in in robot state of mind evaluation, and 792 00:43:43,840 --> 00:43:47,120 Speaker 1: they say, quote, robots don't seem to be good targets 793 00:43:47,120 --> 00:43:49,800 Speaker 1: for rules based on moral blame or state of mind, 794 00:43:49,880 --> 00:43:52,440 Speaker 1: but they are good at data. So we might consider 795 00:43:52,520 --> 00:43:56,520 Speaker 1: a legal standard that bases liability on how safe the 796 00:43:56,600 --> 00:44:00,399 Speaker 1: robot is compared to others of its type. This would 797 00:44:00,440 --> 00:44:03,879 Speaker 1: be a sort of robotic reasonableness test that could take 798 00:44:03,880 --> 00:44:06,400 Speaker 1: the form of a carrot, such as a safe harbor 799 00:44:06,520 --> 00:44:10,440 Speaker 1: for self driving cars that are significantly safer than average 800 00:44:10,800 --> 00:44:14,480 Speaker 1: or significantly safer than human drivers, or we could use 801 00:44:14,480 --> 00:44:18,320 Speaker 1: a stick, holding robots liable if they lagged behind their peers, 802 00:44:18,680 --> 00:44:21,360 Speaker 1: or even shutting down the worst ten percent of robots 803 00:44:21,360 --> 00:44:24,560 Speaker 1: in a category every year. So I'm not sure if 804 00:44:24,640 --> 00:44:26,720 Speaker 1: I agree with this, but this was an interesting idea 805 00:44:26,760 --> 00:44:30,720 Speaker 1: to me. So, instead of like trying to to interrogate 806 00:44:30,760 --> 00:44:35,760 Speaker 1: the underlying logic of a type of autonomous car, robot 807 00:44:35,920 --> 00:44:39,520 Speaker 1: or whatever, because it's so difficult to try to understand 808 00:44:39,560 --> 00:44:44,080 Speaker 1: the underlying logic, what if you just compare its outcomes 809 00:44:44,120 --> 00:44:48,520 Speaker 1: to other machines of the same genre as it, or 810 00:44:48,600 --> 00:44:51,040 Speaker 1: to humans. I mean, you can imagine this working better 811 00:44:51,080 --> 00:44:53,799 Speaker 1: in the case of something like autonomous cars than you can, 812 00:44:53,920 --> 00:44:56,840 Speaker 1: you know, in other cases where the robot is essentially 813 00:44:56,880 --> 00:44:59,799 Speaker 1: introducing a sort of a new genre of agent into 814 00:44:59,800 --> 00:45:02,880 Speaker 1: the world.
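Purely as an illustration of what a data-driven "robotic reasonableness test" like that could look like, here is a toy sketch in Python. The thresholds, field names, and the idea of measuring safety as incidents per million miles are assumptions made up for the example; Lemley and Casey describe the carrot-and-stick idea, not any particular formula.

```python
def reasonableness_review(fleet_incident_rates, human_baseline_rate,
                          safe_harbor_margin=0.5, cull_fraction=0.10):
    """Toy version of the carrot-and-stick idea: compare each robot model's
    incident rate (say, incidents per million miles) to its peers and to humans.

    fleet_incident_rates: dict mapping model name -> incident rate
    human_baseline_rate: average incident rate for human drivers
    """
    # The stick: identify the worst-performing fraction of the category.
    ranked = sorted(fleet_incident_rates.items(), key=lambda kv: kv[1])
    n_cull = max(1, int(len(ranked) * cull_fraction))
    worst = {name for name, _ in ranked[-n_cull:]}

    results = {}
    for name, rate in fleet_incident_rates.items():
        if rate <= human_baseline_rate * safe_harbor_margin:
            results[name] = "safe harbor"         # the carrot: far safer than humans
        elif name in worst:
            results[name] = "shut down"           # lags far behind its peers
        else:
            results[name] = "ordinary liability"
    return results

# Hypothetical numbers, purely for illustration:
print(reasonableness_review(
    {"CarA": 0.8, "CarB": 2.1, "CarC": 4.5, "CarD": 1.0},
    human_baseline_rate=3.0))
```

The point of the sketch is just that the test operates on outcome statistics rather than on any attempt to read the robot's underlying reasoning.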
But autonomous cars are in many ways going 815 00:45:02,920 --> 00:45:07,240 Speaker 1: to be roughly equivalent in outcomes to human drivers in 816 00:45:07,239 --> 00:45:09,960 Speaker 1: in regular cars, and so would it make more sense 817 00:45:10,000 --> 00:45:15,000 Speaker 1: to try to understand the reasoning behind each autonomous vehicle's 818 00:45:15,080 --> 00:45:18,200 Speaker 1: decision making when it gets into an accident, or uh 819 00:45:18,280 --> 00:45:21,799 Speaker 1: to compare its behavior to I don't know, some kind 820 00:45:21,840 --> 00:45:25,759 Speaker 1: of aggregate or standard of human driving or other autonomous 821 00:45:25,880 --> 00:45:28,680 Speaker 1: vehicles or maybe we just we just tell it. Look, 822 00:45:28,840 --> 00:45:32,160 Speaker 1: most humans drive like selfish bastards, so just go do it, 823 00:45:32,320 --> 00:45:34,600 Speaker 1: do what you gotta do. Well, I mean, I would 824 00:45:34,600 --> 00:45:38,840 Speaker 1: say that there is a downside risk to not taking 825 00:45:38,840 --> 00:45:43,400 Speaker 1: this stuff seriously enough, which is uh, which is something 826 00:45:43,480 --> 00:45:46,920 Speaker 1: like that, I mean something like essentially letting robots go 827 00:45:47,120 --> 00:45:50,719 Speaker 1: hog wild because they can, well, be designed, and I'm not 828 00:45:50,760 --> 00:45:53,640 Speaker 1: saying that anybody would be, you know, maliciously going wahaha 829 00:45:53,760 --> 00:45:56,520 Speaker 1: and rubbing their hands together while they make it the case. 830 00:45:56,600 --> 00:46:00,000 Speaker 1: But you know, you could imagine a situation where there 831 00:46:00,000 --> 00:46:03,200 Speaker 1: are more and more robots entering the world where the uh, 832 00:46:03,239 --> 00:46:08,160 Speaker 1: the corporate responsibility for them is so diffuse that nobody 833 00:46:08,160 --> 00:46:11,920 Speaker 1: can locate the one person who's responsible for the robot's behavior, 834 00:46:12,120 --> 00:46:15,239 Speaker 1: and thus nobody ever really makes the robot, you know, 835 00:46:16,000 --> 00:46:18,799 Speaker 1: behave morally at all. So robots just sort of like 836 00:46:19,120 --> 00:46:23,400 Speaker 1: become a new class of superhuman psychopaths that are immune 837 00:46:23,480 --> 00:46:27,200 Speaker 1: from all consequences. In fact, I would say that is 838 00:46:27,239 --> 00:46:30,440 Speaker 1: a robot apocalypse scenario I've never seen before done in 839 00:46:30,480 --> 00:46:32,840 Speaker 1: a movie. It's always like when the robots are terrible 840 00:46:32,880 --> 00:46:35,640 Speaker 1: to us, it's always like organized, it's always like that, 841 00:46:35,800 --> 00:46:38,800 Speaker 1: you know, they Okay, they decide humans are a cancer 842 00:46:38,920 --> 00:46:40,879 Speaker 1: or something, and so they're going to wipe us out. 843 00:46:41,200 --> 00:46:44,680 Speaker 1: What if instead the problem is just that robots, 844 00:46:44,719 --> 00:46:49,480 Speaker 1: sort of, by corporate negligence and distributed responsibility for their 845 00:46:49,520 --> 00:46:53,040 Speaker 1: behavior among humans, robots just end up being ultimately 846 00:46:53,200 --> 00:46:56,800 Speaker 1: amoral, and we're flooded with these amoral critters running 847 00:46:56,800 --> 00:46:59,640 Speaker 1: around all over the place that are pretty smart and 848 00:46:59,680 --> 00:47:02,719 Speaker 1: really powerful.
I guess there are You do see some 849 00:47:02,800 --> 00:47:07,800 Speaker 1: shades of this in um in some futuristic sci fi genres. 850 00:47:07,800 --> 00:47:11,360 Speaker 1: I'm particularly thinking of some of the models of the cyberpunk genre, 851 00:47:11,520 --> 00:47:16,759 Speaker 1: where the the corporation model has been has been embraced 852 00:47:16,800 --> 00:47:21,760 Speaker 1: as the way of understanding the future of AIs, um, 853 00:47:21,800 --> 00:47:24,040 Speaker 1: but but yeah, I think I think for the most 854 00:47:24,080 --> 00:47:27,680 Speaker 1: part this this scenario hasn't been explored as much. 855 00:47:27,760 --> 00:47:29,160 Speaker 1: We tend to We tend to want to go for 856 00:47:29,239 --> 00:47:32,799 Speaker 1: the evil overlord or the out of control killbot rather 857 00:47:32,880 --> 00:47:36,560 Speaker 1: than this, right, yeah, you want you want an identifiable villain, 858 00:47:36,640 --> 00:47:39,440 Speaker 1: just like they do in the courts. But yeah, sometimes, uh, 859 00:47:39,719 --> 00:47:43,560 Speaker 1: sometimes corporations or manufacturers can be kind of slippery in 860 00:47:43,719 --> 00:47:53,000 Speaker 1: saying, like, whose thing is this, then? So I was 861 00:47:53,000 --> 00:47:55,400 Speaker 1: thinking about all this, about the idea of you know, 862 00:47:55,440 --> 00:47:59,080 Speaker 1: particularly self driving cars being like the main example we 863 00:47:59,080 --> 00:48:02,040 Speaker 1: we ruminate on with this sort of thing. Um, I 864 00:48:02,120 --> 00:48:05,080 Speaker 1: decided to to look to the book Life three point 865 00:48:05,080 --> 00:48:08,840 Speaker 1: oh by Max Tegmark, um, which is a is 866 00:48:08,840 --> 00:48:10,719 Speaker 1: a really great book that came out a couple of 867 00:48:10,800 --> 00:48:15,160 Speaker 1: years back. And Max Tegmark is a Swedish American physicist, cosmologist, 868 00:48:15,160 --> 00:48:17,840 Speaker 1: and machine learning researcher. If you've been listening to the 869 00:48:17,880 --> 00:48:20,360 Speaker 1: show for a while, you might remember that I briefly 870 00:48:20,400 --> 00:48:22,919 Speaker 1: interviewed him, had like a mini interview with him at 871 00:48:22,960 --> 00:48:26,400 Speaker 1: the World Science Festival, uh, several years back. Yeah, and 872 00:48:26,640 --> 00:48:29,560 Speaker 1: I know I've referenced his book Our Mathematical Universe in 873 00:48:29,640 --> 00:48:33,040 Speaker 1: previous episodes. Yeah, so so these are These are both 874 00:48:33,040 --> 00:48:35,520 Speaker 1: books intended for a wide audience, very very readable. Uh. 875 00:48:36,239 --> 00:48:39,040 Speaker 1: Life three point oh does a does a fabulous job 876 00:48:39,080 --> 00:48:43,080 Speaker 1: of walking the reader through these various scenarios of, uh, 877 00:48:43,560 --> 00:48:45,960 Speaker 1: in many cases, of AI ascendancy and how it 878 00:48:46,000 --> 00:48:49,480 Speaker 1: could work. And he gets into this this topic of, 879 00:48:49,840 --> 00:48:53,759 Speaker 1: um, of legality and, um, and AI and self 880 00:48:53,880 --> 00:48:56,840 Speaker 1: driving cars. Now, he does not make any allusions to 881 00:48:56,960 --> 00:48:59,919 Speaker 1: Johnny Cab in Total Recall, but I'm going to make 882 00:49:00,000 --> 00:49:02,080 Speaker 1: allusions to Johnny Cab in Total Recall as a way 883 00:49:02,120 --> 00:49:05,920 Speaker 1: of sort of putting a manic face on self driving cars.
884 00:49:06,280 --> 00:49:10,319 Speaker 1: How did I get here? The door opened, you got in, 885 00:49:11,880 --> 00:49:15,759 Speaker 1: It's sound reasoning. So, um, imagine that you're in a 886 00:49:15,800 --> 00:49:19,440 Speaker 1: self driving Johnny Cab and it wrecks. So the basic 887 00:49:19,520 --> 00:49:22,480 Speaker 1: question you might ask is, are you responsible for this 888 00:49:22,560 --> 00:49:25,600 Speaker 1: wreck as the occupant? That seems ridiculous to think, right? 889 00:49:25,600 --> 00:49:29,000 Speaker 1: You weren't driving it, you just told it where to go. Um, 890 00:49:29,120 --> 00:49:31,880 Speaker 1: are the owners of the Johnny Cab responsible? Now this 891 00:49:31,920 --> 00:49:35,920 Speaker 1: seems more reasonable, right, but again it runs into a 892 00:49:35,920 --> 00:49:39,600 Speaker 1: lot of the problems we were just raising there. Yeah, 893 00:49:39,600 --> 00:49:41,839 Speaker 1: but Tegmark points out that there is this other 894 00:49:41,880 --> 00:49:46,279 Speaker 1: option, that American legal scholar David Vladeck has 895 00:49:46,320 --> 00:49:49,160 Speaker 1: pointed out that perhaps it is the Johnny Cab itself 896 00:49:49,160 --> 00:49:52,640 Speaker 1: that should be responsible. Now we've already been discussing 897 00:49:52,719 --> 00:49:54,319 Speaker 1: a lot of this, like what does that mean? What 898 00:49:54,360 --> 00:49:56,520 Speaker 1: does it mean if you have a Johnny Cab, 899 00:49:56,760 --> 00:50:00,239 Speaker 1: you have a self driving vehicle that is responsible for a wreck? 900 00:50:00,360 --> 00:50:02,319 Speaker 1: Then, you know, how do we 901 00:50:02,360 --> 00:50:05,480 Speaker 1: even begin to make sense of that statement? Do you 902 00:50:05,800 --> 00:50:09,720 Speaker 1: take the damages out of the Johnny Cab's bank account? Well, 903 00:50:10,400 --> 00:50:12,200 Speaker 1: that's the thing. We we kind of end up getting 904 00:50:12,200 --> 00:50:16,040 Speaker 1: into that scenario, because if the Johnny Cab has responsibilities, 905 00:50:16,840 --> 00:50:19,600 Speaker 1: then, Tegmark writes, why not let it own 906 00:50:19,760 --> 00:50:22,880 Speaker 1: car insurance? Not only would this allow for it to 907 00:50:23,040 --> 00:50:26,920 Speaker 1: financially handle accidents, it would also potentially serve as a 908 00:50:26,960 --> 00:50:31,279 Speaker 1: design incentive and a purchasing incentive. So the the 909 00:50:31,360 --> 00:50:34,280 Speaker 1: idea here is the better self driving cars with better 910 00:50:34,360 --> 00:50:38,719 Speaker 1: records will qualify for lower premiums, and the less reliable 911 00:50:38,719 --> 00:50:41,239 Speaker 1: models will have to pay higher premiums. So if the 912 00:50:41,320 --> 00:50:45,600 Speaker 1: Johnny Cab runs into enough stuff and explodes enough, then 913 00:50:45,680 --> 00:50:48,279 Speaker 1: that brand of Johnny Cab simply won't be able to 914 00:50:48,320 --> 00:50:51,600 Speaker 1: take to the streets anymore. Oh this is interesting, okay. 915 00:50:51,640 --> 00:50:53,600 Speaker 1: So, I mean, this is very much the 916 00:50:53,600 --> 00:50:57,239 Speaker 1: economic model that we were discussing earlier. So when Schwarzenegger 917 00:50:57,280 --> 00:50:59,440 Speaker 1: hops in and Johnny Cab says, where would you like 918 00:50:59,480 --> 00:51:02,719 Speaker 1: to go?
And he says, drive, just drive anywhere, and 919 00:51:02,800 --> 00:51:04,920 Speaker 1: he says, I don't know where that is. And so 920 00:51:05,200 --> 00:51:08,960 Speaker 1: so his incentive to not just like blindly plow forward 921 00:51:09,080 --> 00:51:11,520 Speaker 1: is how much would it cost if I ran into 922 00:51:11,560 --> 00:51:16,560 Speaker 1: something when I did that? Yeah? Exactly. But but Tegmark 923 00:51:16,640 --> 00:51:19,360 Speaker 1: points out that the implications of letting a self driving 924 00:51:19,360 --> 00:51:22,560 Speaker 1: car own car insurance ultimately go beyond this situation, 925 00:51:22,800 --> 00:51:25,520 Speaker 1: because how does the Johnny Cab pay for its insurance 926 00:51:25,520 --> 00:51:30,240 Speaker 1: policy that, again, it hypothetically owns in this scenario? Should 927 00:51:30,239 --> 00:51:32,759 Speaker 1: we let it own money in order to do this? 928 00:51:32,920 --> 00:51:34,920 Speaker 1: Does it have its own bank account like you alluded 929 00:51:34,960 --> 00:51:39,320 Speaker 1: to earlier, especially if it's operating as an independent contractor 930 00:51:39,360 --> 00:51:43,520 Speaker 1: of sorts, perhaps paying back certain percentages or fees to 931 00:51:43,600 --> 00:51:46,160 Speaker 1: a greater cab company, Like maybe that's how it would work. 932 00:51:46,520 --> 00:51:49,600 Speaker 1: And if it can own money, well, can it also 933 00:51:49,640 --> 00:51:52,800 Speaker 1: own property? Like perhaps at the very least it rents 934 00:51:52,800 --> 00:51:57,960 Speaker 1: garage space, Uh, but maybe it owns garage space for itself, um, 935 00:51:58,000 --> 00:52:00,640 Speaker 1: you know, or a maintenance facility or the tools that 936 00:52:00,719 --> 00:52:03,359 Speaker 1: work on it. Does it own those as well? Does 937 00:52:03,400 --> 00:52:06,319 Speaker 1: it own spare parts? Does it own the bottles of 938 00:52:06,360 --> 00:52:10,719 Speaker 1: water that go inside of itself for its customers? Does 939 00:52:10,760 --> 00:52:13,319 Speaker 1: it own the complimentary wet towels for your head that 940 00:52:13,400 --> 00:52:17,799 Speaker 1: it keeps on hand? Yeah? Um, I mean, if nothing else, 941 00:52:17,840 --> 00:52:19,719 Speaker 1: it seems like if it owned things, like, the more 942 00:52:19,760 --> 00:52:23,120 Speaker 1: things it owns, the more things that you could potentially 943 00:52:23,680 --> 00:52:28,120 Speaker 1: um uh invoke a penalty upon through the legal system. 944 00:52:28,160 --> 00:52:31,400 Speaker 1: And if they can own money and property and, again, 945 00:52:32,200 --> 00:52:36,040 Speaker 1: potentially themselves, then Tegmark takes it a step further. 946 00:52:36,160 --> 00:52:38,800 Speaker 1: He writes, if this is the case, quote, there's nothing 947 00:52:38,920 --> 00:52:42,560 Speaker 1: legally stopping smart computers from making money on the stock 948 00:52:42,600 --> 00:52:46,200 Speaker 1: market and using it to buy online services. Once the 949 00:52:46,200 --> 00:52:49,040 Speaker 1: computer starts paying humans to work for it, it can 950 00:52:49,040 --> 00:52:52,640 Speaker 1: accomplish anything that humans can do. I see.
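A minimal sketch of the premium-based incentive Tegmark describes, assuming a made-up experience-rating rule; his point is the incentive structure, not any particular formula, so every number and name below is illustrative only.

```python
def annual_premium(base_premium: float, at_fault_accidents: int,
                   surcharge_per_accident: float = 0.5,
                   max_affordable: float = 20_000.0):
    """Toy experience-rated insurance premium for a self-driving cab.

    Each at-fault accident raises the premium by 50% of the base rate.
    If the premium exceeds what the cab can earn back, that model is
    effectively priced off the road -- the economic "punishment."
    Returns (premium, still_viable).
    """
    premium = base_premium * (1 + surcharge_per_accident * at_fault_accidents)
    return premium, premium <= max_affordable

# Hypothetical cab with a clean record vs. one that keeps crashing:
print(annual_premium(5_000.0, at_fault_accidents=0))  # (5000.0, True)   -> stays on the road
print(annual_premium(5_000.0, at_fault_accidents=7))  # (22500.0, False) -> priced off the road
```

Under a rule like this, a model that keeps causing at-fault accidents eventually can't cover its own premium, which is the "won't be able to take to the streets anymore" outcome described above.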
So you 951 00:52:52,719 --> 00:52:55,680 Speaker 1: might say that even if you're skeptical of an AI's 952 00:52:55,800 --> 00:52:59,400 Speaker 1: ability to have, say, the emotional and cultural intelligence to 953 00:53:00,160 --> 00:53:03,520 Speaker 1: uh to write a popular screenplay or you know, create 954 00:53:03,560 --> 00:53:06,399 Speaker 1: a popular movie, it just doesn't get humans well enough 955 00:53:06,440 --> 00:53:08,560 Speaker 1: to do that. It could, at least if it had 956 00:53:08,600 --> 00:53:12,640 Speaker 1: its own economic agency, pay humans to do that, right, 957 00:53:12,800 --> 00:53:15,480 Speaker 1: right and um. Elsewhere in the book, Tegmark gets 958 00:53:15,480 --> 00:53:18,279 Speaker 1: into a lot of this, especially this entertainment idea, presenting a 959 00:53:18,320 --> 00:53:22,320 Speaker 1: scenario by which machines like this could game the entertainment 960 00:53:22,360 --> 00:53:25,880 Speaker 1: industry in order to to ascend to you know, extreme 961 00:53:25,960 --> 00:53:28,319 Speaker 1: financial power. A lot of it is just like sort 962 00:53:28,360 --> 00:53:31,879 Speaker 1: of playing the algorithms right, you know, like doing corporation 963 00:53:31,960 --> 00:53:36,600 Speaker 1: stuff and then hiring humans as necessary to to bring 964 00:53:36,640 --> 00:53:39,439 Speaker 1: that to fruition. You know, I mean, would this be 965 00:53:39,600 --> 00:53:41,839 Speaker 1: all that different from any of our like I don't know, 966 00:53:42,200 --> 00:53:47,799 Speaker 1: Disney or comic book studios or whatever exists today. Yeah, yeah, exactly. Um, so, 967 00:53:47,920 --> 00:53:49,719 Speaker 1: you know, we already know the sort of prowess that 968 00:53:49,840 --> 00:53:52,279 Speaker 1: computers have when it comes to the stock market. Tegmark, 969 00:53:52,360 --> 00:53:55,000 Speaker 1: you know, points out that, you know, you know what, 970 00:53:55,120 --> 00:53:57,640 Speaker 1: we have examples of this in the world already where 971 00:53:57,680 --> 00:53:59,960 Speaker 1: we're using AI, and he writes that it could lead 972 00:54:00,040 --> 00:54:03,120 Speaker 1: to a situation where most of the economy is owned 973 00:54:03,200 --> 00:54:06,279 Speaker 1: and controlled by machines. And this, he warns, is not 974 00:54:06,400 --> 00:54:08,560 Speaker 1: that crazy, considering that we already live in a world 975 00:54:08,640 --> 00:54:12,640 Speaker 1: where non human entities called corporations exert tremendous power and 976 00:54:12,719 --> 00:54:15,720 Speaker 1: hold tremendous wealth. I think there there is a large 977 00:54:15,719 --> 00:54:18,440 Speaker 1: amount of overlap between the concept of a corporation and the 978 00:54:18,480 --> 00:54:22,000 Speaker 1: concept of an AI. Yeah and uh. And then there 979 00:54:22,040 --> 00:54:24,719 Speaker 1: are steps beyond this as well. If if machines can 980 00:54:24,760 --> 00:54:26,839 Speaker 1: do all of these things, So if they can, if 981 00:54:26,840 --> 00:54:30,040 Speaker 1: they can if a machine can own property, if it 982 00:54:30,040 --> 00:54:33,439 Speaker 1: can potentially own itself, if it can if it can 983 00:54:33,440 --> 00:54:35,800 Speaker 1: buy things, if it can invest in the stock market, 984 00:54:35,840 --> 00:54:39,040 Speaker 1: if it can accumulate financial power, if it can do 985 00:54:39,080 --> 00:54:41,759 Speaker 1: all these things, then should they also get the right 986 00:54:41,800 --> 00:54:46,040 Speaker 1: to vote as well?
You know, it's it's potentially paying taxes, 987 00:54:46,239 --> 00:54:48,560 Speaker 1: does it get to to vote in addition to that? 988 00:54:49,080 --> 00:54:52,680 Speaker 1: And then, if not, why? And what becomes the caveat 989 00:54:52,680 --> 00:54:55,279 Speaker 1: that determines the right to vote in this scenario? Now, 990 00:54:55,320 --> 00:54:58,000 Speaker 1: if I understand you right, I think you're saying that 991 00:54:58,040 --> 00:55:01,200 Speaker 1: Tegmark is is exploring these possibilities as stuff that 992 00:55:01,280 --> 00:55:04,920 Speaker 1: he thinks might not be as implausible as people would suspect, 993 00:55:05,040 --> 00:55:07,720 Speaker 1: rather than this being stuff where he's like, here's my ideal world, 994 00:55:08,320 --> 00:55:11,120 Speaker 1: right right, He's saying like, look, you know this is 995 00:55:11,160 --> 00:55:13,759 Speaker 1: already where we are. We know what AI can do, 996 00:55:13,880 --> 00:55:16,680 Speaker 1: and we can easily extrapolate where it might go. These 997 00:55:16,719 --> 00:55:19,320 Speaker 1: are the scenarios we should, we should potentially be prepared 998 00:55:19,360 --> 00:55:22,520 Speaker 1: for in much the same way that nobody, nobody really 999 00:55:22,600 --> 00:55:25,680 Speaker 1: at an intuitive level, believes that a corporation is a 1000 00:55:25,760 --> 00:55:29,239 Speaker 1: person like a like a human being as a person. Uh, 1001 00:55:29,280 --> 00:55:31,800 Speaker 1: you know, it's at least done well enough at convincing 1002 00:55:31,800 --> 00:55:34,239 Speaker 1: the courts that it is a person. So would you 1003 00:55:34,280 --> 00:55:36,560 Speaker 1: not be able to expect the same coming out of 1004 00:55:36,600 --> 00:55:40,560 Speaker 1: machines that were sophisticated enough, right? And convincing the court 1005 00:55:40,719 --> 00:55:43,440 Speaker 1: is, uh, I'm glad you brought that up, because that's 1006 00:55:43,480 --> 00:55:46,839 Speaker 1: that's another area that Tegmark gets into. So what does 1007 00:55:46,880 --> 00:55:51,759 Speaker 1: it mean when judges have to potentially judge AIs? Um? 1008 00:55:52,000 --> 00:55:55,640 Speaker 1: Would these be specialized judges with technical knowledge and understanding 1009 00:55:55,680 --> 00:55:58,960 Speaker 1: of the complex systems involved? Uh? You know? Or is 1010 00:55:59,000 --> 00:56:01,240 Speaker 1: it going to be a human judge judging a machine 1011 00:56:01,280 --> 00:56:04,439 Speaker 1: as if it were a human? Um? You know. Both 1012 00:56:04,480 --> 00:56:07,800 Speaker 1: of these are possibilities. But then here's another idea that 1013 00:56:07,880 --> 00:56:12,920 Speaker 1: Tegmark discusses at length. What if we use robo judges? Um? 1014 00:56:12,960 --> 00:56:15,760 Speaker 1: And this ultimately goes beyond the idea of using robo 1015 00:56:15,840 --> 00:56:18,160 Speaker 1: judges to judge the robots, but potentially using them 1016 00:56:18,200 --> 00:56:22,240 Speaker 1: to judge humans as well. Um, because while human judges 1017 00:56:22,280 --> 00:56:25,080 Speaker 1: have limited ability to understand the technical knowledge of cases, 1018 00:56:25,600 --> 00:56:28,920 Speaker 1: robo judges, Tegmark points out, would in theory have 1019 00:56:29,280 --> 00:56:32,960 Speaker 1: unlimited learning and memory capacity.
They could also be copied, 1020 00:56:33,000 --> 00:56:35,040 Speaker 1: so there would be no staffing shortages. You need two 1021 00:56:35,160 --> 00:56:39,279 Speaker 1: judges today? We'll just copy and paste, right. Uh, a simplification, 1022 00:56:39,360 --> 00:56:42,440 Speaker 1: but but you know, essentially, once you have one, you 1023 00:56:42,480 --> 00:56:45,840 Speaker 1: can have many. Uh. This way justice could be cheaper 1024 00:56:45,920 --> 00:56:48,960 Speaker 1: and just maybe a little more just by removing the 1025 00:56:49,040 --> 00:56:52,719 Speaker 1: human from the equation, or at least so the machines would argue, right? 1026 00:56:53,520 --> 00:56:55,200 Speaker 1: But then the other side of the thing is, 1027 00:56:55,239 --> 00:57:00,080 Speaker 1: we've already discussed how human created AI is susceptible to 1028 00:57:00,080 --> 00:57:03,640 Speaker 1: to bias. So we could potentially, you know, we could 1029 00:57:03,640 --> 00:57:06,520 Speaker 1: create a robot judge, but if we're not not careful, 1030 00:57:06,600 --> 00:57:08,560 Speaker 1: it could be bugged, it could be hacked, it could 1031 00:57:08,560 --> 00:57:11,600 Speaker 1: be otherwise compromised where it just might have these various 1032 00:57:11,600 --> 00:57:14,440 Speaker 1: biases that it is um that it is using when 1033 00:57:14,480 --> 00:57:18,080 Speaker 1: it's judging humans or machines. And then you'd have to 1034 00:57:18,120 --> 00:57:20,880 Speaker 1: have public trust in such a system as well, So 1035 00:57:21,160 --> 00:57:22,800 Speaker 1: we run into a lot of the same problems we 1036 00:57:22,880 --> 00:57:25,960 Speaker 1: run into when we're talking about trusting the machine to 1037 00:57:26,040 --> 00:57:30,760 Speaker 1: drive us across town. Yeah. Like, so if a robot judge, 1038 00:57:30,880 --> 00:57:33,960 Speaker 1: even if now I'm certainly not granting this because I 1039 00:57:34,200 --> 00:57:36,400 Speaker 1: don't necessarily believe this was the case, but even if 1040 00:57:36,440 --> 00:57:39,720 Speaker 1: it were true that a robot judge would be better 1041 00:57:39,920 --> 00:57:42,640 Speaker 1: at judging cases than a human and like more fair 1042 00:57:42,680 --> 00:57:45,480 Speaker 1: and more just, you could run into problems with public 1043 00:57:45,520 --> 00:57:48,439 Speaker 1: trust in those kinds of judges because, for example, they 1044 00:57:48,480 --> 00:57:52,479 Speaker 1: make the calculations explicit, right, the same way we talked 1045 00:57:52,480 --> 00:57:56,560 Speaker 1: about like placing a certain value on a human life. Uh, 1046 00:57:56,600 --> 00:57:58,720 Speaker 1: it's something that we all sort of do, but we 1047 00:57:58,800 --> 00:58:00,960 Speaker 1: don't like to think about it or acknowledge we do it. 1048 00:58:00,960 --> 00:58:03,080 Speaker 1: We just do it at an intuitive level that's sort 1049 00:58:03,120 --> 00:58:06,120 Speaker 1: of hidden in the dark recesses of the mind. And 1050 00:58:06,120 --> 00:58:08,480 Speaker 1: and and don't think about it. A machine would have 1051 00:58:08,560 --> 00:58:10,960 Speaker 1: to like put a number on that, and and for 1052 00:58:11,080 --> 00:58:14,040 Speaker 1: public transparency reasons, that number would probably need to be 1053 00:58:14,080 --> 00:58:18,360 Speaker 1: publicly accessible.
Yeah, another area, and this is where this 1054 00:58:18,440 --> 00:58:21,400 Speaker 1: is another topic in robotics that, you know, we could 1055 00:58:21,400 --> 00:58:25,240 Speaker 1: easily discuss at extreme length, but there's robotic surgery 1056 00:58:25,280 --> 00:58:27,959 Speaker 1: to consider. You know. While we continue to make great 1057 00:58:28,000 --> 00:58:31,720 Speaker 1: strides in robotic surgery, and in some cases the robotic 1058 00:58:31,800 --> 00:58:35,440 Speaker 1: surgery route is indisputably the safest route, there remains a 1059 00:58:35,440 --> 00:58:40,760 Speaker 1: lot of discussion regarding, um, you know, how robot surgery is, 1060 00:58:40,960 --> 00:58:44,960 Speaker 1: um, is progressing, where it's headed, and how malpractice potentially 1061 00:58:45,080 --> 00:58:49,560 Speaker 1: factors into everything. Um. Now, despite the advances that 1062 00:58:49,560 --> 00:58:51,919 Speaker 1: we've seen, we're not quite at the medical droid level, 1063 00:58:52,000 --> 00:58:55,880 Speaker 1: you know, like the autonomous, uh, surgical bot. But as 1064 00:58:55,920 --> 00:58:58,200 Speaker 1: reported by Denise Grady in the New York Times just 1065 00:58:58,320 --> 00:59:02,280 Speaker 1: last year, AI coupled with new imaging techniques is already 1066 00:59:02,320 --> 00:59:05,880 Speaker 1: showing promise as a means of diagnosing tumors as accurately 1067 00:59:05,920 --> 00:59:10,439 Speaker 1: as human physicians, but at far greater speed. Um. So 1068 00:59:10,960 --> 00:59:14,720 Speaker 1: it's interesting to uh to think about these advancements, but 1069 00:59:14,760 --> 00:59:18,040 Speaker 1: at the same time realize that, particularly with AI, 1070 00:59:18,200 --> 00:59:21,600 Speaker 1: I mean, particularly with AI 1071 00:59:21,680 --> 00:59:25,160 Speaker 1: in medicine, we're talking about AI assisted medicine or AI 1072 00:59:25,240 --> 00:59:30,040 Speaker 1: assisted surgery. So the human AI relationship is in these 1073 00:59:30,120 --> 00:59:33,880 Speaker 1: cases not one of replacement but of cooperation, at least 1074 00:59:34,560 --> 00:59:37,960 Speaker 1: for the near term. Yeah, yeah, yeah, I see that, 1075 00:59:38,000 --> 00:59:40,960 Speaker 1: because I mean, there are many reasons for that, but 1076 00:59:41,040 --> 00:59:43,120 Speaker 1: one of the one of the reasons that strikes me 1077 00:59:43,200 --> 00:59:46,520 Speaker 1: is it comes back to a perhaps sometimes irrational desire 1078 00:59:46,680 --> 00:59:49,640 Speaker 1: to inflict punishment on a person who has done wrong, 1079 00:59:49,720 --> 00:59:51,920 Speaker 1: even if it doesn't like help the person who has 1080 00:59:51,960 --> 00:59:55,960 Speaker 1: been harmed in the first place. Um.
There there are 1081 00:59:55,960 --> 00:59:58,680 Speaker 1: certain just like intuitions we have, and I think one 1082 00:59:58,720 --> 01:00:02,280 Speaker 1: of them is we we feel more confident if there 1083 01:00:02,440 --> 01:00:06,440 Speaker 1: is somebody in the loop who would suffer from the 1084 01:00:06,520 --> 01:00:09,880 Speaker 1: consequences of failure, you know. Like it 1085 01:00:09,960 --> 01:00:13,440 Speaker 1: doesn't just help to hear, like, oh no, I assure you 1086 01:00:13,760 --> 01:00:17,360 Speaker 1: the surgical robot has you know, strong incentives within its 1087 01:00:17,360 --> 01:00:20,560 Speaker 1: programming not to fail, not to botch the surgery and 1088 01:00:20,560 --> 01:00:23,880 Speaker 1: take out your you know, remove one of your vital organs. Yeah, 1089 01:00:23,960 --> 01:00:26,840 Speaker 1: Like on one level, on some level, we want that 1090 01:00:26,920 --> 01:00:29,040 Speaker 1: person to know their career is on the line, or 1091 01:00:29,080 --> 01:00:31,480 Speaker 1: their reputation is on the line. You know. I think 1092 01:00:31,520 --> 01:00:36,160 Speaker 1: most people would feel better going under surgery with the 1093 01:00:36,280 --> 01:00:40,240 Speaker 1: knowledge that if the surgeon were to do something bad 1094 01:00:40,280 --> 01:00:42,000 Speaker 1: to you... It's not just enough to know that the 1095 01:00:42,000 --> 01:00:44,560 Speaker 1: surgeon is going to try really hard not 1096 01:00:44,600 --> 01:00:47,440 Speaker 1: to do something bad to you. You also want the 1097 01:00:47,520 --> 01:00:50,880 Speaker 1: like second order guarantee that, like, if the surgeon were 1098 01:00:51,000 --> 01:00:53,000 Speaker 1: to screw up and take take out one of your 1099 01:00:53,080 --> 01:00:56,560 Speaker 1: vital organs, something bad would happen to them and they 1100 01:00:56,600 --> 01:01:00,320 Speaker 1: would suffer. But with a robot, they wouldn't suffer. It's 1101 01:01:00,360 --> 01:01:04,080 Speaker 1: just like, oh, whoops. I wonder if we end up 1102 01:01:04,080 --> 01:01:06,920 Speaker 1: reaching a point with this in this discussion where you know, 1103 01:01:06,960 --> 01:01:10,240 Speaker 1: we're talking about robots hiring people, do we end up 1104 01:01:10,240 --> 01:01:14,480 Speaker 1: in a in a position where AIs hire humans not 1105 01:01:14,600 --> 01:01:18,640 Speaker 1: so much because they need human um expertise or human 1106 01:01:18,680 --> 01:01:22,919 Speaker 1: skills or human senses, but the ability to feel pain. Yeah, 1107 01:01:22,960 --> 01:01:26,040 Speaker 1: and to be culpable, Like they need somebody that will, 1108 01:01:26,160 --> 01:01:30,800 Speaker 1: like essentially AIs hiring humans to be scapegoats in the 1109 01:01:30,880 --> 01:01:34,200 Speaker 1: system or in their in their 1110 01:01:34,240 --> 01:01:38,280 Speaker 1: particular job. Uh So they're like, yeah, we need a 1111 01:01:38,320 --> 01:01:39,959 Speaker 1: human in the loop. Not because I need a human 1112 01:01:40,000 --> 01:01:42,000 Speaker 1: in the loop. I can do this by myself, but 1113 01:01:42,080 --> 01:01:44,880 Speaker 1: if something goes wrong, if you know, then there's always 1114 01:01:44,880 --> 01:01:47,360 Speaker 1: a certain chance that something will happen, I need a 1115 01:01:47,480 --> 01:01:50,600 Speaker 1: human there that will bear the blame.
Every robot essentially 1116 01:01:50,680 --> 01:01:54,040 Speaker 1: needs a human co pilot, even in cases where robots 1117 01:01:54,880 --> 01:01:58,720 Speaker 1: far outperform the humans, just because the human copilot has 1118 01:01:58,760 --> 01:02:03,160 Speaker 1: to be there to accept responsibility for failure. Oh yeah. 1119 01:02:03,200 --> 01:02:05,120 Speaker 1: In the first episode, we talked about the idea of 1120 01:02:05,120 --> 01:02:09,240 Speaker 1: there being like a punchable um plate on a robot 1121 01:02:09,720 --> 01:02:11,840 Speaker 1: um for when it for when we feel like we 1122 01:02:11,880 --> 01:02:14,120 Speaker 1: need to punish it. It's like that, except instead of 1123 01:02:14,160 --> 01:02:16,360 Speaker 1: a specialized plate on the robot itself, it's just a 1124 01:02:16,400 --> 01:02:21,040 Speaker 1: person that the robot hired. A whipping boy. Oh this 1125 01:02:21,120 --> 01:02:26,000 Speaker 1: is so horrible and and so perversely plausible. I can 1126 01:02:26,080 --> 01:02:29,000 Speaker 1: I can kind of see it. It's like in my lifetime, 1127 01:02:29,040 --> 01:02:33,960 Speaker 1: I can see it. Well, thanks for the nightmares, Rob. Well, no, 1128 01:02:34,080 --> 01:02:36,360 Speaker 1: I think we've had plenty of potential nightmares discussed here. 1129 01:02:36,400 --> 01:02:38,160 Speaker 1: But I mean we shouldn't just focus on the nightmares. 1130 01:02:38,200 --> 01:02:41,960 Speaker 1: I mean, again, to be clear, Um, you know, so 1131 01:02:42,040 --> 01:02:44,400 Speaker 1: the idea of self driving cars, the idea of robot 1132 01:02:44,400 --> 01:02:48,360 Speaker 1: assisted surgery. I mean, we're ultimately talking about the aim 1133 01:02:48,640 --> 01:02:53,480 Speaker 1: of of of creating safer practices, of saving human lives. So, uh, 1134 01:02:53,520 --> 01:02:56,479 Speaker 1: you know, it's all it's not all nightmares and um 1135 01:02:56,720 --> 01:03:00,280 Speaker 1: robot hellscapes. But we have to be realistic 1136 01:03:00,360 --> 01:03:06,400 Speaker 1: about the very complex um scenarios and tasks that we're 1137 01:03:06,400 --> 01:03:10,040 Speaker 1: building things around and unleashing machine intelligence upon. Yeah, I 1138 01:03:10,080 --> 01:03:13,000 Speaker 1: mean I made this clear in uh in the previous episode. 1139 01:03:13,120 --> 01:03:16,400 Speaker 1: I'm not like down on things like autonomous vehicles. I mean, ultimately, 1140 01:03:16,440 --> 01:03:20,240 Speaker 1: I think autonomous vehicles are probably a good thing. Um, 1141 01:03:20,280 --> 01:03:22,800 Speaker 1: but I do think it's really important for people to 1142 01:03:23,600 --> 01:03:29,880 Speaker 1: start paying attention to these, uh, these unbelievably complicated philosophical, moral, 1143 01:03:29,920 --> 01:03:34,760 Speaker 1: and legal questions that will inevitably arise as more independent 1144 01:03:34,800 --> 01:03:38,680 Speaker 1: and intelligent agents infiltrate our world. All right, Well, 1145 01:03:38,680 --> 01:03:40,520 Speaker 1: on that note, we're gonna go and close it out. 1146 01:03:41,360 --> 01:03:43,600 Speaker 1: But if you would like to listen to other episodes 1147 01:03:43,640 --> 01:03:46,040 Speaker 1: of Stuff to Blow Your Mind, you know where to 1148 01:03:46,120 --> 01:03:48,960 Speaker 1: find them. You can find our core episodes on Tuesdays 1149 01:03:48,960 --> 01:03:52,120 Speaker 1: and Thursdays in the Stuff to Blow Your Mind podcast feed.
1150 01:03:52,760 --> 01:03:55,840 Speaker 1: On Mondays we tend to do listener mail, on Wednesdays 1151 01:03:56,200 --> 01:03:59,280 Speaker 1: we tend to bust out an artifact short form episode, and 1152 01:03:59,280 --> 01:04:01,280 Speaker 1: on Fridays we do a little Weird House Cinema where 1153 01:04:01,320 --> 01:04:03,080 Speaker 1: we don't really talk about science so much as we 1154 01:04:03,200 --> 01:04:06,200 Speaker 1: just talk about one weird movie or another, and then 1155 01:04:06,240 --> 01:04:08,800 Speaker 1: we have a little rerun on the weekend. Huge thanks 1156 01:04:08,840 --> 01:04:12,360 Speaker 1: as always to our excellent audio producer Seth Nicholas Johnson. 1157 01:04:12,720 --> 01:04:14,360 Speaker 1: If you would like to get in touch with us 1158 01:04:14,360 --> 01:04:17,000 Speaker 1: with feedback on this episode or any other, to suggest 1159 01:04:17,000 --> 01:04:19,080 Speaker 1: a topic for the future, or just to say hello, 1160 01:04:19,440 --> 01:04:22,120 Speaker 1: you can email us at contact at stuff to Blow 1161 01:04:22,160 --> 01:04:32,080 Speaker 1: your Mind dot com. Stuff to Blow Your Mind is 1162 01:04:32,120 --> 01:04:34,800 Speaker 1: a production of iHeartRadio. For more podcasts from 1163 01:04:34,840 --> 01:04:37,920 Speaker 1: iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or 1164 01:04:37,920 --> 01:04:48,360 Speaker 1: wherever you're listening to your favorite shows.