Speaker 1: Get in touch with technology with TechStuff from HowStuffWorks.com.

Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer and I love all things tech. And today we're gonna talk about the concept of granting personhood or personality to robots. Not a personality in the sense of "this robot is chipper and this one's depressed," but rather the concept of granting robots some of the aspects that we say humans have from a legal standpoint. And there are numerous arguments both for and against this concept.

I've talked about artificial intelligence extensively, not just on this show but on other shows as well. And one of the elements about artificial intelligence that tends to pop up, especially in things like science fiction, is: what happens when artificial intelligence is able to take actions that could negatively impact people? What do we do? What sort of framework do we make, if that were the case, so we can determine who's responsible? And the interesting thing, for me anyway, is that a group out of the European Parliament, kind of a working group, put together a proposal a couple of years ago about this possibility and the idea of granting personhood to robots as a way to create an established framework of law and accountability, because at the moment there's not really anything formalized there. So they put together a proposal. The proposal is still online. It was originally published on May 31, 2016, and submitted in draft form. The title of the draft was "Draft Report with Recommendations to the Commission on Civil Law Rules on Robotics," and it's a fascinating report. It's actually a lot of fun to read. I highly recommend checking it out if you have some time.
The general purpose of the proposal was to start official discussions in the European Union about developing policies and guidelines in the field of robotics, particularly since there were individual member states of the EU, several nations, that were developing their own policies over time. And the European Parliament working group was saying this could be a problem, because if we have one set of policies in play in, let's say, France, and a different set in play in Germany, that starts to create conflict. And in the European Union, you're supposed to be able to move throughout the union and take jobs wherever and live wherever within the union. So it would work a lot better if everyone had the same set of rules and regulations on something like this, especially something that's going to factor so heavily into industry, the economy, and the workplace. So really this was mostly a kind of warning saying we need to start talking about this.

The introduction of that proposal is phenomenal in its own right. It cites lots of literary works, including Mary Shelley's Frankenstein and the Pygmalion myth. It references Karel Čapek's R.U.R., the play that originated the word "robot." So it's pretty interesting just from the fact that it's referencing science fiction and horror literature way more than you would expect in your typical government proposal. It's much more entertaining than reading a draft of, say, your typical agricultural report, which is still incredibly important. Don't get me wrong, those are very important reports. They just don't tend to be page-turners. Now, those citations are all there to establish that we humans have been really fascinated, maybe even fixated, with this idea of creating intelligent machines, or even intelligent life in the case of Frankenstein's monster. This has been something we have really aspired to on some level, the idea that we create something that itself could be said to be intelligent.
Section B of the introduction (yes, the introduction is long enough to have different sections) would say that we're on the threshold of a new industrial revolution. The first industrial revolution, which took place in the nineteenth century, was all about creating factories, harnessing the power of coal, building railroads, steam power, assembly lines. All of that stuff transformed what had been a really agrarian society into more of an urban one in the industrialized nations. And the section was saying that artificial intelligence was going to fuel a new revolution that would be just as impactful as the previous industrial revolution, that no part of society would go unaffected by it. So, since it was going to be such a big part of our lives moving forward, it would be really smart for us to think about the implications before it happens, to prepare for it, and to put protections for people into place before things become reactionary. You know, before there's a problem and then we have to figure out, oh, how can we fix this? The argument of the proposal was: let's be proactive, let's try and figure out what problems may confront us in the future and solve them now, before we're dealing with a catastrophic experience.

So then the proposal goes on and points out that robot sales have been on the rise over the past few years, that they've increased year over year. The automotive industry in particular was called out. And it also stated that robots will provide numerous benefits in the short to medium term, and potentially what they called "virtually unbounded prosperity" in the long term. So the idea being, there are some real positive outcomes to employing robots.
However, the flip side of that is those advances might, quote, "result in a large part of the work now done by humans being taken over by robots," end quote. And that would affect not just employment, but systems like social security, which rely upon employment taxes for funding, and other things that are tax-supported. And what happens when a robot fails in some way? What if it causes damage to a person or property? And how will robots interact with our personal data, even, you know, personal data that we haven't necessarily given consent to share? The more intelligent the system, the more proactive it might be in taking that data and using it in some way without us even saying anything about it, especially if you haven't built in specific rules for the robot or artificial intelligence program or whatever to follow so that it doesn't do that.

Now, before I go much further down that particular line of reasoning, I do need to point out that AI and automation and their effect on jobs is still a matter of considerable debate. No one is sure right now to what extent it's going to have an effect on people in general. The worst-case scenario that some people describe is that robots and AI-automated systems are going to replace the vast majority of jobs within a few decades, and that this will happen before we've ever built out any sort of infrastructure that would take care of people, that would help them transition into new jobs that were created as a result of this, jobs that we don't have today because we don't need them, but that we may have a need of once this automated future comes upon us. Or maybe that isn't the case. Maybe we don't create new jobs. Maybe we don't have a system where we can actually take care of people and separate the concepts of making a living and being employed, having some other system there so that people could meet their needs and make ends meet.
How do we do that without having it tied to jobs? But there are other people who think we're not likely to see that huge of a change, at least not in the relatively near future. We're more likely going to see AI and automation take over individual tasks, but not entire jobs. So the future might be one in which our work is augmented by AI, but we are not replaced by AI. The justification for that argument is: look at machine learning as it stands right now. It is very impressive, but it also shows that there are severe limitations in machine learning right now. It's just not sophisticated enough to be able to take over everything that a human can do. It requires a lot of training, and a lot of stuff can go wrong. So it's not likely that we're gonna see computers take over lots and lots and lots of jobs, but they might take over repetitive tasks that your job happens to include, and then you can focus on the stuff that isn't repetitive and predictable. Who's to say who's right? Not me. I don't know. It could be either one of those. But the argument of the proposal I think still holds true, which is that we need to at least consider what the worst-case scenario is and have a plan to alleviate the outcome of that.

Then there's the question of how robots are going to impact stuff like human dignity. So in the future, if we have robots that are acting as caretakers for the sick or the elderly or the young, what impact is that going to have on those people? How is that going to make them feel? What impact will it have on their health? And, hey, what if those robots were to get really effective, really smart, like maybe smarter-than-humans smart? So again, not necessarily intelligent in the same way humans are, but better able to process certain types of information than humans are. The proposal, in its introduction, raises that question as well.
Could robots actually represent a danger to the human species? It's something that we have to consider before it becomes a reality. Now, with all of this in mind, says the introduction, the EU should get off its butt and start talking about those ideas and work on a strategy to avoid problems in the future. And the proposal would then go on to have a few suggestions of its own. I'll get into those in just a second, but first let's take a quick break to thank our sponsor.

So the proposal has a Solutions and Suggestions section, and under general principles, the report would cite Asimov's laws of robotics and would say that designers, producers, and operators of robots should keep these in mind, which is amusing to me. I mean, Isaac Asimov was a famed science fiction and speculative fiction author, wrote some amazing stuff, and those laws of robotics have become sort of iconic in artificial intelligence. The original three laws of robotics are probably the best known. So law number one is: a robot may not harm a human or, through inaction, allow a human to come to harm. Law two is: a robot has to obey any order given to it by a human unless it would violate the first law. So you could tell a robot, "Hey, pick that up for me," and it would have to do it. But you couldn't tell a robot, "Hey, go push that guy into traffic," because that would violate the first law. The third law was: a robot would have to protect itself from harm unless doing so would conflict with either of the first two laws. So say a robot were to see that a car was coming down the street and was gonna hit a little old lady, and the robot would be able to push the little old lady out of the way, but as a result, it was going to get hit by this car.
The robot would have to do it, because even though it has a law stating it has to protect itself, that gets superseded by law number one, which says it cannot, through inaction, allow a human to come to harm. It would have to take action in that case. Then there's a fourth law, called law zero, that was considered to be the top of this ladder, the most important of all the laws, which is: a robot may not harm humanity or, by inaction, allow humanity to come to harm. So not just an individual, but humanity as a whole.
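None of that hierarchy is spelled out as code anywhere, of course, but just to make its structure concrete: you can think of the laws as a strict priority ordering, where a lower-numbered law always wins. Here's a toy sketch; the Scenario fields and the whole decision routine are my own invention for illustration, not anything from Asimov or from the EU proposal.

```python
# Toy illustration of Asimov's three laws as a strict priority
# ordering. Everything here is invented for illustration; law zero
# would sit above law one and apply to humanity as a whole.
from dataclasses import dataclass

@dataclass
class Scenario:
    action_harms_human: bool    # would acting harm a human?
    inaction_harms_human: bool  # would doing nothing harm a human?
    ordered_by_human: bool      # did a human order the action?
    action_harms_robot: bool    # would acting damage the robot?

def may_act(s: Scenario) -> bool:
    # Law 1: never harm a human; never allow harm through inaction.
    if s.action_harms_human:
        return False
    if s.inaction_harms_human:
        return True   # must act, even if Laws 2 and 3 say otherwise
    # Law 2: obey human orders, unless that conflicts with Law 1.
    if s.ordered_by_human:
        return True
    # Law 3: protect itself, unless that conflicts with Laws 1 or 2.
    return not s.action_harms_robot

# The little-old-lady example: the robot must push her clear (Law 1)
# even though it gets hit by the car itself (Law 3 is overridden).
print(may_act(Scenario(False, True, False, True)))  # True
```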
Under the liability section of the report, it suggests that it will not be long before the European Union either needs to classify robots under a category such as persons or to create a brand-new category just for robots. AI and machine learning are really changing how robots interact with environments. Before, you would program all the ways that a robot would interact with its environment, and mostly you would try to control the types of environments your robot was going to be in. You know, the more predictable the environment and the more stationary the robot, the easier it was to program. So if you've got a robot, like a giant robotic arm, that's doing welding on a car manufacturing line, that robot is going to be stationary. It doesn't move around. It stays in one spot along the assembly line. The car comes to it, it does its work, the next car comes to it, and it continues. That is one way of programming a robot, and it's easy, comparatively speaking. But these days we have robots that use machine learning, robots that encounter situations, process information, and come up with a conclusion. It might be a way to act, it might be, you know, a specific request it makes, whatever it may be, it has to come to that conclusion on its own, which means that we don't know how robots are always going to react to their environments.

Environments can be very chaotic things with lots of different variables. And while you might program a robot so it will behave in a very predictable way in certain situations, you're not gonna be able to predict every possible situation in every possible environment. So because of that, and because we're using machine learning in more applications, this report was saying, we need to keep that in mind. And while we wouldn't necessarily call even the most advanced AI conscious or self-aware (it's not, or at least it doesn't appear to be), we are seeing more applications that allow machines to learn from their environments and adapt their approaches to complete certain tasks, and that is something we need to keep an eye on.

Now, right now, you could argue that if a robot malfunctions, the company that made the robot should be held liable, or the programmer who programmed in either the routine the robot was following or the basic artificial intelligence that the robot would use as a guideline. However, as robots depend more and more upon machine learning to interact with their environments, this gets really murky, because again, those environments are filled with unpredictable variables, and the robot might, quote-unquote, "learn" to do something in a way that's harmful, or inefficient, or ineffective. This doesn't necessarily mean it would cause harm to people. It might just mess things up in, like, an industrial environment and cause financial harm. So the proposal argues that as machines become more advanced, it makes less sense to blame the manufacturer. It's less effective. And there might need to be new rules and definitions for liability that would hold the machines themselves responsible. So the proposal, quote, "considers that a system of registration of advanced robots should be introduced," end quote.
That would involve creating the criteria to decide which robots would be required to register. And it would also call for more funding from the European Union for research projects, particularly those that involve the social and ethical challenges raised by advances in robotics, and say that the EU should create a quote "legislative instrument on legal questions related to the development of robotics and AI" end quote that would look ahead ten to fifteen years, which is super hard to do. It's really hard to predict what will happen in technology in five years, let alone ten or fifteen. But they're saying you should try and consider as many different possible situations as you can.

So why are they saying this? Well, if I create a machine learning program and I have a robot that follows it, and the robot is put into an environment that's got a lot of these variables, I may not know how my robot's going to react in every single situation. The more advanced the robot is, the less certain I can be of exactly what it's going to do in any given scenario. I might feel like the guidelines I've created are enough to keep the robot out of trouble, but the world is a chaotic place. And we've talked about this with autonomous cars. I'm actually going to talk about it more. I plan on doing a whole suite of episodes about the history and evolution and challenges of autonomous cars in the near future. That's gonna be a multi-part series. But with autonomous cars, we know that there are situations that can end in accidents, even fatal ones. We've seen examples of that. And this raises questions of what, ultimately, we should hold accountable. Who is responsible for this? With driving scenarios, we can predict maybe, let's say, ninety percent of the different scenarios you would encounter on a typical drive, but that means there's ten percent of stuff out there that happens that's just not normal.
It's not something you would typically encounter. And these outlier scenarios pose a problem, because you cannot anticipate every single one and program into your machine, "when this happens, make sure you do this other thing." The machine will encounter scenarios in which it has to make a decision for itself, and it's at that point that we don't know where responsibility falls, where the accountability falls, because the programmer could not possibly have anticipated this. So is it really fair to hold them at fault? The manufacturer may have made it exactly the way it was supposed to be made, with no faults in the manufacturing process, so are they at fault? The owner of the robot put it in whatever situation it found itself in in the first place, so maybe they're at fault. Or maybe the robot itself is at fault.

So this proposal goes on to call for a legal solution that doesn't restrict the type or extent of damages a person can seek based solely on the fact that the damage was caused by a non-human agent. So, in other words, a court should not be allowed to say, "Well, we can only award you this amount. We can't give you any more than this, because the thing that caused harm to you was a robot, not a person. If it were a person, we would be able to award you more money." The proposal says we want to make sure that does not become the case. And like I said, we're already seeing this in cases with autonomous cars, where a company can say, "Well, you know, it was not the fault of the programmer, it was not the fault of the company. It was a situation in which the car itself made that decision. The car is at fault, not us." That is a problematic thing.

One of the things that the report suggested was that the producers of a robot are liable for damage on a level that is proportionate with the amount of instructions the producers gave to the robot.
So the more locked down the robot, the more responsible the producers are for that robot's behavior. So if you have that stationary robot that is welding on an assembly line, and it malfunctions or starts causing huge amounts of damage with its behavior, perhaps ruining several cars along the assembly line as it's going through this process, you would say, all right, the company that made this robot is at fault. Something they did was wrong, because the robot's only following the instructions that this company gave it. It's not making any decisions on its own. But the more the decision-making process is on the robot, the less you can hold the manufacturer accountable, according to this report, which is a pretty radical idea. And the proposal also states that the longer a robot has received quote-unquote "education," the more liable the robot's quote-unquote "teacher" is for any damage the robot causes. Now, that might be the company that made the robot. It might be the person that programmed the robot. It might be the person who purchased the robot and then put it in an environment where it was learning. But it would be whoever was introducing these scenarios to the robot. So that's a really interesting concept. And again, this is just a proposal. It's not like this has been enacted into law.
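The report never puts numbers on any of that, but purely as a toy illustration of the proportionality idea, splitting liability between the producer and whoever did the quote-unquote "teaching," you could imagine something like the sketch below. The weighting scheme and every input are my own made-up assumptions; the report names no formula.

```python
# Toy model of the proposal's proportionality idea: behavior that
# comes from fixed factory instructions points at the producer;
# behavior that comes from post-sale "education" points at the
# teacher. The weighting scheme is invented for illustration only.
def liability_split(instructed_fraction: float,
                    teacher_training_hours: float,
                    producer_training_hours: float) -> dict:
    learned_fraction = 1.0 - instructed_fraction
    total_hours = teacher_training_hours + producer_training_hours
    teacher_share = (teacher_training_hours / total_hours
                     if total_hours > 0 else 0.0)
    return {
        "producer": instructed_fraction + learned_fraction * (1.0 - teacher_share),
        "teacher": learned_fraction * teacher_share,
    }

# A locked-down welding robot: the producer carries all of it.
print(liability_split(0.95, 0, 10))    # producer 1.0, teacher 0.0
# A robot mostly "educated" by its owner after purchase.
print(liability_split(0.20, 900, 100)) # producer ~0.28, teacher ~0.72
```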
They also created a suggestion for an obligatory insurance scheme. This is not that crazy. I mean, if you drive a car in the United States, you have to have insurance. It's a requirement by law. So it'd be similar to that, except in this case, the producers of the robots would pay out the insurance for the robots they were creating. And the report also suggested that there should be, or could be, a compensation fund for the robots, which sounds crazy. Like, one of the things you think about with automation is that, well, it reduces the need to have paid employees. You've got robots; why do you need to pay them?

Well, it's not to reward the robot. It's not to make the robot feel like it did a great job. Robots can't feel anything anyway. It's not even to, you know, perpetuate the system of paying for work. It's rather to build a fund, a compensation fund, that could cover the cost of any damages or harm that the robot might create. So you're not paying the robot and putting money into its bank account that it then goes and spends on motor oil or something. You're accumulating money in the event that the robot causes damage, and then you've got money dedicated to that robot that you can use to pay out in the event of damages being caused by that robot. So you're essentially paying a robot so that in case it goes haywire and starts, I don't know, slapping people around, you can actually cover all those damages.
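Just to make that mechanism concrete, here's a little sketch of what a per-robot fund might look like. This is my own toy illustration of the idea; the report doesn't specify contribution rates, interfaces, or anything of the sort.

```python
# Toy sketch of the per-robot compensation fund idea: money accrues
# against the robot and pays out if it causes damage. The levy
# amounts and the whole interface are invented for illustration.
class RobotCompensationFund:
    def __init__(self, robot_id: str):
        self.robot_id = robot_id
        self.balance = 0.0

    def contribute(self, amount: float) -> None:
        # e.g., the operator pays in a small levy per working hour
        self.balance += amount

    def pay_claim(self, damages: float) -> float:
        # Pay out as much of the claim as the fund can cover.
        paid = min(damages, self.balance)
        self.balance -= paid
        return paid

fund = RobotCompensationFund("welder-042")
for _ in range(1000):          # 1,000 working hours...
    fund.contribute(2.50)      # ...at a 2.50-per-hour levy
print(fund.pay_claim(1800.0))  # robot ruins a car: claim fully paid
print(fund.balance)            # 700.0 left in the fund
```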
The report also calls for specific legal status to apply to robots, thus creating a status that's equivalent to, like, electronic persons that have specific rights and obligations, including that of making good any damage they may cause. And a lot of people say that this is confusing: why would you grant this personhood to robots? A frequent response to that, and I think it's got some validity to it: we already do it with businesses. We already grant the concept of personhood to corporations. Corporations can behave in legal matters as if they are people, human-being people. And if a corporation, a business, a collection of workers, can do it, then why not a robot?

Then there's a section on the design of an ethical framework, to make certain that advances made in robotics are made with consideration to how they impact human safety, privacy, dignity, and who owns information. This proposal goes on to say that the risk of harm should be no greater than what's encountered in ordinary life. So, in other words, a future filled with robots should pose no more risk to a person than that person would encounter today in ordinary circumstances. Now, to do all of that, the proposal suggests creating a new European Agency for Robotics and Artificial Intelligence, funding that agency, and staffing it with technical experts as well as leaders in ethical and regulatory fields to start really sussing this stuff out.

The other sections in the report cover concepts like intellectual property. How do you protect and encourage innovation? How do you apply new criteria to copyrightable works that are produced by computers or robots? You may have listened to my recent episode about the artificial intelligence that created a painting. Who owns that painting? Does the machine own that painting? We don't have rules for this, and that's what this report is arguing: as we create more machines and systems that can produce stuff that we would normally protect with something like a copyright, what do we do? Who owns that? Who should that go to?

They also argue for standardization, the idea of making sure that these robots are all communicating with their various systems in a standardized way, so that when you move from one part of the EU to another, you don't have conflicting communications standards or data transfer standards. You want everything to work within the same umbrella of standards, sort of avoiding the problem that people often have, which is that you go to a different country and everyone there has the audacity to talk in a different language than you do, which means you have to speak really slow and loud if you're an American. That's more of a joke about Americans being rude in foreign nations. But you don't want that to happen if you can prevent it, if you can create the standards in the first place.
They also call for more standards to allow for the testing of driverless cars and other autonomous vehicles, rather than just this fragmented approach that we're seeing. There was also a section on care robots and medical robots, a general call to develop those robots with their human impact in mind, and also the concept of human repair and enhancement, which gets into this idea of using robotics either to heal injuries for people or maybe even to replace body parts, to make people cyborgs. That's kind of a far-off sort of thing that we could think about. It may never happen, but the report actually calls for a committee on robot ethics in hospitals and other healthcare institutions to kind of develop ethical guidelines for how that might be used. And then there's a short section about drones, which are becoming more and more important. I've got more to say about this, but first let's take another quick break to thank our sponsor.

Now, according to one forecast that the report cites, the European Union could face a shortage of nearly a million information and communications technology professionals in the near future. And on top of that, it says that a big percentage of all jobs will require some level of digital skills moving forward. So it calls for a revision of the digital competence framework to correct for that, and it also calls for designing programs to encourage people who typically don't go into these fields to pursue them, specifically young women, to get young women into the field of robotics, and says that the European Union and member states should quote "launch initiatives in order to support women in ICT" (that's information and communications technology) "and to boost their e-skills" end quote.
They also call for a system to monitor job trends, to see where jobs are disappearing due to automation and where jobs are being created because of robotics, and to stay ahead of that, to say, well, we see where things are going, so we know what to stress to people who are in school, like what areas of opportunity are there. That way people will go into those areas, one, to meet the demand that's going to be created, and two, to avoid going into fields that would largely be obsolete, meaning they would have difficulty finding work after they came out of the education system. And maybe the EU should introduce corporate reporting requirements on the extent and proportion of the contribution of robotics and AI to the economic results of a company, for the purpose of taxation and social security contributions. So not only does this report suggest that employers could quote-unquote "pay" robots, although again, that would be to put money aside in the event that something were to go catastrophically wrong with that robot, but the robots would also have to pay into the social security system, or rather, the companies employing those robots would need to. And the report is saying that if it is a fact that overall we're going to see a decrease in employment because of automation, that would be really bad. It would have a domino effect, a ripple effect, on stuff like social security, because it depends on employment taxes, and without those taxes, systems like social security would lose funding, and people would be put in hardship because of that. So there would need to be some sort of tax on robots to help compensate for this, to level out the fact that there would not be these employment taxes that employees would typically be paying out of their salaries.
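The report doesn't attach numbers to any of this, but a quick bit of made-up arithmetic shows the kind of funding gap it's worried about. Every figure below is my own invented assumption, purely for illustration.

```python
# Made-up arithmetic for the social-security gap the report worries
# about. Every figure here is an invented assumption.
workers_replaced = 200      # jobs automated away at one hypothetical firm
avg_salary = 40_000.0       # per worker, per year
payroll_tax_rate = 0.15     # share of salary that funded social security

lost_contributions = workers_replaced * avg_salary * payroll_tax_rate
print(f"Lost social-security funding: {lost_contributions:,.0f} per year")
# -> 1,200,000 per year that some kind of robot levy would have to offset
```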
So the proposal also suggests that all members of the EU consider a general basic income as a possibility, in case this wave of automation does take large effect, because otherwise you could have a population that's largely out of work and has no way of making a living. A general basic income, like a guaranteed basic income, would be an amount of money that the government would pay out to each and every citizen to cover basic needs. It would not prevent people from going out and getting a job and earning more income; they could do that if they wanted to live above the basic line that had been set by whatever the basic income amount was. But the idea would be that the basic income would cover your most important needs, like a place to sleep, food, that kind of stuff.

The proposal also lays out some guidelines for licenses: one set that would be meant for designers of these robots, one set that would be meant for the users of the robots. They have quite a few interesting examples in the report, but some of the fun ones: under designers, there's "integrate opt-out mechanisms," like kill switches, "consistent with design objectives." Make sure your robot is going to operate in a legal way, so, you know, don't go making a gangster bot. That would be dumb. Be transparent in the way that the robot is programmed, as well as the predictability of its robotic behavior, so people know what to expect when they use it. Develop tracing tools during the development stages, so that when a robot behaves a particular way, it can be traced back to the design of the robot. Doing that would help other designers either incorporate good design elements into their approach or avoid designs that lead to, you know, the crazy Viking robots that go on rampages in Northern Europe.
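A minimal version of that tracing idea (my own sketch, not anything the report specifies) could be as simple as logging every decision with enough context to replay it later:

```python
# Minimal sketch of the "tracing tools" idea: record every decision
# with enough context to trace behavior back to the design later.
# The record fields and file format are invented for illustration.
import json
import time

def log_decision(robot_id: str, model_version: str,
                 inputs: dict, decision: str,
                 path: str = "decisions.log") -> None:
    record = {
        "ts": time.time(),       # when the decision was made
        "robot": robot_id,       # which unit made it
        "model": model_version,  # which design/model produced it
        "inputs": inputs,        # what the robot saw
        "decision": decision,    # what it chose to do
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_decision("welder-042", "v1.3.7",
             {"seam_offset_mm": 2.4, "temp_c": 311},
             "pause_and_alert")
```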
The designer guidelines also say to make sure that the robots are identifiable as robots. That was a big one; they said that should be part of it. People should know, when they're looking at a robot, that it is in fact a robot. Then for users, some of the things that were in the licenses included: respect human physical and emotional frailty, so you should not have, you know, robots employed as strikebreakers. You should respect other people's privacy, which includes turning off a robot's video recording equipment if the situation warrants it. And do not weaponize robots, which seems pretty, you know, straightforward.

Now, this was just a proposal, and it's still something that is being debated in the European Union. It has undergone rewrites and tweaks since it was first proposed a couple of years ago, and in the EU there are still discussions happening regularly about what, if anything, the EU should do about the development of machine learning, artificial intelligence, and automation, and what the best courses of action are. I think the report was a particularly interesting proposal. I think it had a lot of very ambitious parts to it. I don't know that all of them were even remotely realistic. Some of them, certainly, but I don't know about all of them. But I thought that the most important thing was that it would get people talking. The problem is, we're still talking, because it's a complicated thing to think about, and it requires a lot of subtle decision-making that's just not easy to do. It requires very careful consideration, and things are changing so quickly that it can be difficult to even get a handle on what's happening right now, let alone figure out what might happen in ten years. That does not change the fact that we need to consider these things and come up with some solutions. They might not require granting personhood to robots.
That might be a step too far. But it would be really good to get a stronger handle on what is coming down the pipeline, so that we're not blindsided by it and we can limit the negative impact, if any, that it would have on people in general. And I just think it's fascinating, this idea that we are really having these conversations about how we go forward as automation and AI and machine learning become more and more a part of our lives, even if it's not in ways we directly observe on a day-to-day basis.

That wraps up this episode. Like I said, pretty soon I'm gonna do a suite on autonomous cars and talk about their development and the technology behind them, and the ethical issues there. We'll get into the trolley problem. It's one of my favorite logical problems, or ethical problems, to discuss. I talked about it a little bit recently. We'll get into more detail about that, because people are having to talk about it now, and it's potentially scary stuff but also really fascinating. If you guys have any suggestions for future episodes of TechStuff, maybe there's a technology you want me to cover, a company, maybe there's someone you would like me to talk about, go on over to techstuffpodcast.com. That's the website. You'll find all the ways to contact me there. I look forward to hearing from you. Also, make sure you head over to teepublic.com/techstuff. That's our merchandise store. You're gonna find all sorts of TechStuff merch over there. There's cool things to check out. We're gonna be adding some more designs in the very near future, and every purchase you make goes to help the show, so we greatly appreciate you supporting us. It's fantastic, and I will talk to you again really soon.

For more on this and thousands of other topics, visit howstuffworks.com.