Speaker 1: Get in touch with technology with TechStuff from HowStuffWorks.com.

Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer at HowStuffWorks and I love all things tech, and welcome to episode nine hundred ninety nine of TechStuff. Yep, we're gonna have a really big shindig for the next one. And by shindig, I mean I'm going to record another show. But today we're going to talk about the state of artificial intelligence, because in late June, Nathan Benaich and Ian Hogarth presented a report titled the State of AI, giving an update on the advancements that have happened in the field of artificial intelligence over the course of the last twelve months, so the summer of seventeen to the summer of eighteen, essentially. So I thought it might be interesting to talk about what they found as they researched the topic.

But first, let's define a few terms, because artificial intelligence is one of those categories of topics that tends to encompass a lot of different ideas, and unless you define what you're talking about early on, you could have two people talking about different aspects of AI who don't realize they're talking about different aspects, so they think they're disagreeing when in reality they're actually agreeing. It's just that they haven't defined their terms. So we're gonna do that first.

The term artificial intelligence dates to nineteen fifty five. John McCarthy, who was a computer scientist, coined the phrase while developing a plan for a conference that took place at Dartmouth. The conference was supposed to happen in nineteen fifty six, so he came up with the term artificial intelligence about a year before it was to take place. And his was a definition of general intelligence, which means it would be a way to allow machines to reason, engage in abstract thought, do some problem solving, and also to pursue self improvement.
And I think a lot of people still think of this when they hear the phrase artificial intelligence: a computer that can process information in a way that is at least comparable, if not identical, to the way we humans process information. With the concept of self improvement included, there's at least an implication that such a machine would possess some degree of self awareness. It would have to know, at the very least, that it needed to improve upon something. It may not really have a full sense of self; that's not necessarily required in order to have self improvement, so there would just be a degree of it. However, it could go as far as a machine that actually knows that it exists within a world, that it has a sense of self. That could be one potential way to interpret this concept.

But as time has gone on, we have needed to narrow down various definitions of artificial intelligence, because intelligence itself, human intelligence, is a very broad term. It encompasses many different things. It takes more than creating a machine that can process information at a faster rate than humans. We wouldn't say that a calculator is more intelligent than a person just because a calculator can perform a complex mathematical calculation in a fraction of the time your average person could, right? Because your average person can still do tons of stuff that a calculator can't do, so we wouldn't call the calculator intelligent. It takes more than just processing information at a very fast rate, so we have to go beyond just efficiency if we want to talk about intelligence. The Encyclopedia Britannica defines AI as the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. This isn't a bad definition, but it does require you to take another step back to consider what sort of actions would be considered intelligent versus those that are more instinctive.
So, for example, if a fly sees that there is a hand coming toward it to squish it and it flies away, is that instinctual or is that intelligent? Well, you could argue it's more instinct; it's not a sign of intelligence. There have been a lot of experiments with insects in general showing that behaviors that appear to be intelligent, because they involve complex sequences, turn out to be instinctual, because if you start messing with the insects, they will just repeat the exact same sequence over and over again. They don't have a way of retaining the information that just happened in order to build upon it, which is all part of learning. Right? As human beings, when we encounter new information, we can incorporate that into our knowledge and then we can build upon that in future encounters with similar scenarios. We can even generalize from that situation and apply that knowledge to similar but different scenarios. This is something that differentiates intelligence from mere instinct.

AI is also a multidisciplinary field. It's not just one area of study. It's, in fact, lots of different areas of study, everything from computer science to biology, to neuroscience, to psychology, to engineering and more. Tons of different areas of study all go into AI, and there are also a lot of different ways that we can categorize AI. One way is that we could just divide AI into two very large categories: weak AI and strong AI. John Searle proposed this way of defining AI back in 1980. In fact, he argued that weak AI is probably the best we'll ever do, that we'll never have strong AI. Weak AI refers to a simulation of human thought. That is, you could have a machine that appears to think like a person, but that's just an appearance. In reality, once you strip away all the different layers, it turns out it's just a simulation. The computer is not, quote unquote, actually thinking. It's mimicking the way we think, and it tends to do so in a relatively narrow band of applications.
It may perform those applications very well, it might even do so better than a human can, but it cannot act outside of that narrow band, or if it does attempt to act outside of it, it doesn't do so very well. So an example of this might be IBM's Deep Blue, which was the computer they designed to play against chess grandmasters, and it plays chess really, really well, well enough to beat grandmasters at chess. But you couldn't then tell it to sort a complex series of problems so that you could tackle them properly, or ask it what the weather is going to be like three days from now. You couldn't set it to other tasks. It was programmed for a very specific application, and you could not just leverage that quote unquote intelligence for something else. Whereas with a person, you can have a person tackle all sorts of different problems, and even if the person has never encountered that problem before, he or she can take the information, the knowledge that they have accumulated throughout their experiences, and attempt to apply it as best they are able to the new task. They might not be very good at the new task, but they can at least try, based upon the information and the knowledge that they have gained in their past experiences. It doesn't matter how many games of chess Deep Blue plays, it's still not going to be good at doing other tasks.

Strong AI would refer to an artificial intelligence that could actually think in a way that's at least analogous to the way we humans think, even if it uses a different methodology to do so, and it could apply that intelligence to any situation, not just a narrow set of situations. And that does not mean that it would immediately be great at everything. This isn't a definition of a superintelligent computer. It might also be pretty crappy at brand new tasks, but it can learn over time and improve over time, self improvement being an important concept in intelligence.
So it can learn from mistakes and even generalize what it's learned into new, unrelated situations, so it would not be bound by a narrow set of applications the way a weak AI would. Searle, by the way, is also the philosopher who proposed the Chinese room thought experiment to argue against strong AI. I've talked about the Chinese room thought experiment in other episodes. Basically, it's a thought experiment where you imagine that you are in a room, and the room only has a door, and the door has a little slot in it, and occasionally a piece of paper is shoved through the slot. You get the piece of paper, and something is written on it in an alphabet that you don't understand. You have no knowledge of this alphabet; it just looks like squiggles to you. But you have an enormous book, and that enormous book has a list of all these different pages of squiggles. So what you do is you consult the enormous book, you look for a page of squiggles that's identical to the one that was sent to you, and then you follow the instructions for what to do when you get a page with those squiggles. You follow the instructions, you put the output through the slot in the door, and things continue. Searle argued this is essentially the same way that computers process information. They don't understand the information that's coming in. They look at the information, they look for a match of that information against what they are programmed to do, and then they respond with the appropriate response. But they don't understand the process. You would say the same thing is true of the Chinese room thought experiment: if you slide in a piece of paper written in Chinese, and the person inside does not understand Chinese, they see the piece of paper, they have the book, they write the response, and they send it back through. From an external observer's perspective, it would appear that whoever is inside the room understands Chinese.
But the truth of the matter is you don't understand Chinese. You're just following the instructions. Now, there have been a lot of responses to this particular thought experiment, but that's another episode, so I'm not gonna go into it here; it's a very interesting philosophical notion, though.

There is an assistant professor of integrative biology and computer science and engineering at Michigan State University named Arend Hintze who lays out four types of AI in an article that he wrote for TheConversation.com. His four types of AI start with type one, which are reactive machines. These base all operations on the current state of any given situation, but they cannot refer back to past events. As Hintze points out in this piece, Deep Blue, which I was mentioning earlier, falls into this type one category. When Deep Blue played chess, it wasn't tracking moves. It wasn't saying, all right, over the last five moves my opponent did such and such, so based on that I suspect I'm learning more about his style of play. Deep Blue would look at the state of the board, where all the pieces were, and it would do this as if it were the first time Deep Blue had ever seen the chessboard. It's not referring to the previous moves. It's just looking at what is on the board at that moment, and then it starts to evaluate all the potential moves it can make, all the potential moves its opponent can make, and then chooses a move among that set. The next time it's Deep Blue's turn, it does it all over again. It is not referring to its past experience. It's only looking at the current state of the board. So again, it can't analyze player behavior leading up to the turn and then base the decision off of that. Deep Blue played chess as if, on every single turn, it was the first time it had ever had a chance to look at that board.
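To make the reactive machine idea concrete, here is a minimal sketch of a "type one" agent in Python. This is not Deep Blue's actual algorithm (Deep Blue used massive search over future moves); the piece values and the tiny four-square "board" are invented for illustration. The point is simply that the decision is made purely from the state handed to it right now, with nothing remembered between turns.

```python
# A toy "type one" reactive agent: it scores only the current board state
# and keeps no memory of previous turns. Illustrative sketch only, not
# Deep Blue's actual chess-playing method.

from typing import List, Tuple

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def evaluate(board: List[str]) -> int:
    """Score a position: the agent's material (uppercase) minus the opponent's (lowercase)."""
    score = 0
    for square in board:
        if square == ".":
            continue
        value = PIECE_VALUES[square.upper()]
        score += value if square.isupper() else -value
    return score

def choose_move(board: List[str], legal_moves: List[Tuple[str, List[str]]]) -> str:
    """Pick the move whose resulting position scores best.
    The decision depends only on the positions passed in right now; nothing is stored."""
    best_move, best_score = None, float("-inf")
    for name, resulting_board in legal_moves:
        score = evaluate(resulting_board)
        if score > best_score:
            best_move, best_score = name, score
    return best_move

# A tiny made-up example: capturing the pawn scores higher than the quiet move.
board_after_capture = ["Q", ".", "k", "."]
board_after_quiet = ["Q", ".", "k", "p"]
print(choose_move(["Q", "p", "k", "."],
                  [("capture pawn", board_after_capture),
                   ("quiet move", board_after_quiet)]))  # -> capture pawn
```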
Rodney Brooks, an AI researcher, argued that this is really the only type of AI we should try to build, because any other AI would require some sort of internalized concept of the world. It would need to have some form of representation of the world in its, for lack of a better word, mind, in order for it to be able to react off of that. And because we humans are the people programming these machines, we would have to program that representation of the world, and no matter how carefully we do that, it is never going to be as good a representation as the world actually is. Right? It's only going to be a weak simulation of what we see the world to be, and therefore any decisions such a machine makes based off of this imperfect representation of the world will in turn be imperfect. Moreover, if I were to program such a computer, I'm doing so based off my own concept of what the world is. But my concept of the world is different from someone else's concept of the world. Someone who comes from a very different background, with a very different set of experiences, might have a radically different perception of what the world is. And if I were to design a machine off my perceptions, it might behave in a way that is completely alien to this other person, perhaps in a way that is harmful to this other person, and we'll get more into that later. That falls into the realm of bias. So there are some who argue that we shouldn't even try to go beyond type one because of the potential problems we could encounter.

Type two AI possesses some limited ability to remember. This is sort of like short term memory for humans, though perhaps more transient than short term memory. The memories never get converted into long term storage for these machines, but rather they serve to help a machine make immediate decisions. Hintze points to self driving cars as an example of this type of AI.
Hintze says that, well, they have to identify and monitor other elements on the road that are constantly changing, such as other vehicles. They have to be able to tell how fast another vehicle is traveling, how close it is to your vehicle, what direction it is traveling in. Like, if you're going down the highway and there are other cars on the highway, your car needs to know how many there are, where they are in relation to you, and how fast they're traveling. This is all ongoing information. The alternative would be for your AI to look at the world as a series of snapshots, right? But snapshots don't tell you things like speed. They would tell you the thing was here, now it's here. You could interpret that as speed if you knew how long it was between snapshots, but that's a lot of unnecessary processing power. It makes more sense to design AI that has the ability to at least hold information in the short term, to understand things like velocity. So it's a slightly different version of AI, slightly more sophisticated than type one.

But you don't have to worry about other stuff, right? You don't have to worry about anything outside of whatever the AI's purpose is. A self driving car, for example, doesn't need to know how expensive a jug of milk is. It doesn't need to know any of that. It just needs to know the rules of the road, it needs to know how to identify the places where a car can go versus where a car should not go, and it has to be able to identify pedestrians, those sorts of things. But outside of that, the car doesn't need to worry about it, so it still has a fairly narrow set of parameters that it follows. Its worldview, in other words, is constrained, so you don't have to worry about creating a perfect representation of the world. You just have to create as perfect a representation of the specific part of the world the AI will inhabit as you possibly can, to make sure it behaves properly.
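Here's a minimal sketch of that limited-memory idea, with a made-up tracker class and made-up observation numbers; real self-driving systems use far more elaborate sensor fusion and tracking. The point is just that a short buffer of recent snapshots is enough to derive something like speed, without anything being kept long term.

```python
# A toy "type two" limited-memory tracker: it keeps a short buffer of recent
# observations of another vehicle so it can estimate speed. Nothing is ever
# written to long-term storage; old snapshots simply fall off the end.

from collections import deque

class NearbyVehicleTracker:
    def __init__(self, history_size: int = 5):
        # (timestamp_seconds, position_meters) pairs; deque discards old entries automatically
        self.history = deque(maxlen=history_size)

    def observe(self, timestamp: float, position: float) -> None:
        self.history.append((timestamp, position))

    def estimated_speed(self) -> float:
        """Estimate speed (m/s) from the oldest and newest retained snapshots."""
        if len(self.history) < 2:
            return 0.0
        (t0, p0), (t1, p1) = self.history[0], self.history[-1]
        return (p1 - p0) / (t1 - t0)

tracker = NearbyVehicleTracker()
tracker.observe(0.0, 10.0)
tracker.observe(0.5, 25.0)
print(tracker.estimated_speed())  # 30.0 m/s, derived only from recent observations
```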
A type three AI would not only have an internal representation of the world, but also a concept of other entities that are within that world. It doesn't just note the presence of other things within its environment; it recognizes which of those things have agency. Humans possess agency. We understand our own faculties, and we can recognize that others possess similar abilities. If you and I were to have a conversation, we would do so knowing that the other person possesses at least some of the same abilities that we ourselves do, right? So you would know that I have motivations, that I have needs and wants. You would know this, and I would know the same about you. We might not know what they all are, but we recognize that the other person has them. A type three AI would be able to recognize this in other entities. It would not itself necessarily possess needs, wants, anything like that, but it would recognize that other entities do have those things. So it is not self aware, but it is aware of others.

If we just assumed that we were the only ones who possess these faculties, then any conversation we would ever have with anyone else would be akin to speaking to ourselves, because we would assume other people don't have those faculties, that they don't possess the intelligence that we have. That would mean that every single episode I did of TechStuff would essentially just turn into tacos and Lord of the Rings, because that's all I would care to talk about. But I assume you guys want to hear more than that. Sorry, unless you do want to hear about tacos and Lord of the Rings, or at least one or the other. I don't know which. I'm gonna guess tacos if I have to guess. So this comes to the theory of mind, which is also pretty close to what Alan Turing was talking about with the Turing test.
If we were to create a machine that could reliably mimic a human well enough that your average person couldn't tell whether the responses they were getting were from a person or from a computer, you would say that computer passes the Turing test. And Turing would say you might as well grant that the computer possesses intelligence, because you would do the same thing for another human being. Right? If you talk to another human being and the human talks to you like a human being, you say, oh, this person has some of the same basic aspects of humanity that I have, like intelligence and motivations and needs and wants. Turing said you might as well extend that to computers if they're able to mimic human interaction closely enough that you could not tell whether it was a human or a computer. Even if the computer doesn't possess those things, we might as well assume it does, because we give that same consideration to other people.

However, Hintze would say that would really only apply to type four AI. Those are the types of artificial intelligence that have self awareness. This is an AI that not only recognizes there are other entities out there, but understands that it itself is an entity possessing intelligence. An AI in type three would recognize that humans have thoughts, feelings, and motivations, but an AI in type four would have those of its own. It would have its own motivations, its own needs, its own wants, its own loves and hates, although Hintze may not define it in quite those terms. It would not only be able to recognize motivations, but also understand motivations. It could put itself in the place of another entity and say, this other entity wants to do X because that entity is Y. So it might be, Jonathan wants to kick open the door because he's hungry for tacos, and the computer would be able to understand this concept, even though it may not ever want to eat a taco itself. I'd consider that an imperfect machine. So that is where we are.
Those are the definitions of artificial intelligence, so that you kind of understand the philosophical approach to what we consider AI. When we come back, we'll talk more about the actual report and what the two researchers found as they were looking into the advances that have been made in AI over the past twelve months. But first let's take a quick break and thank our sponsor.

Much of the work in artificial intelligence that I've been following has fallen firmly into the type one category I mentioned before the break. And that's not to say that the work is boring or that it's not useful. The technology needed for type one AI is incredibly sophisticated. It involves not just developing sensors for a machine to be able to observe its environment, but also the various programs and algorithms necessary to process information in a meaningful way so the machine can react the way we want it to react. Stuff like image recognition, voice recognition, depth sensing, all that kind of stuff sort of falls into that category, although these can also be incorporated into higher categories of artificial intelligence; they're essentially building blocks.

One of the first topics from the report focuses on machine learning and a concept called transfer learning, so we get to talk about what that means. Machine learning is an approach that involves a computer examining data, learning from that data, and then using what has been learned for future decisions. So, for example, let's take Amazon's shopping suggestions. When you buy something off Amazon, you'll see a recommendation for stuff that other people have bought when they were purchasing the same thing you just bought. So Amazon is using machine learning to try and upsell you more stuff. It's like saying, hey, when other people bought that thingamajig, they also got a doohickey. You probably also want a doohickey, because you just bought that thingamajig. That's a simple example of this.
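To make that concrete, here is a minimal sketch of the kind of "people who bought X also bought Y" counting that could sit behind a recommendation feature. Amazon's actual recommender is far more sophisticated; the item names and orders below are invented purely for illustration.

```python
# A minimal sketch of co-purchase counting: learn from past orders which items
# tend to be bought together, then use those counts to suggest items in the future.

from collections import Counter
from itertools import combinations

past_orders = [
    {"thingamajig", "doohickey"},
    {"thingamajig", "doohickey", "widget"},
    {"thingamajig", "gizmo"},
    {"widget", "gizmo"},
]

# "Learn" from historical data: count how often each pair of items appears in the same order.
co_purchases = Counter()
for order in past_orders:
    for a, b in combinations(sorted(order), 2):
        co_purchases[(a, b)] += 1

def recommend(item: str, top_n: int = 2):
    """Suggest the items most often bought alongside the given item."""
    scores = Counter()
    for (a, b), count in co_purchases.items():
        if item == a:
            scores[b] += count
        elif item == b:
            scores[a] += count
    return [other for other, _ in scores.most_common(top_n)]

print(recommend("thingamajig"))  # ['doohickey', 'widget'] with the toy data above
```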
Then you have deep learning, which is kind of a subset of machine learning. Deep learning is sort of a self-correcting branch of machine learning. You train a computer on sets of data, and occasionally you have to step in to make corrections and adjustments to make certain the computer is on the right track, that the computer is not making bad suggestions, which will happen just because it's working from large amounts of data, and sometimes it makes choices that seem logical to the computer but to an outside observer seem wackadoodle crazy. So you have to go in and tweak things. You might train an algorithm, for example, to recognize pictures of coffee mugs, and then occasionally you pop in and see something that isn't a coffee mug that has been mistakenly identified as one. You have to tell the computer, no, computer, that is not a coffee mug, and then it learns from there.

Deep learning depends upon artificial neural networks. These are networks that mimic the way brains process information, and they have algorithms that behave like neurons. Each algorithm processes some information, then assigns a weight to how correct it believes its conclusion to be, from "I'm pretty sure this is right" all the way down to "this might be right, but I don't know." Then it passes the data down to another layer of neurons, which takes that information, processes it another way, passes it on, et cetera, et cetera. And then the system can look at all the different weightings of all the different potential answers and say, out of all the conclusions I've come up with, this one is the one I'm most confident is correct, so that's the answer we're gonna go with. We're not gonna go with any of the others, because they are statistically less likely to be the correct answer.
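Here's a tiny, hand-wired sketch of that forward pass through layers, with made-up weights and labels. Real deep learning frameworks such as TensorFlow or PyTorch learn these weights from large training sets; this only illustrates layers passing weighted signals along and the system going with its highest-confidence answer.

```python
# A toy feedforward pass through two layers of "neurons", with hand-picked weights.
# Each layer transforms the data and passes it on; softmax turns the final scores
# into confidence values, and the system picks the answer it is most confident about.

import numpy as np

def relu(x):
    return np.maximum(0, x)

def softmax(x):
    # Turn raw scores into confidences that sum to 1.
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Weights are invented for illustration; normally they are learned during training.
hidden_weights = np.array([[0.8, -0.4], [0.3, 0.9], [-0.5, 0.2]])   # 3 inputs -> 2 hidden neurons
output_weights = np.array([[1.2, -0.7, 0.1], [-0.3, 0.8, 0.4]])     # 2 hidden -> 3 possible answers
labels = ["coffee mug", "teapot", "soup bowl"]

features = np.array([0.9, 0.2, 0.4])            # some measurements describing an image
hidden = relu(features @ hidden_weights)        # first layer processes and passes data on
confidences = softmax(hidden @ output_weights)  # second layer scores each possible answer

for label, confidence in zip(labels, confidences):
    print(f"{label}: {confidence:.2f}")
print("Best guess:", labels[int(np.argmax(confidences))])  # go with the highest-confidence answer
```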
So the key here is that you have to train a deep learning network on a really large amount of data so that you can really get it to grasp the concept, and you also have to make sure you tweak the weighting situation, the way it weights how confident it is in an answer, in such a way that it filters out bad conclusions early on. It's a little different: you're actually tweaking its decision making process, as opposed to looking at the decisions once they've already been made. And it gets really, really tricky, but we're gonna leave it at that. You have to very gently guide the decision making process and then let it go on its own, and it ultimately starts to produce the best decisions, if you've designed the system properly. So these are all non-trivial problems, but they are surmountable. We do have deep learning systems out there now.

Transfer learning is where you train a machine to do one thing and then transfer that learning to a new task. It might be semi-related, or it might be completely unrelated to the original task. You can reapply the learning model you developed for the first task, and this cuts down the time it would take to train algorithms to do something new. Moreover, computer models that have been trained on different problems will begin to build a richer representation of the world, which, as I mentioned earlier, is necessary for the higher forms of artificial intelligence. The report gives an example in Google's Inception V3 network, which was trained on image recognition and then retrained on recognizing skin diseases. The result was that the AI could actually outperform twenty-one Stanford dermatologists when it came to making informed decisions, such as whether or not a patient should get a biopsy.
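As a rough sketch of the transfer learning recipe, here's what reusing a pretrained Inception V3 might look like in Keras. To be clear, this is not the dermatology system described in the report; the input size, the class count, and the commented-out dataset names are placeholders. The idea is simply to keep the general visual features the network already learned and train a small new head for the new task.

```python
# A minimal transfer learning sketch with Keras, assuming TensorFlow is installed.
# NOT the actual system from the report; the three output classes are placeholders.

import tensorflow as tf

# 1. Start from a network already trained on a huge general image dataset (ImageNet),
#    dropping its original classification layer.
base = tf.keras.applications.InceptionV3(weights="imagenet", include_top=False,
                                          input_shape=(299, 299, 3))
base.trainable = False  # keep the general-purpose visual features it already learned

# 2. Bolt a small new "head" on top for the new task.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3, activation="softmax"),  # placeholder categories for the new task
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# 3. Train only the new head on the much smaller task-specific dataset.
# model.fit(new_task_images, new_task_labels, epochs=5)  # placeholder dataset names
```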
Next, the report acknowledges the importance of graphics processing units, also known as GPUs. Unlike most CPUs, central processing units, a GPU is designed to process a lot of data in parallel, and that ends up being really useful when you're training computer models with enormous amounts of information. But it's not an approach that works for all types of computing, because not all computational problems can be divided up to be solved in parallel, and a GPU would handle a problem that can't be divided up in parallel much more slowly than a very powerful CPU could. This is also true for quantum computers, meaning that when we get reliable, powerful quantum computers, we'll likely see a real boom in training computers for machine learning. The report also argues that while GPUs are incredibly useful for training a model, the actual application of the model can rest on CPUs. So, if you prefer, you'd want to use a GPU when you're putting your computer model through school, but when it's time for your computer model to actually do its job, to pursue its career, you switch it over to a CPU, because the parallel part all comes in the training stage, not in the application stage.

The report also identifies a few big challenges in pushing AI further. First, there are, of course, the technological barriers. The report points out that processor clock frequencies are starting to plateau. Generally speaking, clock frequencies tell you how many operations a processor can complete in a second. That's really an oversimplification, but generally, the higher the number, the more stuff your processor can do within a set amount of time. Advances in AI will likely depend upon new microprocessor architectures to overcome this hurdle. We're reaching the limit of what we can do with the classical microprocessor design. So essentially we're getting closer to the end of Moore's law, due to fundamental physical limits that we cannot overcome if we just stick with the way we've been making microprocessors for the last several decades.
But that does not mean our computers are never going to get more powerful. It may only mean that we have to innovate new architectures, new designs, new approaches to processing, which in turn could necessitate a new version of Moore's law. We might be on the verge of another astronomical expansion of processing, but it would require brand new architectures that haven't necessarily been proven yet. The researchers identified Google's tensor processing unit, or TPU, as a possible successor. The TPU is a type of application-specific integrated circuit, an ASIC. This is a circuit that is made for a very specific application, as opposed to a general circuit like a CPU. A CPU is supposed to be able to handle lots of different data for lots of different applications, but a TPU is meant for a very specific application, for example artificial intelligence. Related to this is another barrier, which is financial. Harnessing really powerful processing technologies is expensive, so artificial intelligence R and D tends to be really costly, and progress in AI is limited in part by funding. In other words, it's not just technology, it's also where's the money coming from.

As for how AI has been coming along, the researchers pointed to Google's AlphaZero, which taught itself how to play the game Go at superhuman levels, and it did that just by playing itself. It played games of Go against itself repeatedly, with no human interaction. The system didn't have any historical data to pull from. It wasn't consulting historic games of Go and the strategies that people employed. It was developing strategies on its own. It only had the basic rules of the game programmed into it, and then it just began playing thousands and thousands of games against itself and learned strategies. It would learn tactics, it would abandon approaches. It had forty days of training, and it reached levels of mastery that could foil even the best human players.
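Here's a bare-bones sketch of the self-play idea, using the much simpler game of Nim (take one to three stones from a pile; whoever takes the last stone wins) in place of Go. AlphaZero's real method pairs a deep neural network with Monte Carlo tree search; this toy version just nudges a lookup table based on who won each game, with no human game records involved.

```python
# Toy self-play: the agent plays thousands of games of Nim against itself,
# reinforcing the winner's moves and penalizing the loser's. No historical
# human games are used; strategy emerges only from the rules and the outcomes.

import random
from collections import defaultdict

scores = defaultdict(float)   # (stones_left, stones_to_take) -> learned preference

def choose(pile, explore=0.1):
    moves = [m for m in (1, 2, 3) if m <= pile]
    if random.random() < explore:
        return random.choice(moves)      # occasionally try something new
    return max(moves, key=lambda m: scores[(pile, m)])

for _ in range(20000):                   # thousands of games against itself
    pile, history, player = 10, [], 0    # history of (player, state, move)
    while pile > 0:
        move = choose(pile)
        history.append((player, pile, move))
        pile -= move
        player = 1 - player
    winner = 1 - player                  # whoever took the last stone won
    for who, state, move in history:
        scores[(state, move)] += 0.1 if who == winner else -0.1

# After training the agent plays greedily. For a pile of 10, the classic winning
# move is to take 2, leaving a multiple of 4 for the opponent.
print(choose(10, explore=0.0))
```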
The researchers also mentioned OpenAI; that's the team that created AI agents that could play the game Dota 2. Dota 2 is a MOBA, a style of game in which two teams of players try to win a match that involves capturing certain spaces on a playing field and defeating the members of the other team. Like AlphaZero, the team used self-play as a training mechanism. Every player on a team was controlled by a different AI agent. Now, there was one computer that was generating all these AI agents, but each AI agent was acting as its own kind of individual. Each agent had its own neural network, and that meant these different AI agents were having to collaborate with each other, to work together to form strategies in order to achieve goals. So this was not just one computer controlling all the pieces simultaneously. It was almost like a separate computer system for every single player, and they would talk to each other and coordinate with each other, which is really cool and also a little terrifying if you think about it. Computers working together independently is kind of scary.

Anyway, another big development in AI is addressing bias in machine learning models. I mentioned this earlier. Here's the interesting thing: a computer can have a bias, and that's because computers are working off of algorithms that were ultimately designed by human beings, and human beings have bias. If I were to set out to create something, I'd be drawing on my own personal experiences and my own knowledge, but that is a tiny sliver of the spectrum of human experience. The same is true for people who design machine learning algorithms. As a result, those algorithms might overlook or misidentify data points that fall outside the experience of the architect who designed that machine learning tool, and depending upon the nature of the AI, that could be disastrous.
So, for example, back in two thousand nine, Hewlett-Packard had to deal with a scandal. They had these cameras that had image recognition software built in, and it would identify a person's face so that the camera would focus properly on the subject of your photo, the assumption being that if you had a person in the picture, you wanted the person to be in focus. However, the software failed to recognize dark-skinned people. So that's a problem where your computer tool is ignoring people, because the people who designed it had designed it to recognize folks who were like themselves. They weren't necessarily thinking about it outside of their own realm of experience, and that was obviously a PR nightmare. Now imagine you have that same issue, but let's go beyond something that is just a public relations problem to something even worse than that. Think about self driving cars. Self driving cars need to be able to recognize pedestrians who are crossing the street. But if you have a self driving car that doesn't recognize a dark-skinned person, then that could lead to tragic results. You could have a terrible collision, fatalities. These are non-trivial issues. Biases may seem merely problematic at first blush, but they can lead to really catastrophic outcomes, and so in recent months more work has been dedicated to creating systems that eliminate bias, and that's easier said than done.

The researchers gave an example of a biased system with Google Translate as well. Turkish is a language that does not have gendered pronouns like he and she. It just doesn't. In Turkish, the pronouns he, she, and it are all represented by a single pronoun, o. The researchers translated "she is a doctor" and "he is a nurse" from English into Turkish, and Google Translate dutifully changed the gendered pronouns in English to the Turkish genderless pronoun o. But then they went to reverse the process and turn the Turkish phrases back into English, and Google Translate assigned genders to the pronouns.
You know, they 577 00:34:56,560 --> 00:34:59,640 Speaker 1: were both genderless pronouns in Turkish, but in order to 578 00:34:59,680 --> 00:35:02,000 Speaker 1: make it make sense in English and not be it 579 00:35:02,320 --> 00:35:06,040 Speaker 1: is a doctor and it is a nurse, it assigned genders, 580 00:35:06,080 --> 00:35:10,520 Speaker 1: and it assigned he to the doctor phrase and she 581 00:35:11,000 --> 00:35:13,680 Speaker 1: to the nurse, even though the original English phrases were 582 00:35:14,160 --> 00:35:17,839 Speaker 1: she is a doctor, he is a nurse. They were translated from 583 00:35:17,880 --> 00:35:20,680 Speaker 1: Turkish back to English into he is a doctor, she 584 00:35:20,880 --> 00:35:23,920 Speaker 1: is a nurse. So the genders on the occupations swapped, 585 00:35:23,960 --> 00:35:28,160 Speaker 1: and that reveals a gender bias in the translation algorithm. 586 00:35:28,200 --> 00:35:30,840 Speaker 1: It just assumes that if you're talking about a doctor 587 00:35:31,160 --> 00:35:34,959 Speaker 1: and the gender is indeterminate in your original language, then 588 00:35:35,280 --> 00:35:38,239 Speaker 1: it must be a man, which is more than a 589 00:35:38,280 --> 00:35:42,520 Speaker 1: little problematic. So those are just simple examples, but it 590 00:35:42,560 --> 00:35:45,640 Speaker 1: goes much deeper than that. Well, I'll tell you more 591 00:35:46,120 --> 00:35:50,160 Speaker 1: about what the researchers found in their State of 592 00:35:50,200 --> 00:35:53,440 Speaker 1: AI report, as well as an update that has happened 593 00:35:53,480 --> 00:35:56,960 Speaker 1: since that report came out. But first let's take another 594 00:35:57,040 --> 00:36:08,560 Speaker 1: quick break to thank our sponsors. Another really important concept 595 00:36:08,640 --> 00:36:12,000 Speaker 1: in artificial intelligence that the researchers point out is transparency, 596 00:36:12,080 --> 00:36:14,719 Speaker 1: because it's not really enough to have a computer get 597 00:36:14,760 --> 00:36:17,640 Speaker 1: to the right answer. We need to know how it 598 00:36:17,680 --> 00:36:20,080 Speaker 1: got to that answer, and it may turn out that 599 00:36:20,080 --> 00:36:23,400 Speaker 1: the computer system is making a lot of incorrect assumptions 600 00:36:23,440 --> 00:36:27,359 Speaker 1: before it arrives at the right conclusion. Those faulty assumptions 601 00:36:27,400 --> 00:36:30,759 Speaker 1: should be addressed to avoid problems in the future, such 602 00:36:30,800 --> 00:36:33,960 Speaker 1: as future conclusions that are wrong because they depend too 603 00:36:34,000 --> 00:36:38,800 Speaker 1: heavily on faulty assumptions. So AI designers need to build 604 00:36:38,840 --> 00:36:41,799 Speaker 1: in systems that help us check the work of the 605 00:36:41,880 --> 00:36:45,480 Speaker 1: AI to make sure this is not happening. This is 606 00:36:45,520 --> 00:36:47,720 Speaker 1: something that needs to be built into an AI system 607 00:36:47,760 --> 00:36:50,040 Speaker 1: from the beginning, or else we get into what is 608 00:36:50,080 --> 00:36:54,040 Speaker 1: called a black box situation. So a black box is 609 00:36:54,080 --> 00:36:56,920 Speaker 1: where you have a system where all the processes that 610 00:36:57,000 --> 00:37:01,240 Speaker 1: happen inside the system are hidden away from the average person.
611 00:37:01,640 --> 00:37:04,920 Speaker 1: You don't know how the system got to its conclusion, 612 00:37:05,640 --> 00:37:07,880 Speaker 1: and so you don't know if you can trust the 613 00:37:07,920 --> 00:37:11,799 Speaker 1: conclusion or not. That's a problem. This always makes me 614 00:37:11,840 --> 00:37:15,560 Speaker 1: think of the computer Deep Thought, which was the super 615 00:37:15,600 --> 00:37:19,240 Speaker 1: intelligent computer in Hitchhiker's Guide to the Galaxy. They asked 616 00:37:19,280 --> 00:37:23,000 Speaker 1: the computer what is the meaning of life, the universe 617 00:37:23,040 --> 00:37:26,200 Speaker 1: and everything? And the computer says forty two. Well, you 618 00:37:26,239 --> 00:37:30,120 Speaker 1: don't understand how the computer got to its conclusion of 619 00:37:30,200 --> 00:37:35,360 Speaker 1: forty two, because the computer doesn't tell you how it 620 00:37:35,480 --> 00:37:38,960 Speaker 1: got to its answer. It just processes the information and 621 00:37:39,000 --> 00:37:43,160 Speaker 1: then produces the answer. We don't want that situation with AI. 622 00:37:43,560 --> 00:37:48,120 Speaker 1: There's also the danger of inserting changes in data to 623 00:37:48,200 --> 00:37:51,719 Speaker 1: cause AI systems to make big mistakes. By inserting what 624 00:37:51,760 --> 00:37:56,200 Speaker 1: the researchers referred to as adversarial patches, you can cause 625 00:37:56,280 --> 00:38:00,640 Speaker 1: a system to fail. So, in other words, you purposefully 626 00:38:00,760 --> 00:38:06,040 Speaker 1: introduce bad data to make the system start to have 627 00:38:06,440 --> 00:38:10,759 Speaker 1: problems throughout the processing of information. They gave examples with 628 00:38:10,840 --> 00:38:15,279 Speaker 1: image recognition software and showed that by inserting some extra data, 629 00:38:15,480 --> 00:38:18,360 Speaker 1: in one case a sticker that had a 630 00:38:18,400 --> 00:38:22,080 Speaker 1: particular design on it, you can override a computer's ability 631 00:38:22,160 --> 00:38:25,880 Speaker 1: to correctly identify an image and cause it to misidentify it. 632 00:38:26,000 --> 00:38:28,680 Speaker 1: In the example they used, they showed a sticker that, 633 00:38:28,719 --> 00:38:30,800 Speaker 1: if you put it down in the view of 634 00:38:30,800 --> 00:38:35,040 Speaker 1: the camera, then the AI would always identify it as 635 00:38:35,120 --> 00:38:38,000 Speaker 1: a toaster, no matter what it was, because the sticker 636 00:38:38,600 --> 00:38:41,480 Speaker 1: was enough to fool the computer into thinking what it 637 00:38:41,520 --> 00:38:43,440 Speaker 1: was looking at was a toaster, even if there was 638 00:38:43,480 --> 00:38:47,239 Speaker 1: a banana sitting right next to the sticker. So that's 639 00:38:47,280 --> 00:38:51,120 Speaker 1: a huge problem if you can fool computer vision into 640 00:38:51,160 --> 00:38:54,480 Speaker 1: thinking it's seeing one thing when it's really something else. Again, 641 00:38:54,520 --> 00:38:57,719 Speaker 1: if we look at the autonomous car example, and you're 642 00:38:57,760 --> 00:39:00,760 Speaker 1: able to think of a way to fool the vision 643 00:39:00,840 --> 00:39:05,360 Speaker 1: system of the car, assuming it's relying solely on optics, 644 00:39:05,960 --> 00:39:08,239 Speaker 1: then you've got a real problem on your hands.
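For a rough sense of how an attack like that is built, here is a compressed sketch of the general adversarial-patch idea in PyTorch. This is not the researchers' code: the pretrained model, the data loader, and the class index for "toaster" (859 in the usual ImageNet-1k ordering) are assumptions, and input normalization is omitted for brevity.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Frozen, pretrained classifier; only the patch gets optimized.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)

TARGET_CLASS = 859   # "toaster" in the standard ImageNet-1k label order (assumed)
PATCH_SIZE = 50

patch = torch.rand(3, PATCH_SIZE, PATCH_SIZE, requires_grad=True)
optimizer = torch.optim.Adam([patch], lr=0.05)

def apply_patch(images, patch):
    """Paste the patch at a random location (same spot for the whole batch)."""
    out = images.clone()
    _, _, h, w = images.shape
    y = torch.randint(0, h - PATCH_SIZE, (1,)).item()
    x = torch.randint(0, w - PATCH_SIZE, (1,)).item()
    out[:, :, y:y + PATCH_SIZE, x:x + PATCH_SIZE] = patch
    return out

def train_patch(loader, steps=1000):
    """'loader' is assumed to yield batches of natural images scaled to [0, 1]."""
    for _, (images, _) in zip(range(steps), loader):
        optimizer.zero_grad()
        logits = model(apply_patch(images, patch))
        # Push every patched image toward the target class.
        target = torch.full((images.size(0),), TARGET_CLASS, dtype=torch.long)
        loss = F.cross_entropy(logits, target)
        loss.backward()
        optimizer.step()
        patch.data.clamp_(0.0, 1.0)   # keep the patch a printable, valid image
    return patch.detach()
```

The key design point is that the model stays frozen and only the patch is optimized, so once training is done the patch can simply be printed as a sticker and placed in a scene, which is exactly why this class of attack is so unsettling.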
But 645 00:39:08,280 --> 00:39:11,799 Speaker 1: even if it's relying on multiple sensors, if you find 646 00:39:11,840 --> 00:39:15,120 Speaker 1: ways to fool those sensors or to misdirect them in 647 00:39:15,160 --> 00:39:18,160 Speaker 1: some way, you will cause the technology itself to behave 648 00:39:18,200 --> 00:39:21,080 Speaker 1: in a way that it shouldn't because it's acting on 649 00:39:21,160 --> 00:39:24,160 Speaker 1: the wrong kind of information. The report then goes on 650 00:39:24,239 --> 00:39:27,600 Speaker 1: to address the issue of the talent working in 651 00:39:27,600 --> 00:39:30,720 Speaker 1: the field of AI. They estimated that twenty two thousand 652 00:39:30,840 --> 00:39:35,480 Speaker 1: PhD-educated researchers and engineers are working on AI 653 00:39:35,560 --> 00:39:39,600 Speaker 1: around the globe in some capacity. About five thousand of 654 00:39:39,640 --> 00:39:44,080 Speaker 1: them are very high level researchers. The United States leads 655 00:39:44,080 --> 00:39:47,720 Speaker 1: the world in open positions for jobs relating to AI 656 00:39:47,760 --> 00:39:51,920 Speaker 1: research and development. Google is the leading employer of AI 657 00:39:52,040 --> 00:39:55,839 Speaker 1: talent in the US, but China has produced more 658 00:39:55,880 --> 00:40:00,960 Speaker 1: peer-reviewed publications relating to AI than any other country. Next, 659 00:40:01,000 --> 00:40:03,399 Speaker 1: the report looks at how AI has been rolled out 660 00:40:03,440 --> 00:40:07,960 Speaker 1: in various industries, noting that medical imaging and liquid biopsies 661 00:40:07,960 --> 00:40:11,680 Speaker 1: are two effective uses of AI applications to help diagnose patients. 662 00:40:12,280 --> 00:40:15,799 Speaker 1: Healthcare in general is a large area of opportunity for AI. 663 00:40:16,080 --> 00:40:19,120 Speaker 1: Another application of AI is a little less warm and 664 00:40:19,160 --> 00:40:22,120 Speaker 1: fuzzy than healthcare. That would be how governments are starting 665 00:40:22,120 --> 00:40:26,000 Speaker 1: to put AI to work in surveillance operations, such as 666 00:40:26,040 --> 00:40:30,560 Speaker 1: incorporating it in CCTV software to include facial recognition technology. 667 00:40:31,120 --> 00:40:34,800 Speaker 1: I also mentioned Project Maven in previous episodes of tech stuff. 668 00:40:34,960 --> 00:40:38,800 Speaker 1: Project Maven was another example they cited. The report covers 669 00:40:38,920 --> 00:40:44,239 Speaker 1: a ton of other industries from warehouse automation to autonomous vehicles, 670 00:40:44,239 --> 00:40:49,759 Speaker 1: to security, to agriculture, to finance, and essentially all industries 671 00:40:49,800 --> 00:40:53,120 Speaker 1: are seeing increased AI rollout, but at different rates. 672 00:40:53,480 --> 00:40:57,439 Speaker 1: So it's not like you're seeing AI suddenly flooding all 673 00:40:57,520 --> 00:41:02,200 Speaker 1: industries at exponential speed, but these tools are starting to make 674 00:41:02,239 --> 00:41:05,400 Speaker 1: more of an inroad into every single industry. It's just 675 00:41:05,440 --> 00:41:08,439 Speaker 1: that in some of them it's faster than others, but they tend 676 00:41:08,480 --> 00:41:12,520 Speaker 1: to improve efficiencies and they tend to reduce costs.
But 677 00:41:12,560 --> 00:41:15,279 Speaker 1: it's also hastening an era of automation that will make 678 00:41:15,320 --> 00:41:17,239 Speaker 1: it imperative to figure out what the heck to do 679 00:41:17,400 --> 00:41:20,200 Speaker 1: when it comes to employment, which the report actually does 680 00:41:20,239 --> 00:41:23,480 Speaker 1: address a little bit later. They also mentioned briefly the 681 00:41:23,560 --> 00:41:26,520 Speaker 1: recent focus on privacy and security in the wake of 682 00:41:26,800 --> 00:41:30,480 Speaker 1: things like the Cambridge Analytica scandal over at Facebook, as 683 00:41:30,520 --> 00:41:32,919 Speaker 1: well as the adoption of GDPR. 684 00:41:33,239 --> 00:41:35,799 Speaker 1: Those are, by the way, unrelated to one another, but 685 00:41:35,920 --> 00:41:38,000 Speaker 1: I've also talked about both of them in recent episodes 686 00:41:38,040 --> 00:41:40,399 Speaker 1: of Tech Stuff. I can't help but think that as 687 00:41:40,520 --> 00:41:45,360 Speaker 1: AI becomes more sophisticated, protecting privacy will become a larger challenge. 688 00:41:45,440 --> 00:41:48,200 Speaker 1: AI will be able to work through large data samples 689 00:41:48,440 --> 00:41:52,120 Speaker 1: and potentially identify individuals within them with very little trouble. 690 00:41:52,480 --> 00:41:55,319 Speaker 1: So in light of something like GDPR, this would 691 00:41:55,320 --> 00:41:58,799 Speaker 1: make a lot more types of data sensitive. We would 692 00:41:58,840 --> 00:42:02,799 Speaker 1: have to identify those as saying, you need to have 693 00:42:02,960 --> 00:42:05,120 Speaker 1: this classified under GDPR. This is not 694 00:42:05,160 --> 00:42:09,200 Speaker 1: truly anonymous data, because remember, a Harvard professor only needed three 695 00:42:09,239 --> 00:42:12,360 Speaker 1: points of data to identify the vast majority of adults in the 696 00:42:12,400 --> 00:42:15,640 Speaker 1: United States. That was the person's gender, their birth date, 697 00:42:15,680 --> 00:42:17,879 Speaker 1: and their ZIP code. That's all she needed, and then 698 00:42:17,920 --> 00:42:22,360 Speaker 1: she could identify most of the adult US population based on 699 00:42:22,360 --> 00:42:26,200 Speaker 1: those three data points. So when you think about sophisticated 700 00:42:26,239 --> 00:42:29,960 Speaker 1: computer algorithms, and they're intelligent, and they are able to 701 00:42:29,960 --> 00:42:32,680 Speaker 1: work with large data sets very effectively and very quickly, 702 00:42:33,400 --> 00:42:36,040 Speaker 1: you start to see the potential for fewer and fewer 703 00:42:36,160 --> 00:42:40,480 Speaker 1: data points to point to a specific individual. And then 704 00:42:40,719 --> 00:42:46,720 Speaker 1: the concept of anonymized data starts to get really, really fuzzy. 705 00:42:47,239 --> 00:42:49,760 Speaker 1: It's hard to say if a piece of data truly 706 00:42:49,840 --> 00:42:53,680 Speaker 1: is anonymous unless it's just swallowed up by huge amounts 707 00:42:53,719 --> 00:42:56,560 Speaker 1: of other information and you've washed it completely of 708 00:42:56,600 --> 00:43:00,080 Speaker 1: its individual status. Otherwise there may be a chance of 709 00:43:00,120 --> 00:43:02,880 Speaker 1: tracing it back to a specific person, and then you 710 00:43:02,920 --> 00:43:05,560 Speaker 1: have the issues of GDPR.
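As a small illustration of why those three fields are so identifying, here is a sketch in pandas of how you might measure re-identification risk in a table you believed was anonymous. The dataframe and column names are made up for the example; the point is just the counting logic:

```python
import pandas as pd

def reidentification_risk(df, quasi_identifiers):
    """Fraction of rows whose combination of quasi-identifier values is
    unique in the table, i.e. records that could be singled out."""
    sizes = df.groupby(quasi_identifiers).size()   # people per combination
    return (sizes == 1).sum() / len(df)

# Made-up example table: the "anonymous" release keeps no names,
# but it does keep gender, birth date, and ZIP code.
people = pd.DataFrame({
    "gender":     ["F", "M", "F", "M"],
    "birth_date": ["1980-01-02", "1975-06-30", "1980-01-02", "1991-11-11"],
    "zip_code":   ["30301", "30301", "30318", "94105"],
    "diagnosis":  ["A", "B", "C", "D"],   # the sensitive field
})

print(reidentification_risk(people, ["gender", "birth_date", "zip_code"]))
# 1.0 here: every row is unique on those three fields, so every record could
# in principle be linked back to a named person via some other public dataset.
```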
After the 711 00:43:05,600 --> 00:43:09,760 Speaker 1: industry section in the report comes a politics section, 712 00:43:09,800 --> 00:43:12,360 Speaker 1: and in that one they look at some survey results that 713 00:43:12,400 --> 00:43:16,360 Speaker 1: address issues relating to AI, including the employment question I 714 00:43:16,360 --> 00:43:20,680 Speaker 1: mentioned earlier. According to the surveys that the researchers were consulting, 715 00:43:21,160 --> 00:43:25,520 Speaker 1: seventy six percent of respondents felt that the inequality between 716 00:43:25,560 --> 00:43:28,839 Speaker 1: the rich and the poor will become much worse than 717 00:43:28,880 --> 00:43:32,719 Speaker 1: it is today as a result of AI and automation. Essentially, 718 00:43:32,719 --> 00:43:37,319 Speaker 1: the thinking is that those who own the systems that AI is rolled 719 00:43:37,360 --> 00:43:40,520 Speaker 1: out in, and those who own the businesses, are going 720 00:43:40,600 --> 00:43:44,760 Speaker 1: to profit. And then those who are otherwise affected 721 00:43:44,760 --> 00:43:46,680 Speaker 1: are going to find themselves out of work, and you 722 00:43:46,680 --> 00:43:50,080 Speaker 1: will get this increasing gap between the haves and have-nots. 723 00:43:50,880 --> 00:43:55,320 Speaker 1: Respondents found it unlikely that the economy will create new, 724 00:43:55,480 --> 00:43:59,319 Speaker 1: better paying jobs as a result of AI and automation, 725 00:44:00,160 --> 00:44:02,879 Speaker 1: so saying that people are pessimistic is kind of 726 00:44:03,880 --> 00:44:07,800 Speaker 1: an understatement. Also, based on the results cited in the report, 727 00:44:07,880 --> 00:44:10,640 Speaker 1: it seems like most people don't think a universal basic 728 00:44:10,680 --> 00:44:15,280 Speaker 1: income is likely to happen, not that it wouldn't work, 729 00:44:15,800 --> 00:44:19,200 Speaker 1: but that it's not likely to get adopted. The results 730 00:44:19,239 --> 00:44:22,480 Speaker 1: also seemed to indicate many people are concerned about AI's 731 00:44:22,520 --> 00:44:26,480 Speaker 1: potential dangers, ranging from a loss of privacy to more 732 00:44:26,640 --> 00:44:30,279 Speaker 1: existential threats, and that more people favor some form of 733 00:44:30,320 --> 00:44:35,319 Speaker 1: regulation than a fully deregulated approach, with more respondents in favor of 734 00:44:35,360 --> 00:44:43,600 Speaker 1: regulation than opposed, and a sizable share not really sure. So, almost 735 00:44:43,680 --> 00:44:46,640 Speaker 1: the same number of people think that AI should have 736 00:44:46,719 --> 00:44:50,319 Speaker 1: some form of regulation attached to it as say they don't 737 00:44:50,400 --> 00:44:54,359 Speaker 1: know one way or the other. That might be due 738 00:44:54,360 --> 00:44:58,800 Speaker 1: to survey wording. We have to remember survey results aren't 739 00:44:58,840 --> 00:45:02,480 Speaker 1: always indicative of how people really feel, because a result 740 00:45:02,560 --> 00:45:06,000 Speaker 1: often also relies upon the wording used in the survey 741 00:45:06,000 --> 00:45:09,040 Speaker 1: and how it was administered. The report found that in 742 00:45:09,080 --> 00:45:12,439 Speaker 1: the United States, unemployment is at a seventeen year low. 743 00:45:13,040 --> 00:45:17,120 Speaker 1: Jobs are on the rise, but wages are lagging behind 744 00:45:17,520 --> 00:45:20,960 Speaker 1: job creation.
In fact, it found that in the United States, 745 00:45:21,320 --> 00:45:25,760 Speaker 1: labor productivity has increased much more dramatically than compensation rates 746 00:45:25,760 --> 00:45:29,080 Speaker 1: have increased. The researchers also found that many of the 747 00:45:29,080 --> 00:45:32,200 Speaker 1: new jobs that have been created are low paying ones, 748 00:45:32,480 --> 00:45:36,200 Speaker 1: so that's problematic. The researchers are quick to point out 749 00:45:36,440 --> 00:45:41,239 Speaker 1: that you cannot necessarily correlate any of the labor statistics 750 00:45:41,320 --> 00:45:45,480 Speaker 1: directly with the adoption of AI and automation, because there 751 00:45:45,480 --> 00:45:48,959 Speaker 1: are so many other factors that are also present. There's 752 00:45:49,000 --> 00:45:51,839 Speaker 1: just not enough information or evidence to support any firm 753 00:45:51,920 --> 00:45:55,480 Speaker 1: conclusions about the impact of AI and automation on jobs yet. 754 00:45:55,880 --> 00:45:58,000 Speaker 1: And not only that, the report points out that there 755 00:45:58,040 --> 00:46:01,839 Speaker 1: are quote unquote only two million industrial robots in the 756 00:46:01,840 --> 00:46:05,440 Speaker 1: world right now, and that the US has fewer robots 757 00:46:05,480 --> 00:46:09,480 Speaker 1: in factories compared to countries like Japan, Germany, and Korea. 758 00:46:09,960 --> 00:46:14,239 Speaker 1: The researchers conclude the report with a few predictions 759 00:46:14,239 --> 00:46:17,640 Speaker 1: of their own. Uh, they say that a Chinese research 760 00:46:17,719 --> 00:46:21,239 Speaker 1: lab will produce a significant research breakthrough sometime within the 761 00:46:21,280 --> 00:46:25,000 Speaker 1: next twelve months. A machine learning algorithm will be able 762 00:46:25,040 --> 00:46:28,080 Speaker 1: to design a therapeutic drug that will produce positive results 763 00:46:28,120 --> 00:46:31,840 Speaker 1: in clinical trials within those twelve months. And the US 764 00:46:31,880 --> 00:46:35,120 Speaker 1: and China will scramble to sweep up tech companies in 765 00:46:35,160 --> 00:46:37,920 Speaker 1: Europe and Asia as part of a trade war and an 766 00:46:38,040 --> 00:46:40,880 Speaker 1: AI race, kind of like the space race was in 767 00:46:40,920 --> 00:46:44,920 Speaker 1: the sixties and seventies. Uh, and here's an interesting postscript 768 00:46:45,040 --> 00:46:48,680 Speaker 1: that was not in that initial report. The day I 769 00:46:48,800 --> 00:46:51,920 Speaker 1: finalized these notes for this episode, I received a report 770 00:46:52,000 --> 00:46:55,920 Speaker 1: from Riot Research about an AI bubble and how it 771 00:46:56,000 --> 00:46:58,600 Speaker 1: is due to burst. So this is just on the 772 00:46:58,640 --> 00:47:01,200 Speaker 1: finance side of things, not on the technological side of things. 773 00:47:01,680 --> 00:47:05,839 Speaker 1: It suggested that the return on investments for AI will 774 00:47:05,920 --> 00:47:10,480 Speaker 1: yield quote rather poor results, with this being akin to 775 00:47:10,560 --> 00:47:14,440 Speaker 1: a bubble bursting end quote.
It suggests that many smaller 776 00:47:14,480 --> 00:47:18,239 Speaker 1: companies working in AI could end up folding, similar to 777 00:47:18,440 --> 00:47:21,680 Speaker 1: when the VR bubble burst in the nineties, but 778 00:47:21,800 --> 00:47:24,640 Speaker 1: larger companies like Google will weather the storm and they 779 00:47:24,640 --> 00:47:28,080 Speaker 1: will continue to do R and D work in AI. Essentially, 780 00:47:28,320 --> 00:47:31,319 Speaker 1: the report serves as a warning to investors that they 781 00:47:31,320 --> 00:47:34,680 Speaker 1: should consider carefully where they put their money with regard 782 00:47:34,800 --> 00:47:39,359 Speaker 1: to AI applications, as the amount that they invest is 783 00:47:39,880 --> 00:47:42,880 Speaker 1: going to be greater than the potential yield from those investments. 784 00:47:42,920 --> 00:47:46,880 Speaker 1: They're essentially saying more money is going into artificial intelligence 785 00:47:47,160 --> 00:47:50,319 Speaker 1: than is going to be produced from the results of 786 00:47:50,360 --> 00:47:54,400 Speaker 1: that AI work. At least right now, that may change. 787 00:47:54,719 --> 00:47:57,400 Speaker 1: But here's the weird thing, well, not really weird, 788 00:47:57,440 --> 00:48:00,799 Speaker 1: but here's the kind of self-fulfilling prophecy: 789 00:48:01,239 --> 00:48:05,120 Speaker 1: if investors start pulling their money in order to protect 790 00:48:05,160 --> 00:48:08,160 Speaker 1: their investments, because they don't want to invest in 791 00:48:08,200 --> 00:48:12,400 Speaker 1: an industry that isn't going to create a return 792 00:48:12,400 --> 00:48:15,319 Speaker 1: on that investment, then as a result, we could see 793 00:48:15,360 --> 00:48:18,600 Speaker 1: developments slow down in AI, and then we start to 794 00:48:18,600 --> 00:48:21,840 Speaker 1: see it plateau, and so it becomes this kind of 795 00:48:21,960 --> 00:48:25,239 Speaker 1: weird self-fulfilling prophecy where in the short term you 796 00:48:25,280 --> 00:48:28,719 Speaker 1: may not expect a really good return on investment. That's 797 00:48:28,719 --> 00:48:33,000 Speaker 1: a big risk. But if you end up heeding this 798 00:48:33,080 --> 00:48:36,040 Speaker 1: warning and you pull your money from investing in such things, 799 00:48:36,960 --> 00:48:39,280 Speaker 1: then it may never get the chance to prove itself 800 00:48:39,360 --> 00:48:42,240 Speaker 1: in the long run, and it may just put off 801 00:48:43,120 --> 00:48:46,560 Speaker 1: true advancements in AI much further than they would otherwise happen. 802 00:48:46,920 --> 00:48:50,480 Speaker 1: It's a double-edged sword kind of situation. Anyway, that 803 00:48:50,600 --> 00:48:53,920 Speaker 1: is the state of artificial intelligence as of the summer 804 00:48:54,000 --> 00:48:57,560 Speaker 1: of twenty eighteen. Who's to say what will happen in the next 805 00:48:57,600 --> 00:49:00,719 Speaker 1: twelve months.
It will be interesting to see where we 806 00:49:00,800 --> 00:49:04,360 Speaker 1: are in twenty nineteen, what role AI is playing in 807 00:49:04,520 --> 00:49:07,960 Speaker 1: various industries and in our lives, and whether or not, 808 00:49:08,640 --> 00:49:13,040 Speaker 1: uh, it has truly advanced in a noticeable way, or 809 00:49:13,080 --> 00:49:16,920 Speaker 1: if it's just a situation where we get incremental improvements 810 00:49:17,080 --> 00:49:22,080 Speaker 1: and, uh, an increased rollout, in which case you might say, well, 811 00:49:22,120 --> 00:49:24,920 Speaker 1: things have gotten better, but not to a point where, 812 00:49:25,080 --> 00:49:27,359 Speaker 1: you know, your socks are gonna get blown off. 813 00:49:27,360 --> 00:49:29,960 Speaker 1: Who's to say. We'll find out a year from now, 814 00:49:30,000 --> 00:49:33,040 Speaker 1: I suppose. In the meantime, if you have any suggestions 815 00:49:33,040 --> 00:49:35,400 Speaker 1: for future episodes of tech Stuff, you should write me 816 00:49:35,440 --> 00:49:37,520 Speaker 1: and let me know. The email address for the show 817 00:49:38,000 --> 00:49:40,680 Speaker 1: is tech Stuff at how stuff works dot com, or 818 00:49:40,760 --> 00:49:42,800 Speaker 1: drop me a line on Facebook or Twitter. The handle 819 00:49:42,800 --> 00:49:46,120 Speaker 1: there is tech Stuff HSW. You can also follow 820 00:49:46,239 --> 00:49:49,239 Speaker 1: us on Instagram, and don't forget, our next episode is 821 00:49:49,360 --> 00:49:55,080 Speaker 1: episode one thousand. I'm letting that sink in. I've done 822 00:49:55,080 --> 00:50:01,160 Speaker 1: a thousand of these. I'm so tired, but I'll see 823 00:50:01,160 --> 00:50:03,640 Speaker 1: you guys on the next episode, and I can't wait 824 00:50:03,680 --> 00:50:06,439 Speaker 1: to talk to you for the next thousand. So I'll 825 00:50:06,440 --> 00:50:15,319 Speaker 1: talk to you again really soon. For more on this 826 00:50:15,480 --> 00:50:18,000 Speaker 1: and thousands of other topics, visit how stuff works 827 00:50:18,000 --> 00:50:28,200 Speaker 1: dot com