Speaker 1: Technology with TechStuff from HowStuffWorks.com. Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm a senior writer for HowStuffWorks.com, where, you know what we do? We explain the universe. And this topic today is all about a little exchange that happened online over the course of several days. Actually, it started with Elon Musk. He was addressing a governmental body and talking about his view that artificial intelligence needs to have strict regulations attached to it in order to prevent some sort of catastrophic future, possibly Skynet-related, where the robots and other artificially intelligent constructs rise up against their human masters and crush us under their metaphorical, or perhaps literal, boots. And then you had Mark Zuckerberg of Facebook, on a live broadcast on Facebook Live from his backyard during a barbecue. He was asked a question about this sort of thing, and he specifically said that he thought this was a very pessimistic view of the future of artificial intelligence, and that within five or ten years, artificial intelligence would be transforming our lives in ways that we can't even imagine, and it would all be awesome and fantastic and magical and we should love that. And then Musk struck back on Twitter and said that he had talked to Zuckerberg about all this before, but that frankly, Zuckerberg is just out of his depth with artificial intelligence: it's not something that he's an expert at, and he's really speaking from inexperience. I find this exchange amusing, as do a lot of the journalists in the area of technology. I mean, we've got a lot of people who are commenting on this. But personally, I also find it a little confounding, because Elon Musk has said some stuff that contradicts his own companies' policies, if you look at it carefully.
Speaker 1: He has specifically resisted the concept of regulations for self-driving cars, but you could argue, very realistically and convincingly I would say, that self-driving cars are an implementation of artificial intelligence. So we're gonna dive into this. We're gonna look at the different opinions about artificial intelligence, kind of explore the concept of artificial intelligence in general, see where it came from, what it really means, and who's right. Or, as I put it in my notes, Musk's position is "AI without regulation is going to totally kill us, dude," and Zuckerberg's position is "AI is going to improve our lives in countless ways, bra." So who's right? Or is neither of them right?

Well, to start off with, let's talk about the birth of the term artificial intelligence. It was coined by John McCarthy, who passed away in two thousand eleven at the age of eighty-four. He worked at Stanford as a professor emeritus of computer science, and he also co-founded the Artificial Intelligence Project at MIT as well as the Stanford Artificial Intelligence Lab. So this is somebody who certainly had a long and storied past in the development of artificial intelligence. He first used the term artificial intelligence in a proposal for a summer research conference at Dartmouth in nineteen fifty-five. It was the first time the term ever appeared in a printed publication.

So what is artificial intelligence? I mean, it's such a huge term, and it has been used by so many people that it's lost a lot of its meaning. Also, I should point out that John McCarthy, whom I mentioned earlier, was one of the creators of Lisp, the programming language that was used in artificial intelligence. So if the name sounds familiar, it means that you listened to the History of Programming Languages episodes, or that you're just familiar with John McCarthy's work.
Speaker 1: But yeah, artificial intelligence is one of those terms that, since its introduction, has been used to describe so many different things, and used in such a vague way so many times, that for many people it seems like a meaningless term. It's almost like a general catch-all for the scary possibilities of technology getting away from us. It reminds me of Humpty Dumpty in Through the Looking-Glass. He says that words mean exactly what he wants them to mean, neither more nor less. He says, you know, who is to be the master? That is the only thing that's important. I'm more in charge than the words are. So if I use a word to mean something, that's what it means. That's what I feel artificial intelligence has become for a lot of people. And I also think that's what leads to a lot of disagreements: some people have one idea of what artificial intelligence is, and other people have a totally different idea of what artificial intelligence is. But because they're both using the phrase artificial intelligence, it seems like they're talking about the same thing, and that's why they're hitting some massive disagreements, or at least it's one of the reasons why they are disagreeing. If you dig down further, they're frequently talking about two different things.

Now, John McCarthy's use of artificial intelligence was given this context in his proposal, quote: "The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." End quote. Here's the thing about that definition: it already is vague, because it's talking about every aspect of learning or any other feature of intelligence. We have not fully defined what intelligence is within the human experience. There are aspects of intelligence that are very vague and fuzzy, and we only have a partial understanding of what it actually is.
Speaker 1: An example I might give is consciousness. Defining consciousness is a particularly troublesome and difficult thing to do in human beings, let alone in machines. We know that consciousness is a manifestation of the brain, the idea of the mind being a manifestation of the brain. All of this is dependent upon actual physical matter, the gray matter in our heads. And we know this because the various diseases, disorders, and injuries that affect consciousness are the ones that affect the brain. If you suffer an injury to the brain in one of the areas that define consciousness, your sense of consciousness is likewise affected. That tells us that there is this physical connection, that there's not necessarily some metaphysical mind that is a layer on top of our physical brains. But beyond that, defining consciousness is really tricky, and there are plenty of psychologists, neurologists, and philosophers who have debated the nature of consciousness for ages, and we're no closer to really defining it than we were back then. Sometimes we narrow it down by saying what isn't consciousness. So we're whittling it away, kind of the way a sculptor whittles away everything that isn't the statue when they start with a block of marble. You know, you want to carve an elephant? Here's your block of marble: cut away all the stuff that doesn't look like an elephant, and what you're left with is the elephant. That might be what consciousness is for us. We remove all of the concepts that are not consciousness, and whatever is left over becomes our definition. It's not exactly satisfying at the moment.

Well, McCarthy expounded on artificial intelligence in nineteen sixty in a paper titled Programs with Common Sense, which gives you another perspective on what artificial intelligence could be. And this leads us to ask other questions, like: what are the implementations of artificial intelligence? What types are there?
Speaker 1: And again, the number of types of AI depends upon whom you ask and how they frame the answer. There are simple answers, where some people will say, oh, there are two types of AI: there's strong AI and there's weak AI. Or some people will say general AI and narrow AI. Others will say, no, no, no, there are like thirty-three types of artificial intelligence. Or they might say there are three or four types of artificial intelligence. Those first two pairs are the easiest to explain in broad categorizations. Narrow or weak AI is artificial intelligence we would create that's dedicated to a narrow task or series of tasks. Strong AI is generally understood to mean a machine that has sentience, consciousness, and mind, those qualities that we associate with human intelligence. But again, we cannot even fully describe those concepts within the context of humans, so trying to figure out how to imbue machines with those elements is even more complicated.

Now, there's also the concept of general intelligence. This is not reliant upon consciousness or mind or sentience. With general intelligence, the idea is that a machine would be able to apply intelligence to any problem, rather than just a specific or narrow band of problems. So, in other words, a generally intelligent machine could be used to solve problems of various degrees in various contexts. You might have a general-intelligence robot, and that robot can do things like figure out how to manipulate physical objects, how to maneuver around within an environment, and a few other things as well. It's having a more general approach to problem solving, as opposed to something that was made specifically to handle a particular task. True general intelligence would be capable of applying an intelligent approach to any kind of problem, not just a related family of problems.
Speaker 1: In an article for Government Technology, a writer named Arend Hintze of Michigan State University laid out four broad categories of artificially intelligent machines, and these go well into pretty sophisticated artificial intelligence from the very get-go. I argue that artificial intelligence is composed of lots of different facets, and you can find elements of artificial intelligence in many different programs that exist today, none of which are approaching this general intelligence model, and certainly not approaching strong AI. But here's how Hintze breaks it down.

He says Type I is a reactive kind of intelligence. These are machines that take action in response to some state; they don't form memories, and they don't use past experience to inform current decisions. And he argues that IBM's Deep Blue was that kind of machine. Deep Blue was the computer that played Garry Kasparov in a series of chess matches in the nineties. Kasparov won the first match; Deep Blue won the second, and then IBM quickly ended up retiring Deep Blue from that point forward. But Deep Blue would just look at the state of the chessboard at any given moment and then make a decision based upon that state. When it was Deep Blue's turn, it didn't build up a series of decisions, didn't track what was happening turn over turn, so it didn't evolve in any way. It had no internal representation of the world. It would just look at what was happening right now and make a decision.

Now, there was an AI researcher named Rodney Brooks who said these Type I machines are the only ones we should ever try to make, because to make a machine more intelligent, one that contains a virtual representation of the world, would be impossible: we as humans would be incapable of building a virtual representation that is accurate enough for such a machine to make good decisions.
Speaker 1: It would have a faulty representation of the world, and therefore any decision it made would not be ideal, and it would potentially do more harm than good. These sorts of machines are always going to make the same decision given a certain set of criteria. So let's go back to Deep Blue. Let's say Deep Blue is looking at the chessboard. It's Deep Blue's turn; it's been maybe, you know, a half dozen turns into the chess game, and it makes a decision based upon all the positions of the pieces that are in play at that time. This decision is based on the probabilities of the moves the opponent might make, and the strength of any given move against the current conditions. There are a lot of factors that go into that one decision. But the argument goes that Type I machines will always come to the same conclusion given that same set of criteria. So, in other words, if in game one Deep Blue is given this arrangement of pieces, it will make a specific decision by weighing all those options and going with the best one. In game two, if that exact same configuration of pieces were presented to Deep Blue on Deep Blue's turn, it would make the same decision. It's not gonna improvise, it's not going to change, it's not going to learn from its past experiences. It is just going to make a decision based upon the parameters that are in front of it at that very moment.
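To make that concrete, here is a minimal sketch in Python of what a Type I, purely reactive agent looks like in code. The state encoding, the scoring function, and the move list are hypothetical stand-ins, not anything from Deep Blue itself; the point is just that the policy is a pure function of the current state, so the same state always yields the same move.

```python
# A minimal sketch of a Type I (reactive) agent: no memory, no learning.
# The "state" and "evaluate" details are hypothetical stand-ins.

def legal_moves(state):
    # Hypothetical: enumerate the moves available in this state.
    return state["moves"]

def evaluate(state, move):
    # Hypothetical scoring: how strong this move looks given only
    # the current state. No history is consulted anywhere.
    return move["score"]

def choose_move(state):
    # A pure function of the current state: given the same state,
    # it always returns the same move. Nothing is remembered.
    return max(legal_moves(state), key=lambda m: evaluate(state, m))

# The same position produces the same decision in game one and game two.
position = {"moves": [{"name": "Nf3", "score": 0.61},
                      {"name": "e4", "score": 0.58}]}
assert choose_move(position) == choose_move(position)  # deterministic
```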
Speaker 1: Now, this is not necessarily a bad thing. You may want certain artificially intelligent machines to be very predictable in their responses. If you have a smart thermostat, and it's been trained to learn what you prefer over time, and it's learned that you like it cool in the mornings, maybe a nice seventy degrees Fahrenheit, it's obviously not going to improvise one day and say, you know what, I'm just gonna try something here, I bet he's really gonna like it: if I set it to eighty-eight degrees Fahrenheit on a humid Georgia July morning, let's see what happens next. What happens next is Jonathan just sweats like crazy. So there are implementations where you would want a Type I AI machine and nothing more advanced than that.

Then Type II intelligent machines would be ones that have at least some version of a memory. They can track changing variables over time and analyze past behavior before making a decision on a course of action. Now, this memory doesn't go into a lexicon of memories. It's like short-term memory that never gets converted into long-term memory. It's stored temporarily, kind of like random-access memory in computers, and after a short while it can be overwritten. These machines would be able to do a little bit more than what Deep Blue could do. It wouldn't just be making a decision based upon the current state of the game; it would also remember how the last few moves went and what brought it there, and how the other player had been playing, and it might be able to take that information and incorporate it into its decisions for the following moves, which means it could potentially adapt its play style. So this is a slightly more sophisticated version of the intelligent machine.
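Here is how that difference might look in code, extending the reactive sketch above. This is my own toy illustration, not Hintze's: the agent keeps a short, fixed-size history, like the overwritable short-term memory just described, and lets recent opponent behavior bias its choice.

```python
from collections import deque

# A toy Type II agent: a reactive core plus a short, overwritable
# memory. The scoring details are hypothetical stand-ins.

class LimitedMemoryAgent:
    def __init__(self, memory_size=4):
        # A fixed-size buffer: once full, the oldest entry is silently
        # dropped, like short-term memory being overwritten.
        self.recent_opponent_moves = deque(maxlen=memory_size)

    def observe(self, opponent_move):
        self.recent_opponent_moves.append(opponent_move)

    def choose_move(self, moves):
        # Bias the base score using the last few opponent moves:
        # here, prefer defensive moves after repeated aggression.
        aggression = sum(m == "aggressive" for m in self.recent_opponent_moves)
        def score(move):
            bonus = 0.1 * aggression if move["style"] == "defensive" else 0.0
            return move["score"] + bonus
        return max(moves, key=score)

agent = LimitedMemoryAgent()
for _ in range(3):
    agent.observe("aggressive")
moves = [{"name": "attack", "style": "attacking", "score": 0.60},
         {"name": "castle", "style": "defensive", "score": 0.45}]
print(agent.choose_move(moves)["name"])  # recent history flips the choice
```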
Speaker 1: Now, we've got two more types of intelligent machines to cover. But before I jump into Type III and Type IV and then go further into this discussion about artificial intelligence, let's take a quick break to thank our sponsor.

Now, a Type III intelligent machine incorporates what Hintze calls a theory of mind. These machines would have an internal concept of the world, as well as of the beings that inhabit that world, and an understanding that those beings also possess intelligence that guides their behaviors. In a way, you could think of this as awareness of others. So these machines would know that people aren't just bags of meat that do things; that we have intelligence, and that that in fact guides our behavior. And understanding that then affects the decisions that the AI makes. But this is still not quite at that level of strong AI that I was mentioning earlier. To get there, you have to hit Type IV machine intelligence, and that is when you hit self-awareness, where the machine is not just aware that other beings possess the quality of intelligence, but is aware of its own self, its own state, and its own being in relation to everything else. It is this sort of machine that could, in theory, start to design improvements to itself, so it could be recursive in that it is able to make improvements. And then you get into the situation that some futurists think of as the singularity, where you have self-improving, artificially intelligent machines that are able to evolve at such a remarkable rate, where every generation of improvements is a huge leap from the last one, and it's taking less and less time between generations for things to change, that it becomes impossible to describe the present set of circumstances, because the present is changing so quickly that it becomes a meaningless concept. This is the singularity. That is one potential outcome, if it were in fact possible, and we don't know yet whether it is. Some people treat it as a foregone conclusion that we will eventually have machines that will be able to attain self-awareness and potentially self-improvement. And once you get to that stage, how do you avoid the singularity?
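Just to show the shape of that argument with a toy calculation (my own numbers, purely hypothetical): suppose each generation of a self-improving system doubles its capability and takes eighty percent as long to produce as the one before. The time between leaps shrinks geometrically while the leaps themselves grow.

```python
# Toy illustration of the "accelerating generations" argument.
# Starting values and ratios are arbitrary, purely hypothetical.

capability = 1.0   # abstract capability units
gen_time = 12.0    # months to produce the next generation
elapsed = 0.0

for generation in range(1, 9):
    elapsed += gen_time
    capability *= 2.0  # each generation is a bigger absolute leap...
    gen_time *= 0.8    # ...and arrives sooner than the last one did
    print(f"gen {generation}: month {elapsed:5.1f}, capability x{capability:.0f}")

# The intervals shrink toward zero while the jumps grow without bound;
# that runaway shape is what singularity arguments point at.
```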
Speaker 1: Some people would argue that it's impossible to avoid. But there are a lot of people who say we have no reason to believe that this is something that is going to happen, or that it's even possible from a technological perspective; or that it may be possible from a technological perspective, but we're talking decades, if not a century or more, out in front of us, based upon our limited understanding of intelligence and our limited amount of processing power when you compare it to something like the human mind. Keep in mind that the human brain has billions of neurons in it, and we have artificial neural networks, but they are dwarfed by the connections that you find in the human brain. So there's a lot of heated debate in the artificial intelligence world about whether or not this is something we should even concern ourselves with.

But some people say that self-awareness could arise from a system that has a given amount of complexity, without us having any deeper understanding of what consciousness is. In other words, if you were to make a machine that was complex enough, consciousness could be an emergent behavior, something that naturally occurs once you reach a system of significant complexity. That's a little difficult to wrap your head around, but keep in mind, we humans have been harnessing and creating stuff without having a full understanding of it for ages. We were using electricity well before we understood the actual physics of electricity. It's a little different, I mean, you can't compare electricity to consciousness directly, it doesn't make any sense; but it's just to say there is precedent for human beings creating something they do not fully understand. Whether or not that's actually possible, however, we don't know. There's no way for us to answer. There is no construct, no machine, complex enough for us to run an experiment and see if consciousness arises. And if it does arise, how do we recognize it?
Speaker 1: How do we know that a machine actually possesses that feature? How do we know a machine truly has become self-aware and conscious? We'll talk about that a little bit more later on, because, of course, a lot of people have come up with ideas on how we would judge whether or not a machine had achieved consciousness, and some of them are more serious than others. There are also some serious objections.

Well, those are the four types that were laid out by this one person from Michigan State University. But then John Spacey, over on Simplicable, has an article where he talks about thirty-three types of artificial intelligence. Thirty-three! Now, I'm not going to go through all thirty-three, and I also want to point out that the thirty-three types he lists are really more like thirty-three facets of artificial intelligence. It's not thirty-three degrees of artificial intelligence, where we start with dumb machine and end with super smart robo master. It's more like different aspects of intelligence that are in various stages of development and research in the artificial intelligence field. So they're not really separate categories; they're more like specific implementations of intelligence. An example would be affective computing, affective being A-F-F-E-C-T-I-V-E, as in to affect something. Affective computing tries to suss out the emotions people are experiencing and to behave appropriately according to the parameters of the programming, not necessarily according to social rules. So these are machines that would be able to recognize emotions and respond in a way that is appropriate with respect to their actual programming. Another type that Spacey lists is computer vision, which is the area of computer science focused on analyzing and understanding visual information computationally. I've talked about this in episodes of TechStuff in the past.
Speaker 1: You've heard about various projects to train computers to recognize images, like pictures of cats versus other stuff, like pictures of DeLoreans. This, as it turns out, is really hard to do. This is one of those gaps we see between human intelligence and machine intelligence. Even with deep learning and artificial neural networks, this is really tricky stuff. It doesn't take a long time to teach your typical human concepts that they can then apply across a broad spectrum of examples. I like to use the specific example of a coffee mug, or just a mug. So imagine a mug. Now, the mug you are imagining probably looks a certain way. But if I were to show you a totally different mug, you would recognize it as a mug, even if it was a different size, a different color, a different shape of handle, a different shape of the container itself. As long as it adhered to the general parameters that we associate with the concept of a mug, you would know what I was talking about; you would know that it was a mug. Computers are not that good at this, right? It might require you to train a computer by showing it thousands or tens of thousands of images of mugs, so that it builds up the various elements that define mug-ness, so that if it were to look at a brand-new image of a mug, one unlike all the others that preceded it, it would still be able to identify it as, yes, that is a mug. This is hard to do. We've seen some advances in this field, but it demonstrates the huge gap in that specific part of intelligence between machines and humans. That isn't to say that it's not getting better with machines. It is. But that's just one example that I wanted to give, and it kind of hammers home this idea that general intelligence with machines is a long way off. There are just so many different aspects to it.
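As a toy sketch of what "building up mug-ness from thousands of examples" can mean, here is a nearest-centroid classifier in Python. It is nothing like a modern deep network, and the two-number feature vectors are invented for illustration, but it shows the basic move: average many labeled examples into a summary, then label a new image by whichever summary it sits closest to.

```python
import math

# Toy "mug-ness" learner: each image is reduced to a (hypothetical)
# feature vector, e.g. (handle-likeness, cylinder-likeness) in [0, 1].
training_data = {
    "mug":     [(0.9, 0.8), (0.8, 0.9), (0.7, 0.85), (0.95, 0.7)],
    "not_mug": [(0.1, 0.2), (0.3, 0.1), (0.2, 0.4), (0.05, 0.3)],
}

def centroid(vectors):
    # Average many examples into one summary point per class.
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(2))

centroids = {label: centroid(vs) for label, vs in training_data.items()}

def classify(features):
    # A new image gets the label of the nearest class summary,
    # even if it matches no training example exactly.
    return min(centroids, key=lambda label: math.dist(features, centroids[label]))

print(classify((0.75, 0.95)))  # an unseen, mug-like image -> "mug"
```

The interesting part is the failure mode: with so few examples, anything unusual (a cubical mug, say) lands nearer the wrong summary, which is why real systems need those thousands of training images.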
Speaker 1: Beyond this, Spacey continues to define various terms within AI, and I don't necessarily think of them as types of artificial intelligence, but again, as aspects of artificial intelligence. Not every AI implementation will need all of these aspects. Some of them are going to be much better off with a very narrow range of artificially intelligent features. For example, your Roomba probably does not need to be able to pick up on what your mood is, whether or not you're sad, or anything along those lines. But if you are talking about a machine that needs to have general intelligence in order to solve any given problem in front of it, it will need to have many, if not all, of those aspects of artificial intelligence in order to address any problem presented to it.

So this kind of gives you an idea of why talking about artificial intelligence is tricky: people are thinking about it in very different terms. Some people focus on the specific elements within artificial intelligence that are aspects of general intelligence. They're aspects of human intelligence, but they're very specific. It's not general AI; it's not an intelligent machine that you could hold a conversation with. It's more about a specific element: being able to analyze information, draw conclusions from that information, and potentially settle on a course of action based on those conclusions. There are so many different aspects of that within human intelligence that it makes it tricky to just say "AI" and paint with that broad brush. I think that ends up being misleading.

Now, next I'm gonna dive into some interesting ways of thinking about AI, as in: can you determine whether or not a machine actually does possess intelligence? How would we know that? How does one get to the conclusion that a machine can actually be intelligent?
Speaker 1: So if you were to look at something like what happened at Stanford: there was a computer program, a machine, designed to observe the movements of a pendulum, and, based upon multiple observations of those movements, the machine was able to suss out the basic laws of motion, just by looking at a pendulum swing. It was able to analyze those movements and come up with the laws of motion over the course of several hours, something that had taken humans centuries to do. Is that truly an intelligent machine? It doesn't necessarily understand anything else. It might be very intelligent in that specific implementation.
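As a much simpler cousin of that idea, here is a sketch of recovering one piece of pendulum physics from raw observations. This is my own toy, not the Stanford system: it assumes the small-angle relation T = 2π·sqrt(L/g), simulates noisy timing measurements, and uses a least-squares fit of T² against L to recover g.

```python
import numpy as np

# Toy "law from data" demo (not the Stanford system): simulate noisy
# pendulum timings, then recover g from the small-angle relation
#   T = 2*pi*sqrt(L/g)  =>  T^2 = (4*pi^2 / g) * L   (a line in L).

rng = np.random.default_rng(0)
g_true = 9.81
lengths = np.linspace(0.2, 2.0, 25)                   # pendulum lengths (m)
periods = 2 * np.pi * np.sqrt(lengths / g_true)       # ideal periods (s)
periods += rng.normal(0.0, 0.01, size=periods.shape)  # measurement noise

# Least-squares fit of T^2 = slope * L through the origin.
slope = np.sum(lengths * periods**2) / np.sum(lengths**2)
g_recovered = 4 * np.pi**2 / slope

print(f"recovered g = {g_recovered:.2f} m/s^2 (true value 9.81)")
```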
Speaker 1: How do we know when a machine is intelligent? We'll take a look at some potential answers to that question in just a minute, but first let's take another quick break to thank our sponsor.

Okay, so we've got Musk and Zuckerberg butting heads about whether or not artificial intelligence is going to end us. Let's say we're getting to a point where artificial intelligence is approaching something similar to what most people think of when they hear "artificial intelligence." I argue that most people, when they hear AI, think of a machine that's capable of processing information in a way that is analogous to the way humans think. Now, I know that you guys realize artificial intelligence covers a whole spectrum of topics, of computer science, of psychology, of data processing, that don't necessarily equate directly to thinking like a human being. But the average person, I would say, thinks AI means that a computer quote-unquote thinks the way a human does. How would we know when we reached that point? Well, a lot of people like to point at the Turing test, though largely through a misunderstanding of it. The Turing test, named after Alan Turing, was proposed in nineteen fifty. It's interesting because Turing was saying: if you were to create a machine, not even an artificially intelligent machine, just a machine that could converse with a person in such a way that the person could not be certain whether the entity they talked to was another human being or a machine, you would have to say that the machine possessed intelligence. "It would pass the Turing test" is the way we often put it today.

So typically you see experiments run this way. Contests are frequently held to see if any chatbots can beat the Turing test, and the way it typically works is that you have a series of online interactions. People log into a computer terminal, and there's a text-based communications platform, like instant messaging or a chat room, something along those lines. They type in their questions, their sentences, their introductions, whatever it may be, and then they get responses back. Those responses may be from another human being, or they may be from a computer. And essentially, if a certain percentage of the people going through this process are incapable of reliably detecting whether it's a human or a computer they're talking to, then the computer is said to have passed the Turing test, because it is able to replicate the behaviors of a person so realistically as to be indistinguishable from a person.
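As a tiny sketch of those contest mechanics: suppose each judge records a verdict after chatting, and the organizers pick some pass threshold. The verdicts below are invented, and the thirty percent figure echoes Turing's own 1950 prediction (that an average interrogator would have no more than a seventy percent chance of a correct identification after five minutes); real contests set their own rules.

```python
# Toy scoring of a Turing-test-style contest. Verdicts are invented;
# the 30% threshold echoes Turing's 1950 prediction, not a fixed rule.

judge_verdicts = [
    "human", "machine", "human", "machine", "machine",
    "human", "machine", "machine", "machine", "human",
]  # what each judge believed they were talking to (it was the bot)

fooled = sum(v == "human" for v in judge_verdicts)
fooled_rate = fooled / len(judge_verdicts)

PASS_THRESHOLD = 0.30
print(f"fooled {fooled_rate:.0%} of judges ->",
      "passed" if fooled_rate >= PASS_THRESHOLD else "failed")
```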
Speaker 1: And Turing would say, if you were to encounter another human being and that person were to hold a conversation with you, you would go ahead and assume that that other human being possesses the quality of intelligence. We cannot be sure that anyone we interact with possesses intelligence, because we cannot inhabit that person's being. If I have a conversation with you and you are talking back with me, I can't be sure that you're intelligent, because I cannot be you, just as you cannot be sure I am intelligent, because you cannot be me. But based upon my sentences to you, my communication with you, the fact that I'm listening to you and responding to what you have to say, and you in turn are doing the same with me, we would assume we each possess that quality known as intelligence. And Turing said, if a machine can fool you into thinking it's a person, you might as well extend it the same courtesy. If you cannot tell that it's a machine, and you would assume that a human being has intelligence, then why would you not assume the machine itself to have intelligence? This is sort of a cheeky way of talking about machine intelligence, and remember, this preceded the coining of the phrase artificial intelligence in the first place, so Turing was kind of having a little bit of fun with it.

And there have been contests to create chatbots to see if they can beat the Turing test, and there have been at least two or three that have said, yes, we did it, but they all kind of have a little asterisk after them. So, for example, there was one from a few years ago where a group had built a chatbot that claimed to be a young boy who did not speak English as his first language, but all the communications had to be in English. That was part of the actual event: all of the chatbots were supposed to communicate in English, and all the people who were interacting were supposed to be communicating in English as well. But this construct claimed to be a young boy from, I want to say, Russia or Ukraine, it might have been a Ukrainian identity, and the boy did not have a very deep understanding of pop culture in the West, and had a lot of limitations. But those limitations were known.
Speaker 1: You know, if you're communicating with this chatbot, and the chatbot claims to be a young boy from Ukraine who doesn't speak English as a first language, you're gonna cut that chatbot a lot of slack, because you're gonna think, well, anything that appears out of the ordinary as far as syntax or grammar is concerned is probably because English is not his first language. Any gaps in knowledge are due to the fact that, one, he has limited exposure to the same sorts of things that I have experienced, and two, he's young, so he's not gonna know a lot about older pop culture references. When you start putting in limitations like that, where you expect less from the person you're communicating with, it becomes easier, and "fool" is kind of a tricky word but I'll go ahead and use it, easier to fool someone into thinking that the chatbot is an actual person, because their level of expectation has been lowered based upon the scenario. There has not been, to date, a chatbot that has beaten the Turing test while representing a person who natively speaks the language in question and has a reasonable body of knowledge about the world and how the world works. No chatbot has come close to that yet. And even if one did, would you say that such a chatbot actually possessed true intelligence, or would it just seem like it did?

So let's look at Watson, IBM's platform that was famous for winning a game of Jeopardy against two former champions. It was able to come up with questions for the various answers. That's the way Jeopardy works: if you're not familiar with the game show, the clues are given to you in the form of an answer, and you have to come up with a question that relates to it. So if it said, "He is credited with inventing the lightbulb," you would say, "Who was Thomas Edison?" Well, Watson, this IBM construct, really a collection of APIs is what it is,
sat on top of an enormous platform of computers, with thousands of processors to run all the number-crunching that was going on behind the scenes. It had a big, big database of information, and it was able to weigh potential responses to any given clue, and if a response reached a certain threshold of confidence, Watson would submit it. So let's say it was eighty percent confidence; I think that was around the mark. If the machine went through its various databases and found something that matched the clue with eighty percent confidence or greater, it would buzz in and submit that, and more often than not it was right. But was it truly intelligent? Because it could seem to understand things like wordplay, and references that were not direct references, it seemed very clever. But you wouldn't say that it actually possesses the same sort of intelligence that a human being does.
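Here is a minimal sketch of that buzz-in rule. The candidate answers and confidence numbers are invented, and the real Watson's scoring pipeline was vastly more elaborate; this just shows the decision logic described above: rank hypotheses by confidence and only answer past a threshold.

```python
# Toy version of the confidence-gated buzz-in described above.
# Candidates and scores are invented; real Watson's pipeline was
# far more elaborate than a single lookup.

CONFIDENCE_THRESHOLD = 0.80  # roughly the mark mentioned in the episode

def decide(candidates):
    """candidates: list of (response, confidence) pairs for one clue."""
    best_response, best_confidence = max(candidates, key=lambda c: c[1])
    if best_confidence >= CONFIDENCE_THRESHOLD:
        return f"buzz in with '{best_response}'"
    return "stay silent"  # not confident enough to risk a wrong answer

clue_candidates = [("the lightbulb", 0.12),
                   ("Thomas Edison", 0.91),
                   ("Nikola Tesla", 0.42)]
print(decide(clue_candidates))  # -> buzz in with 'Thomas Edison'
```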
But was 568 00:39:23,760 --> 00:39:27,080 Speaker 1: it truly intelligent? Because it could seem to understand things 569 00:39:27,120 --> 00:39:33,799 Speaker 1: like wordplay and references that were not direct references, it 570 00:39:33,880 --> 00:39:38,600 Speaker 1: seemed very clever, but you wouldn't say that it actually 571 00:39:38,760 --> 00:39:42,359 Speaker 1: possesses the same sort of intelligence that a human being does. 572 00:39:43,400 --> 00:39:50,799 Speaker 1: Even with that implementation, one of the biggest objections, or 573 00:39:50,920 --> 00:39:55,160 Speaker 1: rather challenges, to machine intelligence and whether or not a 574 00:39:55,200 --> 00:40:00,719 Speaker 1: machine could ever be intelligent is called the Chinese room argument. Now, 575 00:40:00,800 --> 00:40:05,719 Speaker 1: this was proposed by John Searle, S E A R L E. 576 00:40:06,040 --> 00:40:10,120 Speaker 1: It's a philosophical thought experiment that really challenges this idea 577 00:40:10,160 --> 00:40:14,839 Speaker 1: that machines could be said to think or possess intelligence. 578 00:40:15,320 --> 00:40:19,040 Speaker 1: And he creates an analogy to computers to show how 579 00:40:19,239 --> 00:40:24,120 Speaker 1: a machine might appear to understand what's happening and yet 580 00:40:24,360 --> 00:40:29,600 Speaker 1: not have any actual intrinsic understanding. So he says, let's 581 00:40:29,640 --> 00:40:34,560 Speaker 1: try an experiment. Let's say that you are locked in 582 00:40:34,600 --> 00:40:37,600 Speaker 1: a room, and in that room you've got a table, 583 00:40:37,920 --> 00:40:40,920 Speaker 1: you've got some paper, you've got a pen, and you've 584 00:40:40,960 --> 00:40:46,480 Speaker 1: got an enormous book of instructions. And occasionally somebody from 585 00:40:46,520 --> 00:40:49,520 Speaker 1: outside the room slips a piece of paper under the door, 586 00:40:49,840 --> 00:40:51,520 Speaker 1: and when you pick up the paper, it has a 587 00:40:51,640 --> 00:40:56,600 Speaker 1: Chinese symbol on it, and you don't understand Chinese. You 588 00:40:56,600 --> 00:41:00,160 Speaker 1: only speak English in this scenario; even if you 589 00:41:00,200 --> 00:41:03,239 Speaker 1: are multilingual out there, just imagine for the 590 00:41:03,280 --> 00:41:07,520 Speaker 1: moment that you only understand English. The book you have, 591 00:41:07,719 --> 00:41:11,480 Speaker 1: the set of instructions, has all these different Chinese symbols, 592 00:41:11,560 --> 00:41:16,440 Speaker 1: Chinese characters, inside it, with specific instructions of what 593 00:41:16,640 --> 00:41:20,640 Speaker 1: to do when you get any particular Chinese character. So 594 00:41:20,680 --> 00:41:22,759 Speaker 1: you look at the one that's on the page that's 595 00:41:22,800 --> 00:41:24,759 Speaker 1: been slipped under the door, and you go through the 596 00:41:24,760 --> 00:41:26,840 Speaker 1: book and you look for a match, and you find it. 597 00:41:26,840 --> 00:41:31,799 Speaker 1: Imagine it says, when you get this symbol, draw this 598 00:41:31,960 --> 00:41:35,120 Speaker 1: other symbol and then slip it under the door. So 599 00:41:35,200 --> 00:41:38,160 Speaker 1: you do: you draw this other symbol and you slip 600 00:41:38,160 --> 00:41:41,600 Speaker 1: it under the door. Now, to a person outside the room, 601 00:41:42,000 --> 00:41:46,719 Speaker 1: it looks like you understand what is happening. 602 00:41:47,640 --> 00:41:51,160 Speaker 1: That person outside the room has written down something in Chinese, 603 00:41:51,680 --> 00:41:54,759 Speaker 1: slipped it under the door, and received a response in 604 00:41:54,920 --> 00:41:59,600 Speaker 1: Chinese in return. So to that person, you appear to 605 00:41:59,640 --> 00:42:04,319 Speaker 1: understand what's going on. But you, because you're monolingual, 606 00:42:04,440 --> 00:42:07,759 Speaker 1: you only speak English, you only understand English, don't 607 00:42:07,800 --> 00:42:11,799 Speaker 1: actually understand what those symbols mean. You don't know what 608 00:42:11,840 --> 00:42:13,799 Speaker 1: the symbols coming in mean, and you don't know what 609 00:42:13,840 --> 00:42:17,279 Speaker 1: the symbols you're writing mean. You're just following a set 610 00:42:17,320 --> 00:42:22,040 Speaker 1: of very specific instructions. Searle says that's what machines are doing. 611 00:42:22,560 --> 00:42:26,239 Speaker 1: They might appear to be understanding you, but really they're 612 00:42:26,280 --> 00:42:30,880 Speaker 1: just following instructions based upon the input they receive. And 613 00:42:30,920 --> 00:42:35,120 Speaker 1: there's no deeper level than that. It's really, when 614 00:42:35,160 --> 00:42:39,600 Speaker 1: you boil it down, a grand if-then statement: 615 00:42:40,200 --> 00:42:46,600 Speaker 1: if you receive this, then deliver that. Which is an 616 00:42:46,640 --> 00:42:51,479 Speaker 1: interesting idea, to say that the person inside the room 617 00:42:51,600 --> 00:42:55,280 Speaker 1: doesn't understand Chinese. They don't. They don't know the meaning 618 00:42:55,320 --> 00:42:59,319 Speaker 1: of the symbols in either direction, but they're still responding 619 00:42:59,400 --> 00:43:03,560 Speaker 1: properly by following those instructions. Could a computer be 620 00:43:03,600 --> 00:43:06,760 Speaker 1: said to be intelligent if that's all it's ultimately doing?
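If you want to see how shallow that grand if-then statement can be, here's a minimal sketch in Python. The rule book below is hypothetical, a two-entry stand-in for Searle's enormous book of instructions, but the structure is the whole point: the responder matches symbols and copies out the prescribed reply without ever consulting their meaning.

```python
# The Chinese room as a pure lookup table (illustrative only; the
# rules are made up). The responder never knows what a symbol means;
# it only matches what comes under the door and copies out a reply.

RULE_BOOK = {
    "你好": "你好吗？",  # "when you get this symbol, draw this other symbol"
    "谢谢": "不客气。",
}

def person_in_room(slip_of_paper: str) -> str:
    # Look for a match in the book and slip the prescribed reply
    # back under the door; no understanding happens anywhere here.
    return RULE_BOOK.get(slip_of_paper, "")  # unknown symbol: no reply

print(person_in_room("你好"))  # from outside the door, this looks fluent
```

Memorizing the table, as in Searle's reply to the systems objection below, just moves the dictionary into the person's head; nothing about the lookup itself changes.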
621 00:43:07,200 --> 00:43:11,279 Speaker 1: There are objections to this argument. One of them, I'm 622 00:43:11,320 --> 00:43:15,080 Speaker 1: just going to illustrate one, is that the person in 623 00:43:15,120 --> 00:43:20,000 Speaker 1: the room is not the whole system. They're one component 624 00:43:20,040 --> 00:43:22,440 Speaker 1: of the system. In a computer, you could argue, the 625 00:43:22,480 --> 00:43:27,240 Speaker 1: person represents the processor, for example, and maybe the book 626 00:43:27,400 --> 00:43:30,080 Speaker 1: represents the memory. But if you were to take the 627 00:43:30,120 --> 00:43:34,840 Speaker 1: whole thing, the room, the book, the person, the paper, 628 00:43:35,160 --> 00:43:40,520 Speaker 1: the pen, you group all of that together as a system. 629 00:43:40,560 --> 00:43:44,480 Speaker 1: Some people say, well, no, the system itself as a 630 00:43:44,560 --> 00:43:49,640 Speaker 1: whole, quote unquote, understands Chinese, even if the one component 631 00:43:49,719 --> 00:43:54,880 Speaker 1: within the system does not. Searle said, I call shenanigans 632 00:43:55,000 --> 00:43:58,879 Speaker 1: on that argument, because what if you just memorized all 633 00:43:58,880 --> 00:44:03,960 Speaker 1: the instructions, so you've internalized it all? You know everything 634 00:44:04,000 --> 00:44:08,560 Speaker 1: you're supposed to do whenever you encounter a specific 635 00:44:08,880 --> 00:44:12,640 Speaker 1: Chinese character: you see the Chinese character, you know what 636 00:44:12,680 --> 00:44:14,600 Speaker 1: the response is supposed to be. You don't have to 637 00:44:14,640 --> 00:44:18,279 Speaker 1: consult a book or anything. You still don't understand what 638 00:44:18,320 --> 00:44:21,960 Speaker 1: it is you're doing. You're just following the instructions that 639 00:44:22,040 --> 00:44:27,120 Speaker 1: you know you're supposed to follow. That, Searle says, does 640 00:44:27,160 --> 00:44:29,759 Speaker 1: not represent intelligence. Now, there are a lot of other 641 00:44:29,800 --> 00:44:33,280 Speaker 1: objections to Searle's thought experiment. There's a lot of heated 642 00:44:33,320 --> 00:44:37,520 Speaker 1: debate about the Chinese room argument, and it's all very fascinating. 643 00:44:37,680 --> 00:44:40,680 Speaker 1: And if you think this is interesting, you should definitely 644 00:44:40,800 --> 00:44:44,480 Speaker 1: look up the Chinese room argument, because there's a lot that's 645 00:44:44,480 --> 00:44:49,040 Speaker 1: been written about it, and it's amazing to really put 646 00:44:49,040 --> 00:44:51,239 Speaker 1: your mind to it and start thinking about the philosophy 647 00:44:51,320 --> 00:44:54,319 Speaker 1: of intelligence and whether or not it would ever be 648 00:44:54,440 --> 00:44:57,280 Speaker 1: possible for us to truly determine if a machine possesses 649 00:44:57,440 --> 00:45:01,480 Speaker 1: that quality. But there are some practical things that we should 650 00:45:01,480 --> 00:45:07,160 Speaker 1: consider too. Now, again, Musk was worried about machines potentially 651 00:45:07,160 --> 00:45:09,400 Speaker 1: turning on people. I mean, the example he gave was 652 00:45:09,440 --> 00:45:13,200 Speaker 1: a robot going down the street and killing everybody. 653 00:45:14,080 --> 00:45:18,680 Speaker 1: A robot deciding to do this on its own? Not entirely possible.
For a robot 654 00:45:18,800 --> 00:45:25,040 Speaker 1: that has any sort of deadly abilities, whether it's 655 00:45:25,239 --> 00:45:29,840 Speaker 1: a soldier robot or something else, there are possibilities of malfunctions, 656 00:45:29,920 --> 00:45:34,640 Speaker 1: or of misidentifying someone as a target as opposed to, 657 00:45:35,080 --> 00:45:38,279 Speaker 1: you know, an innocent person. And in those cases you 658 00:45:38,280 --> 00:45:41,919 Speaker 1: would say, well, that's clearly a programming error. But it's 659 00:45:41,920 --> 00:45:46,239 Speaker 1: not like the machine is deciding to turn against humans. 660 00:45:46,880 --> 00:45:51,279 Speaker 1: It's more like the machine is making an incorrect conclusion 661 00:45:51,360 --> 00:45:55,480 Speaker 1: based upon its programming. And again you might say, well, 662 00:45:55,480 --> 00:45:57,880 Speaker 1: that doesn't have anything to do with artificial intelligence. It 663 00:45:57,920 --> 00:46:00,319 Speaker 1: doesn't have anything to do, at least intrinsically, with this 664 00:46:00,400 --> 00:46:04,680 Speaker 1: concept of AI. And if you were to create regulations, 665 00:46:05,000 --> 00:46:09,560 Speaker 1: how do you regulate that? How do you regulate artificial intelligence? 666 00:46:09,560 --> 00:46:12,560 Speaker 1: Where do you put the limitations? At what point 667 00:46:12,600 --> 00:46:17,719 Speaker 1: do you say, don't let computers do this? Because if 668 00:46:17,719 --> 00:46:21,799 Speaker 1: you cannot define the problem, how do you create the 669 00:46:21,920 --> 00:46:25,359 Speaker 1: limitation to prevent the problem from happening? And a lot 670 00:46:25,400 --> 00:46:27,560 Speaker 1: of people argue that no one is able to really 671 00:46:27,600 --> 00:46:30,319 Speaker 1: define what this problem is. People are worried about an 672 00:46:30,360 --> 00:46:34,799 Speaker 1: abstract concept that they cannot define, and therefore there's no 673 00:46:34,960 --> 00:46:41,560 Speaker 1: way to create a regulation that is remotely relatable 674 00:46:42,000 --> 00:46:44,200 Speaker 1: to the issue. If you can't define the problem, you 675 00:46:44,239 --> 00:46:48,160 Speaker 1: cannot create a solution to it. Some people point at 676 00:46:48,640 --> 00:46:54,760 Speaker 1: different problems: not an existential crisis where robots are seeking 677 00:46:54,840 --> 00:46:59,759 Speaker 1: us out and turning us into fertilizer, but perhaps a 678 00:46:59,760 --> 00:47:04,400 Speaker 1: future where automation itself is taking away enough jobs 679 00:47:05,120 --> 00:47:10,839 Speaker 1: to cause massive economic crises. And there's been a lot 680 00:47:10,880 --> 00:47:13,880 Speaker 1: written about this over the last few years. It seems 681 00:47:13,920 --> 00:47:18,880 Speaker 1: like every month another article comes out with either a 682 00:47:20,080 --> 00:47:25,440 Speaker 1: terribly pessimistic prediction of how many jobs will be 683 00:47:25,520 --> 00:47:29,240 Speaker 1: lost due to automation within the next five years, or 684 00:47:30,080 --> 00:47:34,560 Speaker 1: a completely optimistic point of view of how many jobs 685 00:47:34,560 --> 00:47:36,839 Speaker 1: are going to be created as a result of automation, 686 00:47:36,920 --> 00:47:39,960 Speaker 1: and therefore people are going to have better jobs.
Those 687 00:47:40,000 --> 00:47:44,680 Speaker 1: who are all for automation say the jobs that are 688 00:47:44,760 --> 00:47:48,040 Speaker 1: going to be phased out by automation are going 689 00:47:48,080 --> 00:47:49,840 Speaker 1: to be the ones that people don't want to do 690 00:47:49,920 --> 00:47:52,719 Speaker 1: in the first place. They're gonna be the dirty, dangerous, 691 00:47:52,719 --> 00:47:57,120 Speaker 1: and dull jobs. So, jobs that are repetitive and 692 00:47:57,239 --> 00:48:01,960 Speaker 1: not interesting and therefore no one wants to do them; 693 00:48:02,160 --> 00:48:05,000 Speaker 1: jobs that put people at risk, where it would 694 00:48:05,000 --> 00:48:06,840 Speaker 1: be better to put a machine at risk, because you 695 00:48:06,840 --> 00:48:09,120 Speaker 1: can replace a machine but you can't really replace a 696 00:48:09,239 --> 00:48:18,560 Speaker 1: human; or the jobs that just take 697 00:48:18,560 --> 00:48:22,120 Speaker 1: too much human effort to do, where the 698 00:48:22,960 --> 00:48:27,200 Speaker 1: payoff does not equal the amount of effort needed in 699 00:48:27,280 --> 00:48:31,399 Speaker 1: order to complete the job. There are others who say, well, 700 00:48:31,480 --> 00:48:34,560 Speaker 1: automation is gonna go to the jobs that are the easiest 701 00:48:34,600 --> 00:48:37,319 Speaker 1: to automate, which are not always necessarily going to be 702 00:48:37,400 --> 00:48:40,479 Speaker 1: ones that fall into those categories. And then you've got 703 00:48:40,520 --> 00:48:44,760 Speaker 1: people who may be in an area of the workforce where 704 00:48:44,800 --> 00:48:48,640 Speaker 1: they don't have the training or education to pursue jobs 705 00:48:48,920 --> 00:48:53,200 Speaker 1: that are at a higher level. There are counterarguments 706 00:48:53,239 --> 00:48:56,640 Speaker 1: to this as well. Some people say that automation will 707 00:48:56,640 --> 00:49:00,480 Speaker 1: create more jobs because it'll create more opportunities, with the 708 00:49:00,520 --> 00:49:04,040 Speaker 1: example of, say, something like the automation of an Amazon warehouse. 709 00:49:04,600 --> 00:49:08,400 Speaker 1: One of the arguments is that automation will bring prices down. 710 00:49:08,800 --> 00:49:11,720 Speaker 1: As prices come down, people will buy more. As people 711 00:49:11,760 --> 00:49:14,839 Speaker 1: buy more, these warehouses will have to get bigger. As 712 00:49:14,880 --> 00:49:18,360 Speaker 1: the warehouses get bigger, more humans will be needed. Even 713 00:49:18,360 --> 00:49:24,160 Speaker 1: though each human will be responsible for less stuff, there'll 714 00:49:24,160 --> 00:49:27,279 Speaker 1: be such a large demand for things that it will 715 00:49:27,320 --> 00:49:30,480 Speaker 1: more than compensate. This argument is based off 716 00:49:30,520 --> 00:49:37,680 Speaker 1: the Industrial Revolution, when the loom was created and people 717 00:49:37,719 --> 00:49:40,440 Speaker 1: were starting to realize the potential of the loom to 718 00:49:40,640 --> 00:49:45,400 Speaker 1: speed up weaving quite a bit. There were real concerns 719 00:49:45,480 --> 00:49:48,880 Speaker 1: that the loom was going to plunge the world into poverty, 720 00:49:48,960 --> 00:49:51,960 Speaker 1: because all these people who had been making a living 721 00:49:52,040 --> 00:49:55,560 Speaker 1: weaving would suddenly find themselves out of work.
The truth 722 00:49:55,680 --> 00:49:59,960 Speaker 1: is that there was a much greater call for weavers. 723 00:50:00,160 --> 00:50:03,879 Speaker 1: Because the price of woven materials began to fall, more 724 00:50:03,920 --> 00:50:06,040 Speaker 1: people began to buy them, and then there was an 725 00:50:06,040 --> 00:50:10,200 Speaker 1: increased demand for the very thing that people were afraid 726 00:50:10,400 --> 00:50:14,760 Speaker 1: was going to become a rarity. And so people became weavers; 727 00:50:14,800 --> 00:50:17,000 Speaker 1: it's just that they were weaving with looms instead of 728 00:50:17,040 --> 00:50:22,839 Speaker 1: weaving by hand. So there are those optimists who say this 729 00:50:22,920 --> 00:50:26,000 Speaker 1: revolution with automation is going to be the same thing. 730 00:50:27,600 --> 00:50:30,520 Speaker 1: Others say no, because it will happen way too fast. 731 00:50:31,000 --> 00:50:34,480 Speaker 1: Automation is going to change the world so quickly and so dramatically 732 00:50:34,520 --> 00:50:37,120 Speaker 1: that we will not be able to react 733 00:50:37,160 --> 00:50:39,040 Speaker 1: to it in that same way, and we will be 734 00:50:39,080 --> 00:50:42,520 Speaker 1: plunged into an economic crisis. People like Elon Musk have 735 00:50:42,640 --> 00:50:45,640 Speaker 1: argued that this means we should probably look at something 736 00:50:45,680 --> 00:50:49,960 Speaker 1: like a universal basic income, where everyone is guaranteed a 737 00:50:50,000 --> 00:50:53,759 Speaker 1: certain amount of money per year by the government so 738 00:50:53,880 --> 00:50:57,719 Speaker 1: that they can live. They can meet their 739 00:50:57,840 --> 00:51:02,399 Speaker 1: necessary requirements for the basics of human existence, like 740 00:51:02,880 --> 00:51:07,879 Speaker 1: food and shelter and clothing. Those who still have 741 00:51:08,000 --> 00:51:11,880 Speaker 1: work will be able to afford more. This leads to 742 00:51:12,480 --> 00:51:16,799 Speaker 1: some saying that Mark Zuckerberg's response of, hey, automation and 743 00:51:16,880 --> 00:51:20,520 Speaker 1: AI is going to provide lots of amazing stuff for 744 00:51:20,640 --> 00:51:25,680 Speaker 1: us, is accurate, but only for people in Zuckerberg's class, 745 00:51:26,160 --> 00:51:29,319 Speaker 1: people who are already at a level of wealth and 746 00:51:29,360 --> 00:51:33,040 Speaker 1: privilege where they will be able to enjoy those benefits, 747 00:51:33,120 --> 00:51:36,200 Speaker 1: because their jobs are not in danger of being automated 748 00:51:36,360 --> 00:51:39,920 Speaker 1: the way other jobs are. So there is a different 749 00:51:40,040 --> 00:51:44,239 Speaker 1: crisis that's on the horizon, according to many people, and 750 00:51:44,280 --> 00:51:48,160 Speaker 1: it all has to do with automation, not just artificial intelligence. 751 00:51:48,440 --> 00:51:51,160 Speaker 1: Automation doesn't have to incorporate a whole lot of AI 752 00:51:51,320 --> 00:51:53,680 Speaker 1: in order for it to be a threat to jobs.
753 00:51:54,760 --> 00:51:57,880 Speaker 1: But the verdict is still out as to how great 754 00:51:57,880 --> 00:52:00,719 Speaker 1: a disruption it really will be, and whether 755 00:52:00,840 --> 00:52:04,640 Speaker 1: people will be left behind, or whether we truly will 756 00:52:04,680 --> 00:52:08,839 Speaker 1: find new jobs for people, new opportunities that 757 00:52:08,920 --> 00:52:12,400 Speaker 1: will end up being superior to what they would have 758 00:52:12,440 --> 00:52:16,200 Speaker 1: done otherwise. It's an unanswered question, but it's one that 759 00:52:16,360 --> 00:52:18,799 Speaker 1: a lot more people are asking. It is not directly 760 00:52:18,840 --> 00:52:22,640 Speaker 1: related to what Musk and Zuckerberg were bickering about, however. 761 00:52:24,560 --> 00:52:31,200 Speaker 1: So I was looking over some information about various jobs 762 00:52:31,239 --> 00:52:36,319 Speaker 1: that have the potential of becoming automated, and I 763 00:52:36,400 --> 00:52:40,160 Speaker 1: found one on the Atlas dot com that was pretty interesting. 764 00:52:41,360 --> 00:52:45,760 Speaker 1: According to this research, the jobs that have the most 765 00:52:45,760 --> 00:52:50,520 Speaker 1: potential for being automated include accommodation and food services, at 766 00:52:50,560 --> 00:52:55,279 Speaker 1: a whopping seventy-three percent potential for automation. Now, that does not 767 00:52:55,400 --> 00:52:58,520 Speaker 1: mean that all those jobs will be automated, but it 768 00:52:58,560 --> 00:53:01,120 Speaker 1: does mean that it is the most likely out of 769 00:53:01,200 --> 00:53:06,560 Speaker 1: all the different categories listed to undergo automation. Other ones 770 00:53:06,640 --> 00:53:09,919 Speaker 1: that are high up on the list include manufacturing, transportation 771 00:53:09,960 --> 00:53:15,800 Speaker 1: and warehousing, which we're seeing with Amazon, and agriculture. Retail trade 772 00:53:15,880 --> 00:53:20,600 Speaker 1: is at fifty-three percent, with mining close behind. The ones that are at 773 00:53:20,640 --> 00:53:27,040 Speaker 1: the lowest end include educational services and management, down around thirty percent, 774 00:53:28,040 --> 00:53:31,440 Speaker 1: and boy howdy is that going to cause a rift 775 00:53:31,520 --> 00:53:35,800 Speaker 1: if that is true. And then there's a vague category 776 00:53:35,840 --> 00:53:42,040 Speaker 1: called professionals. I'm assuming professionals means people who are 777 00:53:42,040 --> 00:53:44,640 Speaker 1: working white-collar jobs that have a lot of variety 778 00:53:44,719 --> 00:53:47,920 Speaker 1: to them, and not folks named Leon who go around 779 00:53:48,560 --> 00:53:57,040 Speaker 1: with pistols. That's a Leon: The Professional reference. So, is 780 00:53:57,719 --> 00:54:02,040 Speaker 1: Musk right? Are we going to see AI rise up 781 00:54:02,080 --> 00:54:05,000 Speaker 1: against humans? I don't think that's going to happen in 782 00:54:05,040 --> 00:54:09,080 Speaker 1: the near future. I think AI does pose some challenges, 783 00:54:09,239 --> 00:54:13,640 Speaker 1: and if it is incorporated in ways that we don't 784 00:54:14,719 --> 00:54:18,160 Speaker 1: fully think through, it can cause at least short-term harm, 785 00:54:18,200 --> 00:54:20,600 Speaker 1: if not long-term harm. But I don't think it's 786 00:54:20,640 --> 00:54:22,640 Speaker 1: an existential crisis, and I don't think it's something 787 00:54:22,640 --> 00:54:27,120 Speaker 1: we need to worry about regulating at the moment.
Zuckerberg's argument 788 00:54:27,160 --> 00:54:29,240 Speaker 1: that it's going to improve all our lives, I don't quite 789 00:54:29,239 --> 00:54:34,200 Speaker 1: buy that either. I think that it will have 790 00:54:34,440 --> 00:54:37,759 Speaker 1: a minor impact on most people's lives from a 791 00:54:37,960 --> 00:54:45,000 Speaker 1: direct perspective. And if automation does end up affecting more people, 792 00:54:45,360 --> 00:54:50,000 Speaker 1: then obviously that's a negative impact. I think they're both 793 00:54:50,760 --> 00:54:54,160 Speaker 1: slightly wrong, and there have been some writers who have 794 00:54:54,200 --> 00:54:57,760 Speaker 1: suggested that perhaps this argument is not really about AI, 795 00:54:57,880 --> 00:55:04,480 Speaker 1: but more about Zuckerberg and Musk both promoting images of themselves. 796 00:55:05,400 --> 00:55:08,160 Speaker 1: The argument is not really about artificial intelligence. It's about, 797 00:55:08,320 --> 00:55:12,000 Speaker 1: I stand for this, that's what my reputation is based on, 798 00:55:12,360 --> 00:55:16,239 Speaker 1: therefore I need to continue in this vein. And I 799 00:55:16,280 --> 00:55:19,560 Speaker 1: find it particularly interesting that Musk is talking about AI 800 00:55:19,719 --> 00:55:25,480 Speaker 1: being this potentially dangerous situation, since it is a 801 00:55:25,600 --> 00:55:31,400 Speaker 1: very important component of both Tesla and SpaceX, and so 802 00:55:31,440 --> 00:55:33,960 Speaker 1: he's walking a very fine line. It's also an important 803 00:55:33,960 --> 00:55:39,640 Speaker 1: component of his proposed tunnel system from the Boring Company, 804 00:55:39,760 --> 00:55:44,080 Speaker 1: which is all about earth-boring machines, not things that 805 00:55:44,120 --> 00:55:48,680 Speaker 1: are dull. So it's an interesting debate. I'm not going 806 00:55:48,760 --> 00:55:52,120 Speaker 1: to get involved in it online, because neither Musk nor 807 00:55:52,200 --> 00:55:54,520 Speaker 1: Zuckerberg knows who I am, and honestly, I think I 808 00:55:54,640 --> 00:55:57,279 Speaker 1: prefer it that way. But I'm curious to hear what 809 00:55:57,360 --> 00:56:00,479 Speaker 1: you guys think about this AI debate. Do you think 810 00:56:00,520 --> 00:56:04,600 Speaker 1: that we're in an existential crisis or heading toward one, 811 00:56:05,280 --> 00:56:08,760 Speaker 1: or do you think it's much ado about nothing? I'm curious. 812 00:56:08,840 --> 00:56:11,719 Speaker 1: Let me know, send me messages. My email is tech 813 00:56:11,800 --> 00:56:14,040 Speaker 1: stuff at how stuff works dot com, or you can 814 00:56:14,040 --> 00:56:16,799 Speaker 1: always drop me a line on Facebook or Twitter. The 815 00:56:16,840 --> 00:56:18,640 Speaker 1: handle for both of those for the show is 816 00:56:18,719 --> 00:56:22,200 Speaker 1: TechStuffHSW. Remember, you can go to www dot 817 00:56:22,239 --> 00:56:25,720 Speaker 1: twitch dot tv slash tech stuff to watch me record 818 00:56:25,760 --> 00:56:28,440 Speaker 1: these episodes live. You can chat with me in the 819 00:56:28,520 --> 00:56:31,040 Speaker 1: chat room; just go to twitch dot tv slash 820 00:56:31,080 --> 00:56:33,800 Speaker 1: tech stuff. You'll find the schedule right there. I record 821 00:56:33,840 --> 00:56:35,799 Speaker 1: Wednesdays and Fridays. I hope to see you in 822 00:56:35,800 --> 00:56:44,600 Speaker 1: there, and I'll talk to you again really soon.
For 823 00:56:44,680 --> 00:56:47,000 Speaker 1: more on this and thousands of other topics, visit 824 00:56:47,080 --> 00:56:57,920 Speaker 1: how stuff works dot com