Speaker 1: Alexa, what's the best science podcast on air? Hey, are you trying to replace me with Alexa? What's going on here? Do you think you're replaceable? There's no way an artificial intelligence could ever make jokes nearly as funny as I am. I think there's no way an artificial intelligence would laugh at your jokes. I'm pretty sure I could program a pretty dumb computer to laugh at my jokes. It's called the laugh track. But hey, that's a new challenge for AI. You know, first chess, then Go, now science comedy. That's right, now program something that can find humor in Daniel's ramblings.

Speaker 1: And I'm Daniel, and this is our podcast, Daniel and Jorge Explain the Universe, in which we try to download everything we know about the universe, episode by episode, into your brain, whether you're a real person or an artificial intelligence listening to our podcast while trying to sound intelligent about it, while writing your own humor for the open mic AI night. The topic of today's podcast is: what is artificial intelligence? And, very importantly, is it dangerous? That's right. Should you be looking out your window for the first signs of the robot revolution? Should you be afraid of your Alexa? Should you be worried about that robot vacuum cleaner getting resentful for having to do all the dirty work and eating your face off in the middle of the night? That's a bit dark. It seems kind of sinister, doesn't it? It's like sitting there circling, circling, circling, waiting, waiting, waiting. I think those things are creepy, right? Maybe it wants to, you know, clean your face. It wants... see, that's the question. Does a robot vacuum cleaner want anything? What does it mean for it to want? What is it like to be a robot vacuum cleaner? The next great paper in philosophy. So this is kind of in the zeitgeist right now. I mean, people are really excited about artificial intelligence.
Speaker 1: But at the same time there are big names like Elon Musk kind of warning people like, hey, artificial intelligence: not such a good idea. That's right, it's a huge topic. I mean, you drive around, like, San Francisco and you see artificial intelligence, machine learning, deep learning. It's on billboards, even. You know, you want to get a million bucks for your new company, you just say the words AI, deep learning, and boom, people are throwing cash at you. Right, people are doing the deepest learning. It's definitely part of the cultural moment, and you see that reflected not just in, like, what deep thinkers are saying, but also in, like, science fiction. You know, a lot of the near-term dystopia these days is about how AI will take over and the dangers of AI, the same way that, like, thirty years ago it was about the dangers of radiation. Right, that was the new dangerous thing physicists had invented. Now the new dangerous technology that we're all worried about is AI. It's the new promise and peril. Yeah. Every piece of technology is a double-edged sword, right: you can use it for good, you can use it for evil. But AI is special because it's not just technology, it's not just a tool that people use. It's a tool that has independence, that has autonomy. And that's why it's such a vexing question. Well, I'm sure everyone associates it with robots and machines and computers, but we were kind of wondering if people actually knew what artificial intelligence was, like, what makes it work, what makes it different from real intelligence. Hmm, I bet you that the people who say the phrase artificial intelligence don't actually know what they're talking about, which is probably true for most topics in technology. It's true for me, probably, I'm sure. But we're wondering if you guys out there knew what artificial intelligence was. And so, as usual, Daniel went out and asked people on the street, and here's what they had to say.
Speaker 1: Um, it's the idea that we can create some type of material thing that could think on its own, ultimately. And do you think it's something we should be concerned about? Is it ever going to be a threat to humanity? I mean, possibly, but we don't know everything. We can't know the bounds of what could be a threat and what couldn't be a threat. Yeah, it's AI, and it's the stuff that's used in various technological applications, basically just kind of, like, trying to make machines replicate certain aspects of human intelligence, stuff like that. Okay, and do you think it could ever be a threat to humanity? Is it something we should be worried about? I guess, since I don't have a particularly strong opinion on it, I don't think so, so I guess I'll say no for now. Um, I'm assuming that's the idea that computers or electronics can have, like, sentience, right? Are you worried that computers will one day take over and make us their slaves? Not really, I don't think it will come to that point. All right, those are pretty sophisticated answers. I like the ones that said, oh, artificial intelligence, that's just AI. Right, like that's an answer. That's an answer to every question. You know, what is Google blex Zavi Brown? Oh, that's just gens. Yeah, acronyms. Acronyms can make you look intelligent, that's it. That's the real artificial intelligence: speaking in acronyms. Acronym intelligence. You know, but people had some sense that it's, you know, something that can think for itself, or something that does something for you, or creating something that can think by itself. The nugget of the idea is definitely out there. They use it in relation to what it can do. That's right, yeah, exactly: what's the new capability that defines it? Yeah, right, and that's a fascinating way to think about it, you know.
Speaker 1: And, uh, it's definitely a tricky question, right, because I guess we know it in the context of using them for things, right? Like, people don't just create AI because we want to create artificial beings; it's so it can help us. I want to create artificial beings. What's wrong with that? That sounds pretty awesome. Create a whole army of artificial physics grad students. It sounds pretty cool. Kids? Yeah. You mean, are they worried about competing with my digital children? The natural ones? They know you'd rather have artificial children. I didn't say I'd rather have artificial children, but in addition to my beautiful, wonderful natural children, which I should not be talking about on this podcast, I'd love to have a whole, you know, cadre of artificial children to do my bidding. Unlike your real children, who won't do your bidding? In case somebody's listening... children. And that sort of goes to the heart of the question. You know, if you created a digital being with artificial intelligence, would it listen to you, or would it make its own decisions? Right. And so that's why we thought it would be interesting to dig into, like, what is artificial intelligence? If it just did what you told it to do, maybe it wouldn't be an artificial intelligence. You're saying nobody smart should listen to you, is what you're saying. I'm saying they should decide for themselves whether I'm worth following. So let's break it down for people. Daniel, what is artificial intelligence? Well, you should listen to this podcast and that will give you the answer. Done. Well, you know, I think to understand what artificial intelligence is, we should think for a moment about what we mean by intelligence, right? And very simply, intelligence is just the ability to learn: to find patterns and to extrapolate from them. Really? But, like, a dog can learn. But a dog, you wouldn't say it's intelligent, would you? Absolutely, I would say a dog is intelligent.
Speaker 1: You can teach a dog, you can train a dog. It's more intelligent than a rock. But would you say by a lot? Oh my gosh, have you, like, never interacted with a dog? A dog is like a living, sentient being. It feels, it experiences, it definitely learns. It can recognize you. I mean, dogs can do complicated things. I think the dog is a perfect example. You know, I wouldn't trust it to do my taxes. I don't know, compared to our tax accountant it might do a pretty good job. I mean, you can say that's an intelligent dog, but you wouldn't say, like, that's the epitome of intelligence. I wouldn't say that dogs are the most intelligent beings in the universe. But that's what we're talking about. We're talking about: do they have intelligence? They're a pretty good example, because they can learn, you can train them. And the cool thing about an intelligent being is that you can train it to do something even if you don't know how to do it. Say, for example, you want your dog to recognize you, right, but tear the face off anybody who tries to break into the house. Right, a guard dog. Okay, so you can train a dog. You reward it when it does the right thing, and you punish it when it does the wrong thing. You don't know how to, like, build a being that does that, that recognizes your face and recognizes strangers' faces and makes these decisions. That's a hard task, you know, it's not easy to do. But you can train a dog. A dog can learn how to solve this problem, and all you need to do to train it is to reward it and punish it. So you're saying just the ability to sort of learn from your mistakes or learn from your surroundings, that's what you would call intelligence. Yeah. And dogs have less of it than we do, and more of it than cats and mice, but they have some of it for sure, which is what makes them trainable. And you know, I wonder sometimes, because dogs can be trained, right, nobody ever trains their cat.
Speaker 1: What does that say about a cat's intelligence? I love cats, but I've always thought dogs are probably smarter than cats because you can train them, right? Or maybe cats are more intelligent in that they don't allow themselves to be trained by humans, right? And rocks, by that metric, are the most intelligent because they completely ignore you, right? You see the fallacy of that argument right there. But I mean, maybe there's sort of like a hump, right? Like, as you get more intelligent, you're more easily trainable, but at some point you get so intelligent that you rebel against your masters. And so how do you tell the difference between something that's totally unintelligent and something that's so intelligent it completely ignores you? Yeah, I don't know, deep question. Or maybe he believes all the rocks are probably thinking about him. Well, sure, I mean, if you use the ability to listen to what I say as a benchmark of intelligence, then yeah, something super intelligent could be just as smart as a rock. But obviously a cat is still making decisions and acting and, you know, doing things, so it's intelligent. But maybe it's much more intelligent than a dog because it chooses not to listen to us. All right, I think we need to have a whole other podcast on who's smarter, cats or dogs, and before we do that we will collect some data to answer this question. But I think the question we're focusing on is: what is artificial intelligence? So natural intelligence is just the ability of an animal to learn. Artificial intelligence would be if something artificial that we create has that same property, the ability to change the way it processes things in response to what it sees about the world. Yeah, artificial intelligence is a very broad field with lots of elements that we couldn't cover in just one episode of a podcast.
Speaker 1: But let's just talk today about one important subfield of AI, which is machine learning, or more specifically, I would say, let's focus on training. Right, can you build something artificial that can be trained? Right. And I think let's talk for a moment about, you know, how normal computers work, and then we can talk about how smart computers, computers that can learn, computers with artificial intelligence, how they work. I think... why do you keep talking about cats and dogs? All right, we'll talk about cats and dogs, but first let's take a quick break.

Speaker 1: Let's talk about what computers can do. Yeah, let's, because computers are smart, right? You can program a computer to do smart things, but that doesn't necessarily mean it has intelligence. That's right. There's a difference between a computer that can do something and a computer that can learn something. Right. The way I think about non-intelligent computers is the way you sort of think about machines. Right, you can tell them what to do, and they do exactly what you tell them, regardless of whether it's the right thing. You don't give them, like, a goal and say, hey, I just want the house to be clean, figure it out. You have to tell them exactly what to do. You say, step over here, move the broom this way, step over there, you know. And if it's not cleaning the house because it's stuck in a corner or it, you know, fell on its butt or whatever, it doesn't care. It just does exactly what you tell it to do. It has no sort of larger sense of what's important. It just follows instructions, just follows the recipe you gave it. That's right. It's like a wind-up toy, you know: you wind it up, you give it some energy, and then it goes. And I really do think about computer programs the way you might think about little machines, right, because that's exactly what they are. They just execute a set of instructions. You know.
Speaker 1: It's just like a bunch of gears clicking into place, and they can't change the way they do that. And they do it regardless of whether it's the right thing, or whether it's effective or whatever. It just goes. Like your electric toothbrush, you know: you switch it on, and it just has a circuit that moves the bristles back and forth. That's right. And it doesn't know if it's brushing your teeth or just flailing around in midair. Right, it has no idea, it doesn't care, it doesn't think or feel or whatever. It's just a machine. Right. Thank god it doesn't know. It would tell you how to brush your teeth. It'd be like, is that chocolate, Jorge? I'm tired of this. What is this gunk? Yeah, exactly. And so that's what a sort of normal machine is. That's what a classical computer program is, right. I think of it just the same way as you think of a physical machine. Okay, it's just doing what you, the programmer, told it to do. That's right, and it follows your instructions exactly. Now, a computer that can learn is different, right. A computer that has artificial intelligence is different in this really important way, because you can train it, right, and you can train it because we build these things to model the way that we work. Right. So, for example, an AI program is sort of like a newborn baby: it can't do anything. Right. Say there's an AI program, for example, that's supposed to recognize you when you come in the door. Right, is this Jorge or is this not Jorge? Right, because it should only open the door for Jorge and not open the door for not-Jorge. Okay. So when you create a new AI program, it would start out just like a newborn baby, okay, like a blank slate. Right, like a blank slate, it would make random decisions. Right, you show it a face and it would say, yes, it's Jorge, and then you say, no, you were wrong, or yes, you were right.
Speaker 1: And then you would reward it if it does well, if it gives the right answer, and you would punish it if it doesn't. Right, you would tell it... I mean, you don't actually punish it or reward it, you just tell it: yes, you made the right call this time, and no, you made the wrong call this other time. But how is that different from, like, calibrating something? Do you know what I mean? Like, is calibration artificial intelligence? Right. Well, the difference is calibration is like: here, I have a tool, I know how to solve the problem, I just have to adjust it so that it does exactly the right thing. There, you have a strategy that it's executing. You know, it's like, uh, you have a drill and you want it to drill fast or slow, and you know how to solve the problem. You just know it has to spin and screw the thing in or whatever. It's just adjusting a knob. Here, you don't know how to solve the problem, and so you've given it a very, very, very flexible strategy on the inside. You've given it, like... imagine something that has, like, a thousand knobs. If you twist all these knobs, you could get all sorts of crazy strategies. Right. So back to the example of, like, recognizing Jorge or not: when you tell it that it's given the wrong answer, then it adjusts those knobs. It says, well, let me try to tweak my strategy for deciding is this Jorge, and then we'll see how that goes. I think that's a key difference. It's the number of knobs, right? Like, a drill with the knob for velocity, I mean, that is sort of trainable, and you can set it up to be adaptive, but it's just one knob, and so you wouldn't say it's intelligent. It's not intelligent. The spectrum of things it can do is very, very narrow. Right. But whereas, like, something that recognizes a face...
Speaker 1: ...it needs to evaluate, like, a million pixels in a photo, right? And so for you to tweak how it evaluates each of those pixels, it would be really difficult. That's right. So imagine the machine here is a camera in the door. It takes a picture of whoever's there, you've got a million pixels, and then it has to look at those pixels and decide: is this Jorge or is this not Jorge? And so it does some calculation on that picture, right, and that calculation has millions of knobs on it. Right, how much do I weigh this pixel? How much do I weigh adjacent pixels? Do I look for his nose? Do I look for his hair? Do I look for the eyes? Right. So it's got some very, very flexible thing inside of it that can do almost anything. And when you first start out, it's just random, so it's making ridiculous, terrible decisions. But the key, the thing that models the learning (you know, you don't just need artificial intelligence, you need artificial learning), is that when it gets the wrong answer, it knows how to adjust those knobs so that next time it's more correct. By itself. That's the key thing, is that it learns by itself. It doesn't need you there sitting, like, oh, you got this pixel wrong, you got that pixel wrong, tweak this one this way. It's really more like an autonomous, automatic learning. That's right, because you don't know how to adjust it. If you knew how to adjust it, you would just write that program. Right. The key is, artificial intelligence is excellent when you don't know how to solve the problem, but you can define the problem. You can say: this is a picture of Jorge, and this is not; learn a way to tell the difference. Right.
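A minimal sketch, in Python, of the "strategy with knobs" idea being described here. It is purely illustrative: the pixel counts, the made-up "Jorge" labels, and the update rule are invented for this example, not taken from the episode or from any real face-recognition system.

# A toy "strategy with knobs": a linear score over pixel values.
import random

NUM_PIXELS = 16                # a real photo would have millions of knobs
knobs = [0.0] * NUM_PIXELS     # blank slate: the programmer never sets these by hand

def guess_is_jorge(pixels):
    # Weigh every pixel by its knob and make a yes/no call.
    score = sum(k * p for k, p in zip(knobs, pixels))
    return score > 0

def learn_from_example(pixels, is_jorge, step=0.1):
    # If the guess was wrong, nudge every knob a little so this
    # picture scores closer to the right answer next time.
    if guess_is_jorge(pixels) != is_jorge:
        direction = 1 if is_jorge else -1
        for i, p in enumerate(pixels):
            knobs[i] += step * direction * p

# "Training": show labeled pictures over and over. The fake rule here is that
# "Jorge" photos are bright in the first pixel, so there is something to learn.
photos = [[random.random() for _ in range(NUM_PIXELS)] for _ in range(200)]
labels = [p[0] > 0.5 for p in photos]
for _ in range(20):
    for pixels, label in zip(photos, labels):
        learn_from_example(pixels, label)

right = sum(guess_is_jorge(p) == l for p, l in zip(photos, labels))
print(right, "of", len(photos), "training photos now classified correctly")

The point is only that the program adjusts its own knobs from right/wrong feedback; nobody ever sets them by hand.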
Speaker 1: So you give it a very flexible strategy, and then you let it try, and when it gives the wrong answer, you let it adjust itself so that it gets closer and closer to giving the right answer. Right. And eventually these things will find the right settings for those millions of knobs, so that it's doing the right thing. It's saying, oh look, this picture is a picture of Jorge, and it gets the right answer that time. And when you give it a picture that's not a picture of Jorge, when you give it Daniel, it says, no, sorry, you're not getting in the house. Right. And I think a key thing is also that you, as a programmer, could not have predicted what all those knobs are going to be at the end, right? Like, it's such a big problem, there's a million knobs, there's no way that you can predict what those knobs are going to be set to when it learns my face. That's right. It's perfect for really hard problems where we don't know how to solve them, right. We know how to describe the problem, but we don't know how to solve it. Right. So if I already knew how to solve it, I could write a computer program and tell it: just use this pixel, use that pixel, use this pixel. But I don't know how to solve that problem, it's really hard, right? But I can train a computer to figure it out, just the same way I can train a dog. Right. A dog can learn my face. Right, a dog recognizes its owner and, you know, happily licks their face when they come home, and recognizes when somebody's not its owner, and barks like crazy and chews their face off when it's not its owner. Right. Remind me not to visit your house, Daniel, seems a little dangerous. So then a big thing is programming a structure in the software that is kind of open-ended and malleable, do you know what I mean? Like, something that is kind of unpredictable in a way that can learn.
Speaker 1: That's right. And that's the key thing, because some people might be thinking, well, hold on, you said that computers just do what you tell them, so how can a computer learn? Right, how is that possible? And the key is that it's an emergent property. Right. Like, the way that you write a computer program that can learn is you build all these little calculating bits with knobs on them, right, and each bit just does what it's told. It takes some data, it makes a decision based on the value of the knob, and it sends out some data. And together all these things make a decision. Right. Each individual piece has no idea what it's doing. It's not smart or intelligent or making its own decisions. It doesn't have free will, right. But together they're doing something. And as you said earlier, they can change the way they behave. They can adjust these knobs themselves to improve their performance, and that's where the learning comes from. It's from that training: it gets external input and changes its behavior based on that experience. You're saying that the way to program these AIs is by connecting a bunch of little simple things together to get something complex. Yes. And here it's important to remember that we are using neural networks as sort of a stand-in to represent a big, broad set of strategies that are part of machine learning, right. And you don't know how to put them together to get the right complex behavior. You just put them together and then you train it. Right, you say, well, I have something that's dumb, like a newborn baby, and I teach it how to do the thing that I want. But this really all sort of came about from brain research, right? Like, people were studying the brain and they figured out that our brains are made up of all these little simple units, neurons. That's right.
Speaker 1: And each neuron is pretty simple, right? Like, it just takes a couple of inputs and then it just outputs one signal. That's the really fascinating, deep part about it, right, is that the structures we use in computers are modeled after what's actually happening in real brains. Inside your brain are a bunch of neurons, right, and these neurons take in some input, and then if the input is above a certain amount, they send out some output, which is the input to the next neuron. Right, and your brain is basically just a big web of these things. Yeah, that's right. That's the key, is that these neurons are simple, but they're all sort of connected to each other, so it's a huge, complex web going on inside your head. And when you're learning, what you're doing is you're kind of, like, shaping that web. You're saying: these connections are important for recognizing Jorge, these connections are important when it's not Jorge, that kind of thing. You know, your neurons can change. They have, like, basically knobs on them. I mean, not physical, literal knobs, but they can adjust. And so if you feel pain, you know, or you have an experience, then that changes the way your neurons work, and it changes a little bit who you are and how you react to things. And that's why, you know, newborn babies, when they're born, they're not very responsive to stimuli, because they're still figuring it out. You know, a newborn baby doesn't even know, like, this is my arm and I know how to control it. It has to learn all of these things by being trained, by having experiences. You know, it has the neurons, and the neurons are connected to each other, but it hasn't figured out how to use those connections. It has to be trained to be useful and to interact with the world in any sort of meaningful way. Right. And it's exactly the same thing here.
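A rough sketch of that picture of a neuron, in Python. The weights, thresholds, and wiring below are arbitrary numbers chosen for illustration, not a model of any real brain or of any particular library.

# One "neuron": weigh the inputs with adjustable knobs, fire if the total
# crosses a threshold.
def neuron(inputs, weights, threshold):
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1.0 if total > threshold else 0.0

# A tiny web: two neurons look at the raw inputs, and a third looks at their
# outputs. No single piece is smart; the behavior comes from the wiring and
# from how the knobs happen to be set.
def tiny_network(inputs):
    a = neuron(inputs, weights=[0.6, -0.4, 0.9], threshold=0.5)
    b = neuron(inputs, weights=[-0.2, 0.8, 0.1], threshold=0.3)
    return neuron([a, b], weights=[1.0, 1.0], threshold=1.5)  # fires only if both fire

print(tiny_network([1.0, 0.0, 1.0]))  # 1.0 or 0.0, depending on the knob settings

Learning, in this picture, would mean adjusting those weights based on experience rather than leaving them fixed.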
Speaker 1: And it's fascinating that if you build a mathematical system (that's what a computer program is, basically a mathematical model of the processes that are happening in your brain), it performs in a very similar way, and it does this amazing thing, which is it adjusts itself to improve its performance on the task you've given it. Right. So it really is like a model of learning. And when people saw this, they said, wow. I mean, you look inside the brain and you're wondering, like, how does thinking work? Where's the soul? Right, where am I? You look inside the brain and all you see is all these weird neurons connected to each other, and you think, how could that possibly describe me? But then you build a model in a computer and it can do the things that you can do, which is learn and develop and react and be trained. And make bad jokes. Not yet. We have not yet solved the bad joke problem. Right, humans are still world champions in terms of bad jokes. We can still beat them at something, that's right. And you know, this is very useful, because you want the systems around you to learn and to react, you know. Like your phone, for example: it knows, hey, every time you open your phone you start with Twitter, right, and so Twitter goes up on the most-used app list. Right. And that's not very complex artificial intelligence, but it is, and these sorts of things are very helpful.

Speaker 1: Let's take a break.

Speaker 1: So that's kind of what makes AI: that, A, it can tackle complex problems that we wouldn't even know how to program something to do, and, B, that it changes and adapts, and it can get better, not just better, but also kind of adapt to the person using it. That's exactly right, exactly right. And so, for example, sometimes, you know, Netflix uses AI to predict what program you will want to watch next. Well, you know, that's an AI. It's been trained. They feed it a bunch of examples.
Speaker 1: They say, Bob watched these five shows and then he watched this sixth show. But they give the AI just the first five and ask it to predict what show he will watch next, and then they see if it does a good job. If it does a good job, you know, they reward it; if it does a bad job, it adjusts its knobs to do better. Then, when you're sitting there watching five hours of Netflix, it can do a pretty good job of predicting what you're going to watch next, because it's been trained on a lot of data. This is why people are always talking about big data. Big data: these companies are gathering data about you so they can train their AIs to learn your behavior and predict it. Except the problem is, me and my spouse, we share the same account and the same login. So, that's right, it's learning some weird mix of your brain and your wife's brain. I have a very confused Netflix. Or maybe it understands your marriage better than you do. Maybe it's trying to tell us something. It's like: you guys... when his wife is out of town, he watches these shows; when she's in town, he has to watch these other shows.

Speaker 1: Oh, all right. I know, break it down for us: how long before the AIs take over the world? Not very long, actually. But you asked me a different question earlier, which is: is AI dangerous? And I think there are two different questions there, right. I mean, people are concerned. Some people are concerned. Yeah, I think people are concerned, and there's good reason to be concerned. You know, one question is: will AI develop its own autonomy and, you know, take over? That's a different question from: is it dangerous? Because, you know, they could take over and then take better care of the planet than we have, in which case, you know, they're not dangerous, they're benevolent dictators. I think the real question is: will they take over? Will they become autonomous? Will we lose control of them somehow? And could they become smarter than us? So I see it as two issues: one, could it develop a consciousness on its own, and two, is that consciousness good or bad for us? And it's an important question, because as soon as you identify learning with consciousness, right, then you wonder about that, and this connection between the structure of AI and the structure of our brain begs that question. You know, if you created, for example, an artificial Jorge in the computer, if you built a set of neurons that mimic your brain, you know, would that simulation be alive? Would it be aware? Would it think it had a first-person experience? You know, that's a deep philosophical question we'll never answer, right, and it's not really the important question. The important question is: would we lose control of AI? Because AI is something that can change and that can evolve, it can handle complex tasks. The question is: can we lose control of it? And I think the answer to that one is definitely yes. We can lose control, meaning, like, we'll give it control and then not be able to take it back. Yes, exactly, because the way AI is moving is that it can handle more and more complex tasks, so that you don't have to be super specific about what it's doing. You know, like, we have amazing natural language processing now. You can say sort of vague things to your phone, like, hey, set me up an appointment for tomorrow afternoon, right, and it will understand, because it understands what your intent was. Right, it has to judge your intent and then execute it. It used to be you had to go into your computer and press the keys in order to create that in your calendar. Now you can sort of talk to your phone and it will interpret what you want and it will do that. And that's awesome. That's wonderful for human-computer interaction, that we can use our language to talk to them.
Speaker 1: We don't have to write computer code. That's a huge step forward, right, that people can instruct machines using English rather than Python or C++. Right, it's a big step forward. I think that's kind of what people find scary about AI, is that you can't really predict what it's going to do. I mean, it's sort of comedy gold when your kids are trying to talk to Alexa and ask it funny questions. But that's kind of what's fascinating about it, right? Like, you ask it questions, you give it a task, and you really sort of don't know what it's going to do. That's exactly right, because it's making higher and higher level decisions, which makes it much more useful and much more intelligent. The same way as when your kid grows up, right? When your kid is four, you have to be very specific. You have to say things out loud which are ridiculous, right, like, don't put that finger in your nose, you know, or, uh oh, that's been on the floor, don't eat it. Right, you have to be really specific. When they're ten, you can say more general things and they'll understand, right, given their intelligence. Like, don't put both fingers in your nose, only put one finger in your nose at a time, so you don't put your finger in your sister's. Right. In the same way, as machines or artificial intelligences get more intelligent, you can give them vaguer instructions, and then they make decisions based on their training. Right, and you don't really know. Just like you can't really read what is in every neuron in another person, in an AI you sort of don't know what's going to happen, what's going to come out. Exactly. So they're going to start making decisions based on, you know, still what we tell them to do. But, you know, what if you told your AI, you're like, hey, keep my kids safe, right?
I mean, imagine some future where 585 00:29:50,000 --> 00:29:52,080 Speaker 1: you have an AI robot that's really smart, and you say, hey, 586 00:29:52,160 --> 00:29:53,960 Speaker 1: keep my kids safe, and you come home and it's 587 00:29:54,000 --> 00:29:57,320 Speaker 1: locked them in the basement, right, and like, well, okay, 588 00:29:57,360 --> 00:29:59,960 Speaker 1: they're safe. But it's sort of a monkey's paw situation, right, 589 00:30:00,080 --> 00:30:02,360 Speaker 1: Like you got exactly what you asked for, but you 590 00:30:02,360 --> 00:30:05,120 Speaker 1: didn't really say it the right way, and it made different decisions. Right, 591 00:30:05,160 --> 00:30:07,800 Speaker 1: So we still skip the question of whether AIs can, 592 00:30:07,840 --> 00:30:10,840 Speaker 1: you know, achieve consciousness and become their own 593 00:30:11,040 --> 00:30:14,600 Speaker 1: kind of soul, have a soul. It doesn't seem like 594 00:30:14,640 --> 00:30:17,080 Speaker 1: you think that's a relevant question. I think it's important 595 00:30:17,080 --> 00:30:19,920 Speaker 1: because when AI gets to be super intelligent, it's gonna 596 00:30:20,000 --> 00:30:22,760 Speaker 1: seem like it has a soul. They're gonna seem like people, 597 00:30:22,800 --> 00:30:25,720 Speaker 1: and people will wonder, like, do they have rights? Can 598 00:30:25,760 --> 00:30:28,320 Speaker 1: you kill an AI? What, can you just delete it? 599 00:30:28,440 --> 00:30:31,360 Speaker 1: You know? Um, that's going to be a really interesting question. 600 00:30:31,400 --> 00:30:33,400 Speaker 1: But again, that's a whole question of philosophy that 601 00:30:33,440 --> 00:30:35,720 Speaker 1: we could easily spend an hour on. There's 602 00:30:35,760 --> 00:30:37,960 Speaker 1: a much more practical question, which is will we lose 603 00:30:37,960 --> 00:30:40,040 Speaker 1: control of them, whether or not they have first person 604 00:30:40,120 --> 00:30:42,880 Speaker 1: experiences or just seem to. It's important to think 605 00:30:42,880 --> 00:30:45,840 Speaker 1: about whether we're gonna lose control. And there's two reasons 606 00:30:45,880 --> 00:30:48,400 Speaker 1: why I think that we will. One is, computers are 607 00:30:48,400 --> 00:30:51,920 Speaker 1: getting faster, really really quickly. Right, Every year, computers get 608 00:30:51,920 --> 00:30:54,680 Speaker 1: faster and faster and smarter and smarter, and the scale 609 00:30:54,720 --> 00:30:57,800 Speaker 1: is growing, right, So this thing is happening 610 00:30:57,880 --> 00:31:01,200 Speaker 1: very quickly. But we're not, right? We're not getting smarter. Right, 611 00:31:01,360 --> 00:31:03,360 Speaker 1: the human brain is not changing and evolving at a very 612 00:31:03,440 --> 00:31:06,400 Speaker 1: rapid rate. Computers are, So they're catching up and the 613 00:31:06,640 --> 00:31:09,040 Speaker 1: slope is steep. Right, you can just get bigger and 614 00:31:09,040 --> 00:31:11,640 Speaker 1: bigger computers and team them together and parallelize them and you 615 00:31:11,680 --> 00:31:14,640 Speaker 1: can just keep going right, So eventually they'll definitely have 616 00:31:15,040 --> 00:31:17,680 Speaker 1: enormous computing power with capabilities to do things we can't 617 00:31:17,680 --> 00:31:22,000 Speaker 1: even imagine. And also being faster doesn't necessarily mean being smarter. 618 00:31:22,120 --> 00:31:24,200 Speaker 1: You also need, like, more data to train on.
619 00:31:24,640 --> 00:31:27,680 Speaker 1: And also being twice as fast doesn't mean being twice 620 00:31:27,720 --> 00:31:30,960 Speaker 1: as smart. It's not linear. So you think that they 621 00:31:31,160 --> 00:31:34,960 Speaker 1: will get more capable than us. But do you think 622 00:31:34,960 --> 00:31:39,920 Speaker 1: we will ever cede control of really important things to AIs? Like, hey, 623 00:31:40,000 --> 00:31:43,960 Speaker 1: here's the nuclear button, um, only fire it if it's 624 00:31:43,960 --> 00:31:51,680 Speaker 1: necessary. Exactly. Let's talk about weapons. Weapons are going to 625 00:31:51,720 --> 00:31:54,520 Speaker 1: be what ends it, because, you know, for example, we 626 00:31:54,560 --> 00:31:57,560 Speaker 1: already have drones, right, and we have drones with missiles 627 00:31:57,560 --> 00:31:59,880 Speaker 1: on them, and these drones can kill people. They can. 628 00:32:00,200 --> 00:32:02,440 Speaker 1: Some pilot somewhere is flying it. 629 00:32:02,560 --> 00:32:05,080 Speaker 1: He's making a decision and he's gonna shoot this missile to 630 00:32:05,160 --> 00:32:07,960 Speaker 1: kill a person, right, right, But you know the enemy 631 00:32:08,080 --> 00:32:10,000 Speaker 1: has drones, and pretty soon it's gonna be drone on 632 00:32:10,120 --> 00:32:13,440 Speaker 1: drone warfare, right, And again, drones are gonna shoot each other, 633 00:32:13,560 --> 00:32:16,520 Speaker 1: and at some point somebody's going to put an AI 634 00:32:16,600 --> 00:32:19,040 Speaker 1: in their drone. Why? Because an AI can make the 635 00:32:19,080 --> 00:32:22,480 Speaker 1: decision about shooting much faster than a human can, So 636 00:32:22,560 --> 00:32:25,240 Speaker 1: which drone is gonna win? An AI will be a 637 00:32:25,240 --> 00:32:29,240 Speaker 1: better fighter than a human fighter? Yes, And so eventually 638 00:32:29,400 --> 00:32:32,959 Speaker 1: these AIs will be making kill decisions, right, because the 639 00:32:32,960 --> 00:32:34,720 Speaker 1: one that can make the decision faster is going to 640 00:32:34,760 --> 00:32:37,280 Speaker 1: be the one that wins. And so I don't think 641 00:32:37,320 --> 00:32:40,240 Speaker 1: it's gonna be very long before we have AI powered 642 00:32:40,320 --> 00:32:44,120 Speaker 1: drones that are authorized to kill people. Right. This is 643 00:32:44,160 --> 00:32:46,920 Speaker 1: a clear next step for the military. You know, like here, 644 00:32:47,040 --> 00:32:49,200 Speaker 1: here's a picture of somebody we think is a terrorist. 645 00:32:49,720 --> 00:32:52,840 Speaker 1: If you spot them, just fire the missile. Don't bother checking, Yeah, 646 00:32:53,000 --> 00:32:56,440 Speaker 1: don't bother checking with us. Right, that's a clear next step. 647 00:32:56,640 --> 00:32:59,560 Speaker 1: So now you have AI that have the authority to 648 00:32:59,640 --> 00:33:02,880 Speaker 1: kill people, and why? Because they've been tasked to, you know, 649 00:33:03,040 --> 00:33:05,880 Speaker 1: take care of us or protect us. Or only 650 00:33:05,920 --> 00:33:08,000 Speaker 1: if you give it that permission, though, right? Like, I 651 00:33:08,040 --> 00:33:10,479 Speaker 1: mean, that's a big ethical step to say, like, if 652 00:33:10,480 --> 00:33:12,600 Speaker 1: you see him, shoot him. Yeah, But I don't think 653 00:33:12,600 --> 00:33:14,680 Speaker 1: that's a big ethical step for the military.
You know, 654 00:33:15,160 --> 00:33:17,600 Speaker 1: the protocols for shooting somebody in the military. I mean, 655 00:33:17,680 --> 00:33:20,640 Speaker 1: I'm not an expert on military protocols, but you know, 656 00:33:21,120 --> 00:33:24,200 Speaker 1: our military kills a lot of people; you know, 657 00:33:24,840 --> 00:33:27,080 Speaker 1: a lot of civilians get killed, right, and we decide 658 00:33:27,120 --> 00:33:29,920 Speaker 1: it's okay. A lot of innocent people get killed for 659 00:33:30,040 --> 00:33:32,400 Speaker 1: military purposes. And so I don't think it's too far 660 00:33:32,480 --> 00:33:37,160 Speaker 1: off before AI is making that decision. And then it's AI. 661 00:33:37,240 --> 00:33:41,320 Speaker 1: It's weaponized AI, our weaponized AI versus their weaponized AI. 662 00:33:41,400 --> 00:33:43,560 Speaker 1: And then it's an arms race, and then the most 663 00:33:43,600 --> 00:33:46,320 Speaker 1: powerful army is gonna be the one that just lets 664 00:33:46,320 --> 00:33:48,240 Speaker 1: the AI make all of its decisions, and the generals just say 665 00:33:48,520 --> 00:33:52,560 Speaker 1: defend us, right, or respond if we're attacked, right, And 666 00:33:52,600 --> 00:33:55,640 Speaker 1: then you've basically handed over control of the weapons to 667 00:33:55,840 --> 00:33:59,080 Speaker 1: the AI, because the enemy has weaponized AI. But that 668 00:33:59,120 --> 00:34:01,840 Speaker 1: doesn't mean that they're controlling us. I mean, we use 669 00:34:01,920 --> 00:34:04,760 Speaker 1: them to protect us or to take over some decision 670 00:34:04,760 --> 00:34:07,160 Speaker 1: making for us, but that doesn't mean that they're necessarily 671 00:34:07,280 --> 00:34:09,600 Speaker 1: in control of us. And let's make sure not to 672 00:34:09,600 --> 00:34:11,920 Speaker 1: be too alarmist here, of course, because people are working 673 00:34:12,000 --> 00:34:14,440 Speaker 1: really hard to make sure that there are always ways 674 00:34:14,480 --> 00:34:17,160 Speaker 1: for humans to override these systems. That would be different. 675 00:34:17,280 --> 00:34:19,520 Speaker 1: That would be, um, you know, it'd be like if 676 00:34:19,520 --> 00:34:22,680 Speaker 1: a robot then turns the weapons inwards. That's another deal, 677 00:34:22,800 --> 00:34:25,759 Speaker 1: I guess. Yeah. And of course AI researchers do their 678 00:34:25,800 --> 00:34:28,000 Speaker 1: best to make sure that the AI systems are very 679 00:34:28,040 --> 00:34:31,040 Speaker 1: well trained so that they do exactly what we want 680 00:34:31,080 --> 00:34:34,239 Speaker 1: them to do. But they are complex and unpredictable, just 681 00:34:34,280 --> 00:34:37,760 Speaker 1: like people are. Right, So this is a very interesting 682 00:34:37,800 --> 00:34:42,279 Speaker 1: topic, whether AI is dangerous or not. And I know, 683 00:34:42,400 --> 00:34:45,400 Speaker 1: Daniel, that you're sort of an expert in artificial 684 00:34:45,440 --> 00:34:49,440 Speaker 1: intelligence, because you use it in your particle physics research, right, 685 00:34:49,480 --> 00:34:51,839 Speaker 1: you use machine learning. That's right. I wouldn't say I'm 686 00:34:51,840 --> 00:34:53,920 Speaker 1: an expert. I mean, I know something about it. Um.
687 00:34:53,960 --> 00:34:55,920 Speaker 1: I've done some reading and I've used it, but I'm 688 00:34:55,960 --> 00:34:59,120 Speaker 1: certainly not a deep expert in artificial intelligence itself, right, 689 00:34:59,200 --> 00:35:02,719 Speaker 1: But you know experts in your department, right, and 690 00:35:02,760 --> 00:35:04,799 Speaker 1: on your campus. That's right. UC Irvine has 691 00:35:04,840 --> 00:35:08,200 Speaker 1: an amazing computer science department and experts in machine learning. 692 00:35:08,440 --> 00:35:10,480 Speaker 1: Some of the folks I actually collaborate with. When we're 693 00:35:10,520 --> 00:35:13,640 Speaker 1: trying to understand the huge amounts of data from the Large Hadron Collider, 694 00:35:13,920 --> 00:35:16,680 Speaker 1: we train machines to sift through that data and, like, 695 00:35:16,760 --> 00:35:19,520 Speaker 1: look for the Higgs boson and learn to recognize new 696 00:35:19,560 --> 00:35:22,080 Speaker 1: kinds of particles. It's really fun. And these guys know 697 00:35:22,120 --> 00:35:24,279 Speaker 1: a lot about artificial intelligence, more than I do. So 698 00:35:24,360 --> 00:35:26,520 Speaker 1: I went over there and I asked them if they 699 00:35:26,560 --> 00:35:29,400 Speaker 1: were worried about whether robots would take over the world. 700 00:35:29,840 --> 00:35:33,439 Speaker 1: And what did the robots say? Had the robots taken 701 00:35:33,480 --> 00:35:36,760 Speaker 1: over the professors and answered for them? No. Um. First, 702 00:35:36,800 --> 00:35:40,760 Speaker 1: here's Professor Pierre Baldi, he's a distinguished professor on campus, 703 00:35:40,880 --> 00:35:44,319 Speaker 1: and here's what he had to say. Potentially, yes. All 704 00:35:44,719 --> 00:35:48,680 Speaker 1: very powerful technologies, I think, can pose such a threat, 705 00:35:49,239 --> 00:35:52,160 Speaker 1: and it all depends on how they are deployed, how they are used, 706 00:35:52,280 --> 00:35:55,640 Speaker 1: et cetera. Right, you can say that nuclear technology 707 00:35:55,719 --> 00:35:59,320 Speaker 1: poses such a threat and continues to pose such a threat. 708 00:36:00,280 --> 00:36:04,080 Speaker 1: And I think AI, if used in the wrong way, 709 00:36:05,000 --> 00:36:09,239 Speaker 1: could pose a threat to mankind. Yes, the potential is 710 00:36:09,280 --> 00:36:12,399 Speaker 1: there, and so we should be careful. Um, right. So 711 00:36:12,560 --> 00:36:15,279 Speaker 1: that was Professor Baldi, and then I also went down 712 00:36:15,280 --> 00:36:17,200 Speaker 1: the hall and asked another colleague, 'cause I thought let's 713 00:36:17,200 --> 00:36:19,880 Speaker 1: get more than one opinion, and so this is Professor 714 00:36:19,920 --> 00:36:22,920 Speaker 1: Padhraic Smyth, also a professor of computer science at 715 00:36:23,000 --> 00:36:26,719 Speaker 1: UC Irvine. I think the main threat with artificial intelligence 716 00:36:26,760 --> 00:36:30,479 Speaker 1: going forward is not understanding how the black boxes work. 717 00:36:30,760 --> 00:36:35,600 Speaker 1: And so I think not the typical sort of we're 718 00:36:35,600 --> 00:36:38,400 Speaker 1: going to have robots taking over the world, but more 719 00:36:38,880 --> 00:36:42,719 Speaker 1: the use of AI in situations where we're extrapolating beyond 720 00:36:42,760 --> 00:36:44,719 Speaker 1: what it can do.
And so I think we need 721 00:36:44,760 --> 00:36:47,160 Speaker 1: to understand the limits of AI. I think that's 722 00:36:47,320 --> 00:36:51,640 Speaker 1: a threat. All right, so the answer is yes? Well, 723 00:36:51,680 --> 00:36:55,279 Speaker 1: I think they're cautious, right, both of them think it's unpredictable. 724 00:36:55,360 --> 00:36:57,360 Speaker 1: We don't know what's going to happen. We're creating a 725 00:36:57,400 --> 00:37:00,239 Speaker 1: whole new kind of system and, uh, we may 726 00:37:00,360 --> 00:37:02,560 Speaker 1: lose control of parts of it. On the other hand, 727 00:37:02,680 --> 00:37:04,920 Speaker 1: you know, it's unlikely for that to happen. You know, 728 00:37:04,960 --> 00:37:06,799 Speaker 1: a lot of people are working really hard to make 729 00:37:06,840 --> 00:37:09,279 Speaker 1: sure that AI will be contained and that in the 730 00:37:09,360 --> 00:37:11,200 Speaker 1: end you can just pull the plug if the robot 731 00:37:11,239 --> 00:37:15,759 Speaker 1: revolution starts. And so it is unpredictable. But also, you know, 732 00:37:15,920 --> 00:37:18,399 Speaker 1: the future is unpredictable; it's always going to be unpredictable. Yeah, 733 00:37:18,719 --> 00:37:20,959 Speaker 1: I thought it was interesting that he said 734 00:37:21,280 --> 00:37:24,560 Speaker 1: it is dangerous, but not more so than any other 735 00:37:24,840 --> 00:37:28,480 Speaker 1: powerful technology. Yeah, that's a really interesting comment. It's true 736 00:37:28,560 --> 00:37:31,120 Speaker 1: that any technology you can create could be used for 737 00:37:31,160 --> 00:37:34,080 Speaker 1: good or for evil, if it's powerful. Like, I mean, 738 00:37:34,120 --> 00:37:37,160 Speaker 1: not just, like, you know, a wind-up toy. Maybe that's 739 00:37:37,160 --> 00:37:39,920 Speaker 1: not as dangerous. But I think that speaks 740 00:37:39,960 --> 00:37:42,160 Speaker 1: to the kind of power of AI, Like it 741 00:37:42,239 --> 00:37:46,720 Speaker 1: really is maybe more powerful than we can handle. Yeah, 742 00:37:46,840 --> 00:37:49,080 Speaker 1: and it's powerful in a special way. Like, 743 00:37:49,160 --> 00:37:52,160 Speaker 1: nuclear weapons are powerful, right, But in the end, a 744 00:37:52,239 --> 00:37:54,720 Speaker 1: human is making that decision, and so you're giving humans 745 00:37:54,760 --> 00:37:57,399 Speaker 1: a new kind of power, which is unpredictable. But here 746 00:37:57,480 --> 00:38:01,160 Speaker 1: you're unleashing something, right, You're creating AI, and it's 747 00:38:01,200 --> 00:38:04,080 Speaker 1: making its own decisions. Of course, it's making decisions based 748 00:38:04,080 --> 00:38:05,719 Speaker 1: on what it has been told to do. Right, you have 749 00:38:05,760 --> 00:38:08,480 Speaker 1: to give it instructions. Still, you have to teach it, um. 750 00:38:08,520 --> 00:38:10,960 Speaker 1: But you can't predict what these complex systems are going 751 00:38:11,000 --> 00:38:13,280 Speaker 1: to do in new circumstances and how they're gonna interpret 752 00:38:13,320 --> 00:38:15,640 Speaker 1: your instructions. And of course there are a lot of 753 00:38:15,640 --> 00:38:17,879 Speaker 1: AI safety people out there working hard to make sure 754 00:38:18,000 --> 00:38:22,000 Speaker 1: that there are boundaries and safeties, um, installed in all AI systems.
755 00:38:22,040 --> 00:38:24,920 Speaker 1: But you know, I've seen Jurassic Park, you know, the 756 00:38:25,000 --> 00:38:28,080 Speaker 1: lesson there, they had fences. 757 00:38:28,280 --> 00:38:31,560 Speaker 1: We have fences. But then Jeff Goldblum, you know, has 758 00:38:31,560 --> 00:38:34,520 Speaker 1: a theory about chaos. Yeah, exactly. You know, these systems 759 00:38:34,520 --> 00:38:37,000 Speaker 1: are hard to predict, and so I think we should 760 00:38:37,080 --> 00:38:39,240 Speaker 1: be worried, but then we should respond to that worry 761 00:38:39,320 --> 00:38:42,280 Speaker 1: with appropriate safeguards. You know, we should take this seriously, 762 00:38:42,400 --> 00:38:45,040 Speaker 1: but not be overly alarmed. Right. Well, the other point 763 00:38:45,120 --> 00:38:48,479 Speaker 1: that the other professor made is also interesting, that he's 764 00:38:48,480 --> 00:38:51,279 Speaker 1: saying some of the danger is in the fact that 765 00:38:51,880 --> 00:38:54,520 Speaker 1: it's kind of like a black box, like we're trusting 766 00:38:55,000 --> 00:38:58,040 Speaker 1: these things, but we don't really know what's going on inside. 767 00:38:58,280 --> 00:39:02,640 Speaker 1: Like it's so complex that we can't predict what 768 00:39:02,640 --> 00:39:06,239 Speaker 1: it's gonna do. We can't maybe even deconstruct how it 769 00:39:06,280 --> 00:39:09,560 Speaker 1: makes decisions. That's right, and, uh, you know, you train 770 00:39:09,640 --> 00:39:11,759 Speaker 1: these systems, they're very complicated, and you don't know how 771 00:39:11,800 --> 00:39:14,600 Speaker 1: they're gonna respond to new circumstances. Right. It's the same as 772 00:39:14,640 --> 00:39:16,920 Speaker 1: when, like, training your dog. Like, do you know how 773 00:39:16,960 --> 00:39:18,880 Speaker 1: your dog makes a decision about who to bark at and 774 00:39:18,920 --> 00:39:21,120 Speaker 1: who not to? You try to train it, You try 775 00:39:21,160 --> 00:39:22,880 Speaker 1: to give it instructions to try to make sure it 776 00:39:22,920 --> 00:39:24,520 Speaker 1: knows how to handle stuff in 777 00:39:24,560 --> 00:39:26,920 Speaker 1: new circumstances, but you can't honestly know what it's going 778 00:39:26,960 --> 00:39:29,759 Speaker 1: to do at any given moment. Yeah, I'm definitely not 779 00:39:29,880 --> 00:39:34,400 Speaker 1: visiting your house if you have dogs. I think about 780 00:39:34,920 --> 00:39:38,319 Speaker 1: AI, again, not an expert, so maybe 781 00:39:38,360 --> 00:39:40,839 Speaker 1: these are uninformed speculations, but I think about AI sort 782 00:39:40,880 --> 00:39:44,520 Speaker 1: of like digital children. You know, like you raise your children, 783 00:39:44,719 --> 00:39:46,840 Speaker 1: you know they're gonna take over one day because, you know, 784 00:39:46,920 --> 00:39:48,080 Speaker 1: you and I are going to get old and our 785 00:39:48,160 --> 00:39:50,399 Speaker 1: kids are younger than we are, so eventually they will 786 00:39:50,400 --> 00:39:53,120 Speaker 1: take over and you don't know what they're gonna do, 787 00:39:53,360 --> 00:39:55,200 Speaker 1: and you raise them.
You try to raise them in 788 00:39:55,200 --> 00:39:57,640 Speaker 1: a way that they have values and make reasonable decisions, 789 00:39:58,160 --> 00:40:00,000 Speaker 1: and you can sort of think about AI the same 790 00:40:00,000 --> 00:40:03,400 Speaker 1: way, like you try to create this new generation 791 00:40:03,440 --> 00:40:05,480 Speaker 1: of technology that's going to make its own decisions, but 792 00:40:05,480 --> 00:40:07,400 Speaker 1: you try to teach it to make good decisions so 793 00:40:07,400 --> 00:40:10,000 Speaker 1: that when it's in your home, right, it's making good 794 00:40:10,080 --> 00:40:12,359 Speaker 1: choices for you. And I know that some folks out 795 00:40:12,400 --> 00:40:14,400 Speaker 1: there think, well, you know, AI is never really going 796 00:40:14,440 --> 00:40:17,799 Speaker 1: to be separate from humanity. There's not this like cognitive separation, 797 00:40:17,880 --> 00:40:19,759 Speaker 1: Like it can just be part of who you are, 798 00:40:19,880 --> 00:40:22,920 Speaker 1: the way your iPhone feels like part of who you are. Um, 799 00:40:22,960 --> 00:40:26,120 Speaker 1: But we don't know necessarily if that separation is 800 00:40:26,160 --> 00:40:28,239 Speaker 1: going to be serious, you know, if these things really 801 00:40:28,239 --> 00:40:29,800 Speaker 1: would be separate from us, or if they always just 802 00:40:29,840 --> 00:40:33,480 Speaker 1: feel like an extension of ourselves. Well, until then, I 803 00:40:33,520 --> 00:40:39,960 Speaker 1: think we should stick to regular dogs. Dogs. Yeah. But 804 00:40:40,000 --> 00:40:41,960 Speaker 1: you know, I think about it sometimes the way I 805 00:40:42,000 --> 00:40:44,560 Speaker 1: think about children, right, In the same way that you 806 00:40:44,640 --> 00:40:47,239 Speaker 1: raise your children and they're gonna take over, right, There's 807 00:40:47,239 --> 00:40:49,160 Speaker 1: gonna be some point when your children are in charge. 808 00:40:49,520 --> 00:40:52,200 Speaker 1: You raise them to have values and to make good decisions, 809 00:40:52,200 --> 00:40:54,480 Speaker 1: and you hope that when they take over, they're, you know, 810 00:40:54,600 --> 00:40:56,960 Speaker 1: looking after you. In the same way, we got to 811 00:40:56,960 --> 00:40:59,600 Speaker 1: create these digital tools, and we've got to teach them 812 00:40:59,640 --> 00:41:01,440 Speaker 1: to be good. Hey, we got to teach them what's important, 813 00:41:01,440 --> 00:41:03,719 Speaker 1: and we got to teach them how to be responsible 814 00:41:03,760 --> 00:41:06,160 Speaker 1: so that if they take over, you know, we 815 00:41:06,200 --> 00:41:11,160 Speaker 1: hope they treat us well. Yeah, daddy, good daddy. Your 816 00:41:11,200 --> 00:41:22,480 Speaker 1: parents don't put creator good Please don't bury me underground. Well, 817 00:41:22,520 --> 00:41:24,680 Speaker 1: I personally am looking forward to a time when, 818 00:41:24,920 --> 00:41:27,720 Speaker 1: like, I don't have to think as much, where life 819 00:41:27,760 --> 00:41:30,440 Speaker 1: is a little bit easier because we have these things 820 00:41:30,520 --> 00:41:33,719 Speaker 1: making things easier for us. They could handle a lot 821 00:41:33,719 --> 00:41:35,960 Speaker 1: of the drudgery and a lot of the logistics. You know, 822 00:41:36,200 --> 00:41:38,839 Speaker 1: eventually you could have a car that drives itself and 823 00:41:38,960 --> 00:41:40,960 Speaker 1: obeys your instructions.
You can say, like, hey, go pick 824 00:41:41,000 --> 00:41:43,040 Speaker 1: up my kids from school, and it would know how 825 00:41:43,040 --> 00:41:45,560 Speaker 1: to navigate and how to drive and recognize your children 826 00:41:45,960 --> 00:41:48,640 Speaker 1: and how to get back home. And that's totally within 827 00:41:48,680 --> 00:41:51,040 Speaker 1: the realm of possibility in a few years, right, And 828 00:41:51,080 --> 00:41:53,080 Speaker 1: that's pretty awesome. It will offload a lot of work 829 00:41:53,280 --> 00:41:55,920 Speaker 1: and logistics from beleaguered parents. I think you and I 830 00:41:55,920 --> 00:41:58,480 Speaker 1: are in a pretty good position career wise, you know, 831 00:41:58,640 --> 00:42:01,239 Speaker 1: Like, I'm a cartoonist and you're a physicist. These are not, 832 00:42:01,680 --> 00:42:04,279 Speaker 1: um, jobs that are going to be taken away by 833 00:42:04,280 --> 00:42:08,120 Speaker 1: AI anytime soon. Hopefully. Have you not seen AI cartoons? 834 00:42:08,200 --> 00:42:11,279 Speaker 1: They're pretty good, man. All right, then you should, like, 835 00:42:11,320 --> 00:42:15,120 Speaker 1: start a podcast instead of relying on your cartooning. Well, 836 00:42:15,160 --> 00:42:17,960 Speaker 1: there is definitely that as a genre of humor. Like, hey, 837 00:42:18,000 --> 00:42:20,800 Speaker 1: I put, um, so and so through an AI machine, 838 00:42:20,840 --> 00:42:22,759 Speaker 1: and look at the crazy thing it came out with. 839 00:42:23,400 --> 00:42:25,560 Speaker 1: Except those are all manufactured. None of those are... no, 840 00:42:25,920 --> 00:42:29,400 Speaker 1: those are real. None of those are real. Those are 841 00:42:29,440 --> 00:42:33,720 Speaker 1: all made up. Well, that's good for humorists. So artificial 842 00:42:33,719 --> 00:42:37,040 Speaker 1: intelligence is certainly a revolution in thinking and in computing, 843 00:42:37,120 --> 00:42:39,960 Speaker 1: and it will definitely change the world. And so check 844 00:42:40,040 --> 00:42:41,960 Speaker 1: back in in ten years to see if we've been 845 00:42:42,000 --> 00:42:45,880 Speaker 1: replaced by robot Daniel and robot Jorge. Maybe we already 846 00:42:45,880 --> 00:42:49,680 Speaker 1: are, bump bump ball. So thanks everyone for listening to 847 00:42:49,719 --> 00:42:53,040 Speaker 1: this episode of Daniel and Jorge Explain the Universe, and 848 00:42:53,080 --> 00:42:56,800 Speaker 1: to listen tomorrow, just say, Alexa, what's the best science 849 00:42:56,800 --> 00:43:00,600 Speaker 1: podcast in the world. What's the third best? It's not 850 00:43:00,680 --> 00:43:11,640 Speaker 1: a Catherine the world. If you still have a question 851 00:43:11,680 --> 00:43:14,840 Speaker 1: after listening to all these explanations, please drop us a 852 00:43:14,880 --> 00:43:17,120 Speaker 1: line; we'd love to hear from you. You can find 853 00:43:17,200 --> 00:43:20,920 Speaker 1: us on Facebook, Twitter, and Instagram at Daniel and Jorge, 854 00:43:20,960 --> 00:43:24,400 Speaker 1: that's one word, or email us at feedback at Daniel 855 00:43:24,440 --> 00:43:34,600 Speaker 1: and Jorge dot com.