Speaker 1: Welcome to TechStuff, a production from iHeartRadio. Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeartRadio, and how the tech are you? So over this past weekend, I was listening to the podcast The Skeptics' Guide to the Universe, which I have no connection to, I just listen to it, and it included a section on AI that referenced something I don't think I had heard of before, which really says more about my oversight than anything else. Maybe I did hear about it, but then I forgot about it, you know, catastrophically. So the thing they talked about was catastrophic forgetting in artificial intelligence, specifically in machine learning systems built on artificial neural networks.

Now, before we talk about catastrophic forgetting, which, as I mentioned, is related to neural networks and machine learning, we really need to do a quick reminder. Not a quick reminder, we need to do a full reminder on how all this works. And that's going to require us to do a whole lot of remembering. Not a catastrophic amount, but a lot. So the history of artificial intelligence as a discipline is one of intense and important debates in fields like computer science. Now, I have often talked about how AI can be seen as the convergence of several other disciplines into its own field, and there's more than one way to approach the challenge of artificial intelligence. And in the history of AI, we actually saw that play out, and some would argue the way it played out means that we're actually just now playing catch-up. So different schools of thought pushed these different approaches forward as "this should be the prevailing methodology we use to develop artificial intelligence." This is important because the development of AI does not exist in a vacuum, right? It exists in our real world.
Research requires funding, and when you've got different sides arguing that their approach to artificial intelligence is superior and that the alternatives are not just inferior but potentially limited to the point of being useless, well, you've got a metaphorical wrestling match going on. The winner takes home the big prize of getting funding for their research, and the loser has to scrabble for whatever they can find, and often they will see their work languish as a result.

By the way, this is why I often bring stuff up in this podcast that is outside the realm of tech. I've received a lot of messages over the years from folks saying that I should leave out stuff like money or politics. Politics is the big one. But to me, that doesn't make sense, because tech exists within our world, a world that is largely shaped by money and politics. I don't think we can separate the tech from all of that, because I believe that if you were to somehow magically remove those influences, if somehow money and politics never played a part in the development of technology, our tech would look very different from what it does today. Not necessarily better or worse, but different. I mean, think about Thomas Edison. He was very much driven by financial success. Like, his work in tech was really mostly about making lots of money. And without the making-lots-of-money part, you don't really have his drive to bring together the brightest minds of his generation and set them to work on creating incredible technology. So I think we have to take all these things into consideration.

Anyway, that's a total rabbit trail, and I apologize. Let's get back to our story. It really begins around nineteen forty-three, when a pair of researchers at the University of Chicago first proposed the concept of the basic unit of a neural network.
Those researchers were Warren McCulloch and Walter Pitts, and in fact, they demonstrated their idea by showing a simple electrical circuit, the very basis for what would become a neural network. So their proposal was a system that would use those simple circuits to mimic the neurons that we have in our noggins.

So our brain consists of a bunch of these neurons, and you might wonder how much is a bunch. Well, we're talking about, on average, around one hundred billion neurons in the human brain. These neurons interconnect with each other. It's not just one-to-one, right? You've got these interconnections between all these different neurons, not with every neuron connected to every other neuron, but lots of interconnections. And if we're looking at just the connections, you would count more than one hundred trillion of them in the typical human brain. And these connections in our brains make up neural circuits. Those circuits light up, and that represents us doing lots of different stuff, from experiencing the world around us (so, perception) to thinking about a past memory. You know, that typically is like recreating the same pathway over and over, and sometimes we don't recreate it exactly correctly, and our memory ends up not being a perfect representation of the thing that we actually experienced. This is why things like eyewitness testimony are not always very reliable, because our memories aren't infallible. They can trick us. And we can have all those pathways light up when we learn a new skill and we start forming new pathways, and then as we practice the skill, we start to reinforce those pathways.

So McCulloch and Pitts proposed that we create machines capable of doing essentially a similar thing to what our brains do. So kind of a neuromimicry, not exactly one-to-one with the way our brains work, but inspired by the way our brains work.
Now, we would be limited by what the technology of the day would be able to do, because there's no feasible way we could create a massive electrical system with one hundred billion individual simple circuits and more than one hundred trillion connections between them. That would be beyond our capability; it would be beyond our resources. We could, however, create systems that used interconnected circuits to process information, and teach such a system to do specific tasks.

Now, in nineteen forty-nine, Donald Hebb wrote a book about biological neurons. He titled this book The Organization of Behavior, and in it he suggested that neural pathways get stronger with additional use, kind of like, you know, if you exercise your muscles, you build strength over time. Well, so it is with neural pathways. And if you don't use those muscles, well, then your muscles get weaker. Same with neural pathways. If you end up learning a skill, but then over a great amount of time you no longer practice that skill, you're going to lose some of your ability. Maybe not all of it, but at least some of it. And, you know, I think about wrestlers who come back from retirement. Professional wrestlers call it ring rust. You've got to knock off the ring rust and get back into step, kind of get back into your groove. And it takes a little time, typically. Sometimes, you know, you can get back into the game faster than others, but you get the idea.

Hebb also ended up proposing the concept of "cells that fire together wire together," meaning that neurons that fire at the same time end up strengthening their connections faster than other neurons do. So when you get into that system, you can actually reinforce those pathways. And for AI, this would be really important. And it wasn't very long after Donald Hebb had published this work that researchers in the field of AI tried to apply that concept, that philosophy, to computer science.
By the mid nineteen fifties, the burgeoning computer science and AI lab at MIT was building out neural networks based on Hebb's ideas. Meanwhile, another computer scientist named Frank Rosenblatt was looking at primitive neural systems, and he started with flies, like house flies. He wanted to explore the systems that were involved when a fly would quickly move away after detecting a possible threat, like instantly, or at least appearing to us to instantly react to something. So, for example, a flyswatter coming at it. You might be moving the flyswatter very quickly, and yet the fly is able to move super fast, with no perceivable delay. Right? We know that we have a delay from when we perceive something to when we can act on it. Like, if you've ever been in a fender bender in a car accident, you know that there's a delay between when you see the issue and when you can hit the brake, and that can lead to accidents. Well, with flies, that delay seems to be super, super small.

So Rosenblatt was really interested in exploring the neurological reasons for that. How can that happen? It has to be really simple, right? There has to be a simple and more or less direct pathway that exists to allow a fly to react to detecting a potential threat like that. And if you could replicate that with electronics, you could have a very simple but potentially powerful artificial intelligence system. So he came up with a system that would be based off that very simple, direct pathway that you would see in something like a fly, and he called it the Perceptron. So he went back to the simple circuit design that was proposed by Pitts and McCulloch, and he built out the Mark One Perceptron, or Mark I Perceptron, I guess I should say. So let's talk about a perceptron, like not big-P but little-p perceptron. This is probably what we would call a neural node in a modern neural network.
So the purpose of the perceptron was to accept inputs and produce an output based on some threshold. Like, if the inputs meet a certain threshold, one output would be produced; if they fail to do so, a different output would be produced. The inputs, in turn, would be assigned weights, which would factor into the output the perceptron would generate. So when we're talking weights, I mean weights as in, like, how heavy something is, or in this case, how much impact that thing has. So we're talking about how much impact one input has relative to other inputs.

Let me use a really mundane human example to kind of explain what this means. Let's say that your friend asks you to go see a movie with them, and it's going to be playing tonight at nine p.m. But you've had a really busy day, and you might not be able to even eat dinner until around nine p.m. And if you go see this movie, it might mean having to skip dinner, or to try and eat something really fast and unhealthy before you go to the movie. What's more, you've got a really big day tomorrow, and you feel like you really need to be well rested for it. However, at the same time, you haven't seen this friend in ages, and you really like this person, and you've wanted to hang with them for a really long time. Plus, the movie they're suggesting is one you've really wanted to see, and you haven't gone yet. Well, you would likely assign, at least unconsciously, weights to each of these factors before you make your decision. You know, if getting some dinner without having to rush and being really well rested for tomorrow are really important to you, you'll probably reluctantly decline the offer. But if you really crave some time with your friend, and you really want to see that movie before all the spoilers come out on Facebook or whatever, maybe you'll say yes.
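To make that concrete, here's a minimal sketch in Python of a little-p perceptron chewing on that movie decision. The factors, weights, and threshold are all numbers I'm making up purely for illustration.

```python
# A minimal sketch of a single (little-p) perceptron: sum each input times
# its weight, then produce one output or the other based on a threshold.

def perceptron(inputs, weights, threshold):
    """Return 1 ("go to the movie") if the weighted sum clears the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Factors: [skipping dinner, big day tomorrow, missing your friend, wanting the film].
# Negative weights push toward "stay home"; positive weights push toward "go".
factors = [1.0, 1.0, 1.0, 1.0]        # all four factors apply tonight
weights = [-0.4, -0.6, 0.5, 0.3]      # hypothetical importances
print(perceptron(factors, weights, threshold=0.0))  # prints 0: rest wins this time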
Your decision depends upon the weights you assign those factors, those inputs, even if you don't consciously think about it that way. Well, the Perceptron system worked in a similar way. It produced outputs by taking the inputs into consideration, including each input's weight. Moreover, the more inputs you submitted, the more the system would quote-unquote learn how to weight each of those inputs, all with the goal of bringing the actual output the system generates closer to the one you want it to generate. Okay, I just said a lot there. We've got some more to get through, but before we get to that, let's take a quick break.

All right. Before the break, we were talking about inputs and weights and the idea of getting an output that is close to what you want the system to do. That's not a guarantee, right? The system could generate an output that's quote-unquote wrong, you know, depending on whatever task you've set this machine learning system to learn. And that gets a bit conceptual, so let's talk about a simple example that I love to use. If you've been listening to TechStuff for a while, you've heard this before, and that's talking about pictures of cats. Because cats ruled the internet. I don't know if they still do. They won't talk to me; they just knock things off shelves. Anyway, if your goal is to teach a computer system to differentiate photos that include a cat from photos that do not include a cat, well, you would need to train the system, and part of that includes feeding the system a whole bunch of photographs. Some of those would have cats in them, some would not, and chances are the system would misidentify photos, maybe a significant number of those photos. You would probably have false positives, where the system thinks there's a cat there and there's not, and false negatives, where it doesn't think there's a cat there but there is.
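One classic way to whittle down those false positives and false negatives on a single-layer system is the perceptron learning rule: after every wrong answer, nudge each weight a little in the direction that would have fixed the mistake. Here's a toy sketch; the two-number "features" standing in for actual photos (say, a whiskers score and a pointy-ears score) are invented for illustration.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Nudge weights toward fixing each mistake (the perceptron learning rule)."""
    w = [0.0] * len(samples[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, label in samples:  # label: 1 = cat, 0 = no cat
            pred = 1 if sum(xi * wi for xi, wi in zip(x, w)) + b >= 0 else 0
            err = label - pred    # +1 on a false negative, -1 on a false positive
            w = [wi + lr * err * xi for xi, wi in zip(x, w)]
            b += lr * err
    return w, b

# Made-up feature pairs: (whiskers score, pointy-ears score) -> cat or not.
photos = [([0.9, 0.8], 1), ([0.7, 0.9], 1), ([0.1, 0.3], 0), ([0.2, 0.1], 0)]
w, b = train_perceptron(photos)
print(w, b)  # the weights the system "learned" from the labeled photos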
At that point, your goal is to try and teach the system to close the gap between the actual results it produces and what you want it to produce. In some systems, that means you might have to go in manually to adjust the input weights, to increase the weight of one input versus another, in an effort to cut down on mistakes.

So the Perceptron was interesting, but it was very limited in complexity. It was essentially a single layer, where you'd feed a bunch of inputs in and you would get an output. So it was suitable for a subset of computational challenges, but anything beyond that was well beyond its reach as a single-layer network. By the late nineteen fifties, other researchers had created new neural networks that were multi-layered, so a node, or neuron, didn't just accept inputs; it would generate outputs that then would become inputs for another layer down. So instead of just having one layer of nodes, you would have multiple layers of nodes. Typically you would have inputs at the quote-unquote top of the network and outputs at the bottom, and the layers in between would often be referred to as hidden layers, and who knows how many there would be. So anyway, you would feed data to the system. The initial nodes would generate information as outputs, which would become inputs for the next layer down, which would then continue the process, and so on and so forth, until you get to the output. So now you had artificial neural networks that could tackle more complex challenges, and you would have multiple steps in the process. It didn't necessarily mean they were automatically better than the Perceptron; it was just that they were able to tackle more complicated tasks.

What followed is something that will probably sound really familiar to you if you ever follow technology or fads: the hype around machine learning and artificial intelligence, and keep in mind this is like the nineteen sixties, grew beyond the technology's actual capabilities.
At that time, people started to project what this technology would be able to do, and they did so thinking it was going to be a very short turnaround, like we're right on the very precipice of a monstrous breakthrough that will bring the science fiction future into the present. So when it was realized that we weren't at that point, because that's not how progress typically works, it's usually much more gradual and humble than that, well, then enthusiasm around AI began to take a hit. And as I mentioned already, a big part of AI research really comes down to funding, and it gets really challenging to secure funding when public opinion dims on a technology.

We've seen this happen lots of times, right? Like, 3D television was a fad that was pushed. Now, granted, that one, you could argue, was more of an example of the companies that manufacture televisions trying to push a technology on consumers, and the consumers just weren't interested. You could argue that was the case there. But virtual reality in the nineteen nineties definitely followed this pathway. There was this excitement around virtual reality. Then that excitement faded to almost nothing when people realized that the actual state of the art of the technology was far below where they expected it to be. And suddenly people who were working in VR couldn't get funding for their work, and they kind of had to scrounge around in order to keep the development going at all, and then eventually we would see that come back around again. You could argue that NFTs recently went through this too, where the hype went well beyond what NFTs could actually do. I've been really down on NFTs in general.
I do think that there are potential legitimate uses for NFTs, but I think the early examples were frivolous and almost solely centered around speculation, as in, like, financial speculation. And as a result, there was nothing for it to do other than to create a bubble that would ultimately burst, which is what happened. And maybe NFTs will recover from that and become something that's more fundamentally useful in the internet in the future, or in digital commerce in the future, but it's going to have to get over the catastrophe that happened when the rug was pulled out from underneath NFTs. And that was all, you know, predictable and preventable. But like I've said before, like I've lifted the joke from Peter Cook: we've learned from our mistakes; we can repeat them almost exactly.

Anyway, this same sort of hype cycle activity happened with neural networks and machine learning in the nineteen sixties. Then enter Marvin Minsky and Seymour Papert of MIT's AI lab. They were leading that lab at the time. In nineteen sixty-nine, they co-authored a book titled Perceptrons. They were actually critical of that artificial neural network approach to AI and machine learning. They were concerned that the limitations of the technology meant that you would need an unrealistically huge system of artificial neurons, and perhaps then to use that system to compute a near-infinite number of variations of the same process or task, if you wanted to train the weights so that they were of the optimal value. So, in other words, they thought it's too impractical, it's going to take too much compute time, and you're never going to achieve the result you want. You're never going to get to that most perfect system. And they believed it just had fundamental, inescapable flaws. They had different systems in mind. Now, Minsky and Papert tried to push their systems forward, and I could do a full episode about them too, and their ideas were not bad. They were different. It was a different approach.
But this also meant that researchers who had been pushing the development of artificial neural networks felt forced to move on to different projects, because financial support for anything connected to the concept of neural networks effectively disappeared, right? Like, funding just dropped for that, because here you had these experts in computer science saying, yeah, this approach, while interesting, has already hit an insurmountable obstacle, and it's not going to go any further. It's gone as far as it can go. And so a lot of computer scientists blamed Minsky and Papert for essentially demolishing funding for neural networks for more than a decade. And in fact, this would become an era that, retrospectively, computer scientists would reference as the AI Winter. Got all Game of Thrones up in here.

Now, in nineteen eighty-two, there was a hint of spring thawing out that AI Winter. Researchers in Japan were starting to resurrect work on neural network projects. And meanwhile, a scientist named John Hopfield submitted a research paper to the National Academy of Sciences that brought neural networks back into discussion here in the United States. And because Japan was actively investing in developing that technology, institutions in the United States began to open up the purse strings a bit, because there was a concern that if there were something to this artificial neural network concept, if in fact those obstacles weren't insurmountable as Minsky and Papert had suggested, the US could potentially fall behind another country because it had failed to fund its development. So, in a desire not to have Japan take the ball and run with it, the United States began to invest again in artificial neural network research and development.

In the mid nineteen eighties, computer scientists essentially rediscovered the usefulness of a process called backpropagation. And I've already talked about nodes and weights and stuff, but this is going to require a little bit more explanation to understand what backpropagation is all about.
So let's kind of try to visualize a neural network. So you've got your input nodes. Just think of a bunch of circles. If you were drawing it from top to bottom, this would be your top layer. These are like the funnels where you're going to feed data into the system. Now, you've got a whole bunch of these at the top, and they can accept the data that you're feeding in. They process that data, and then, based upon some operation, they will send an output to a node one layer down. So there are lots of other nodes in the layers below, or maybe not as many as you have in the initial layer; you might actually have fewer. And the layers above will send, you know, data to a specific node depending upon what the outcome is, whatever the output is. So these nodes accept the input. These inputs have a bias and a weight to them, and this is in the hidden layers. They will then create an output and send that on to nodes another layer down. So this goes on until you get to your output layer, where you get your final result, and then you can determine whether or not the final result matches what you were hoping for. So, did your system properly identify which photos do and don't have cats in them?

Now, as I mentioned earlier, you typically get results that aren't perfect, but we want to train the system to improve with every test. Backpropagation is one way to do this. So with backpropagation, you actually start with the final output. You've already done a test run, right, and you've got your output. And maybe your test has five possible final outcomes, but only one of those is the outcome you actually want. Okay, we'll say it's outcome number one. We're saying, I want this system to, more often than not, come to the conclusion that it's outcome number one.
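Before we get to the fixing part, here's roughly what that top-to-bottom flow looks like in code: a hypothetical little network with one hidden layer and five output nodes, one per outcome. The layer sizes and random weights are stand-ins I've invented, and the network is untrained, which is exactly the situation we're about to diagnose.

```python
import numpy as np

rng = np.random.default_rng(0)

# Input layer (8 funnels) -> hidden layer (6 nodes) -> output layer (5 outcomes).
W1, b1 = rng.normal(size=(8, 6)), np.zeros(6)   # weights and biases, top layer down
W2, b2 = rng.normal(size=(6, 5)), np.zeros(5)   # hidden layer down to the outputs

def forward(x):
    hidden = np.tanh(x @ W1 + b1)   # each hidden node weighs, biases, and squashes
    return hidden @ W2 + b2         # five final scores, one per possible outcome

# Run a "test" of 1,000 inputs; the outcome with the top score wins each time.
tests = rng.normal(size=(1000, 8))
picks = np.argmax([forward(x) for x in tests], axis=1)
print(f"outcome number one chosen {np.mean(picks == 0):.0%} of the time")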
So you run your test. It's got one thousand little tasks in it, and when you run it, you find out that it only arrives at outcome number one five percent of the time, which is actually worse than random chance, right? It should be twenty percent for random chance, but it's only getting there five percent of the time. Something is going really wrong with your system for it to mistakenly go to one of the other options and very rarely go to the correct one. So let's say you also notice outcome number three: it goes to that one forty percent of the time. So it's making this mistake forty percent of the time and only getting it right five percent of the time. So things are seriously out of whack. You need to find which connections, which would involve the biases and the weights that are within your system, are leading it to mistakenly arrive at the wrong outcome so frequently. You want to reduce those factors, and simultaneously you need to boost the ones that lead the system to arrive at outcome number one, because that's the answer you actually want the system to get to.

All right, I've been droning on for a bit. Let's take another quick break. When we come back, I'll finish up explaining this, and then we'll move on to catastrophic forgetting.

Okay, so we were talking about how you are looking at a system that is coming to the wrong conclusion ninety-five percent of the time. It is a broken system. You have to then figure out what factors are causing this to happen, and they are numerous, right? They extend all the way up to the very top of your neural network, the other end, where the input comes in. But you can't just change everything all at once. You've got to figure this out systematically, and that's what backpropagation is really all about. It detects which links one layer up from the output have the greatest impact on the outcome. Right? Changing everything would be tedious, it would be impractical, and you might even make things worse.
Some of these neural networks are confoundingly complicated, so it's not really a feasible solution. So instead, you look at the connections that are having the biggest impact on your outcome. So you want things where, if you make a small change in either the bias or the weight, or maybe both, you'll see a larger effect on the final outcome. All the connections are arguably important, but some are more important than others. Backpropagation works backwards from the result toward the other end of the network to tweak those connections. It boosts the ones that lead to the correct or desired response, and it reduces the values of those that lead to incorrect or undesired responses.

If we were to think of this like the classic example in chaos theory, this could potentially involve us studying a hurricane as it hits land and tracing its history back as it moved through the ocean. We would eventually arrive at the point where it was a tropical storm, and then we would go further back and see the factors that led to the creation of that storm, and maybe, if we tracked it all the way back, we would even find that one of the billion factors that made the storm was, in fact, a butterfly flapping its wings on the other side of the world, and that contributed to it. Maybe we find out that the butterfly's flap of its wings had an impact, but it was negligible, and that if the butterfly hadn't flapped its wings, the hurricane still would have happened. That would be an example of, well, we don't bother adjusting the weight of the impact of that butterfly flapping its wings, because it doesn't matter for the end result. But what if we were to discover that that butterfly's flap of its wings is the only reason the hurricane happened, or at least was the primary reason, and all the other factors pale in comparison? Well, then we'd want to make sure we boost the weight of that input, because clearly that butterfly is fundamental for hurricanes.
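Butterflies aside for a second, here's a bare-bones sketch of that backward pass on a deliberately tiny problem: a two-layer network learning to output 1 when its two inputs match and 0 when they don't. The architecture, learning rate, and squared-error-style updates are simplifications I've chosen for illustration; they stand in for the real calculus.

```python
import numpy as np

rng = np.random.default_rng(2)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[1], [0], [0], [1]], dtype=float)  # 1 when the inputs match

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)    # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)    # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    hidden = sigmoid(X @ W1 + b1)                # forward pass, layer by layer
    out = sigmoid(hidden @ W2 + b2)
    # Backward pass: start from the output error and push it back one layer
    # at a time, crediting each connection for its share of the mistake.
    d_out = (out - y) * out * (1 - out)
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ d_out                 # boost or reduce each connection...
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_hidden                   # ...all the way up the network
    b1 -= 0.5 * d_hidden.sum(axis=0)

# Should land close to [1, 0, 0, 1]; toy networks like this can occasionally
# get stuck, so results vary with the random seed.
print(out.round(2).ravel())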
I think hurricanes are really dangerous, and I would ask butterflies to kind of chill. All right? I mean, I don't want butterflies to go away, just, you know, maybe stop flapping so much.

Anyway, the formula for backpropagation gets into some calculus that is well beyond my knowledge and skill. So rather than attempt to stumble my way through an explanation that I don't actually understand, I think it's best to leave the concept at the high level that I have described right now. So just know that it gets way more granular than what I've talked about. But essentially, you're looking at those factors that led to the ultimate decision and asking which of these had the greatest impact, and how can I tweak them so that I can shape the outcome into the one I wanted?

If we were thinking about that example I gave about whether or not you go to the movies, maybe in the present day you start thinking about past experiences where you made the decision to go out when you had a big day the following day, and how that impacted you, perhaps negatively. Maybe you're like, man, I should have gotten a promotion by now, and then you think, well, I do go to the movies an awful lot. You might say, I need to adjust some of the factors that affect my decision-making process and perhaps prioritize my career. Or, if you've decided that late-stage capitalism is terrible and evil and that you're going to try and live the hedonistic lifestyle of a wandering soul, maybe you say, I'm going to go and see my movie with my friend, and yeah, that's just how it is, because that's the most important thing to me. You only go around this crazy world once, after all. I'm not telling you which way to go. I'm still finding my own way. But yeah, backpropagation would be how you would go back and say, all right, well, because I don't like the outcome that happened, I need to change the way
these factors weigh in on the decision-making process that goes through the whole system.

Now, the advancements in the science of neural networks proved that the technology no longer operated under the constraints that concerned Minsky and Papert in the late sixties. So once again, funding found its way to neural network research and development projects. Now let's finally talk about forgetting and what makes it catastrophic.

So you could, in theory, develop an artificial neural network and have a library of training data, and the only thing you ever do with this network is feed that same set of training data to that same neural network over and over, in an effort to get performance as close to perfect as you possibly can. You know, it's kind of like if you have a car and you're constantly tweaking it so it will perform better, and maybe you change one thing and it boosts performance in one area, but it kind of negatively impacts performance in another area, so then you've got to tweak something else. You could be doing that with an artificial neural network forever, just using the same set of training data, and all you're trying to do is make a system that can handle that training data better than any other system in the world. And that would be interesting, but it would be useless from a practical standpoint. You could say, like, hey, you want to see my machine that can sort through only this collection of photographs and pick out the ones that have cats in them and the ones that don't, pretty darn effectively, but not perfectly? It's not really an interesting value proposition, right? So more likely, you are eventually going to start feeding lots of different kinds of data to this neural network. And yeah, you train the network on certain data sets, but your goal is to feed it new sets of data, data the system has never encountered before, and rely on the system's ability to process this information correctly to get the result you want.
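Here's a tiny sketch of that "new data" point, with entirely synthetic numbers: train some simple weights on one batch of examples, then score the system on examples it has never seen. The hidden rule and the feature values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # an invented rule the system must find
X_train, y_train = X[:150], y[:150]        # the training library
X_new, y_new = X[150:], y[150:]            # data the system has never encountered

w = np.zeros(2)
for _ in range(50):                        # crude perceptron-style training
    for xi, yi in zip(X_train, y_train):
        w += 0.1 * (yi - float(xi @ w >= 0)) * xi

accuracy = np.mean((X_new @ w >= 0) == y_new)
print(f"accuracy on never-before-seen data: {accuracy:.0%}")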
And we might even be talking about stuff that human beings can't easily do, right? But see, the training data is going to mean that the network will start to create and reinforce certain pathways, and those pathways will, over time, get stronger and stronger, just as we said at the beginning of this episode. But new data is going to necessitate new pathways. Sometimes, when the system begins to form these new pathways, it forgets the old pathways. So it's possible for a neural network to actually get worse at the task it had previously been trained to do with the actual training material. In fact, in a true catastrophe, the system might forget the objective and not recognize what the desired outcome is meant to be, so the results can appear random and meaningless. It's as if the system has developed some form of amnesia.

So this is prevalent, most prevalent anyway, in systems that rely on unguided learning. With guided learning, you have engineers who are carefully selecting the data that gets fed into a system. An unguided system would collect raw data from wherever and attempt to deliver desired results, and those are the kinds of neural networks that are more prone to catastrophic forgetting. But as I said, machine learning systems tackle new data, maybe even new tasks, and then you get the risk of the system forgetting stuff. So I jokingly say it's kind of like when I learn something new, it has to push out something old, like, you know, my friend's phone number or something. Suddenly I can no longer remember it because I learned some new interesting fact, as if I have met my capacity for being able to know things. So learning anything new necessitates having to forget something I used to know. Like Gotye, because now Gotye is just somebody that I used to know.
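To see the forgetting in miniature, here's a toy sketch: one set of weights trained on task A, then retrained on a conflicting task B, with nothing protecting the old pathways. Both tasks and all the data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

X = rng.normal(size=(400, 2))
y_a = (X[:, 0] > 0).astype(float)  # task A: is the first feature positive?
y_b = (X[:, 1] > 0).astype(float)  # task B: is the second feature positive?

def fit(w, labels, epochs=30, lr=0.1):
    """Plain perceptron-style training, with no protection for old weights."""
    for _ in range(epochs):
        for xi, yi in zip(X, labels):
            w = w + lr * (yi - float(xi @ w >= 0)) * xi
    return w

def accuracy(w, labels):
    return np.mean((X @ w >= 0) == labels)

w = fit(np.zeros(2), y_a)
print(f"after task A, accuracy on A: {accuracy(w, y_a):.0%}")
w = fit(w, y_b)  # retrain the SAME weights on task B
print(f"after task B, accuracy on A: {accuracy(w, y_a):.0%}")  # A got overwritten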
This is when a 568 00:35:10,280 --> 00:35:14,480 Speaker 1: system mistakenly believes it is performing one process, a task 569 00:35:14,600 --> 00:35:18,480 Speaker 1: it had previously been trained to do, rather than the 570 00:35:18,480 --> 00:35:22,359 Speaker 1: one it's actually trying to do. So let's say we've 571 00:35:22,360 --> 00:35:25,680 Speaker 1: got an artificial neural network, and originally we taught it 572 00:35:25,719 --> 00:35:28,160 Speaker 1: to recognize the photos that have cats in them versus 573 00:35:28,239 --> 00:35:31,359 Speaker 1: the ones that don't. But now we have retrained the 574 00:35:31,400 --> 00:35:35,800 Speaker 1: same artificial neural network to try and recognize handwritten text, 575 00:35:36,520 --> 00:35:40,400 Speaker 1: except when we feed handwritten text to the system, suddenly 576 00:35:40,440 --> 00:35:43,879 Speaker 1: the system believes it's trying to determine where the cats are. 577 00:35:44,480 --> 00:35:47,040 Speaker 1: This is something that can happen with machine learning systems too, 578 00:35:47,040 --> 00:35:49,880 Speaker 1: and you still get bad results out of it. So 579 00:35:50,000 --> 00:35:54,440 Speaker 1: this is a real problem. Now, these are not insurmountable problems. 580 00:35:54,960 --> 00:35:58,840 Speaker 1: There are some solutions that are actually intuitive. For example, 581 00:35:59,360 --> 00:36:02,760 Speaker 1: any gamer out there knows that it's best to save 582 00:36:02,880 --> 00:36:05,720 Speaker 1: your game just before you head into a big boss battle, 583 00:36:06,000 --> 00:36:09,200 Speaker 1: just in case things don't go the way you planned. Well, 584 00:36:09,200 --> 00:36:12,640 Speaker 1: with artificial neural networks, it's maybe not a bad idea 585 00:36:12,680 --> 00:36:15,960 Speaker 1: to make a copy of a network before you retrain 586 00:36:16,040 --> 00:36:18,480 Speaker 1: it to do something new. Then you still have the 587 00:36:18,520 --> 00:36:22,399 Speaker 1: backup if things do go pear-shaped. There are other 588 00:36:22,440 --> 00:36:27,080 Speaker 1: approaches to decreasing the risk of catastrophic forgetting or catastrophic remembering. 589 00:36:27,600 --> 00:36:31,719 Speaker 1: An article in the Proceedings of the National Academy of Sciences, under applied mathematics, titled Overcoming Catastrophic Forgetting in 590 00:36:31,760 --> 00:36:35,839 Speaker 1: Neural Networks describes a system in which the researchers purposefully 591 00:36:35,920 --> 00:36:40,960 Speaker 1: slowed down the network's ability to change the weights involved 592 00:36:41,040 --> 00:36:46,680 Speaker 1: in important tasks from previous training cycles. So this makes 593 00:36:46,719 --> 00:36:49,560 Speaker 1: teaching the system to do new tasks a little more 594 00:36:49,640 --> 00:36:56,680 Speaker 1: challenging because it's protecting these weights. It's limiting the system's 595 00:36:56,680 --> 00:37:01,920 Speaker 1: ability to be completely plastic, which means the system has 596 00:37:01,960 --> 00:37:04,440 Speaker 1: to work around these constraints and still learn how to 597 00:37:04,480 --> 00:37:07,600 Speaker 1: do the new task, but in the process it means 598 00:37:07,600 --> 00:37:11,400 Speaker 1: it doesn't forget how to do the previous tasks.
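The method that paper describes is called elastic weight consolidation, and its heart is a penalty term added to the training loss. Here is a back-of-the-envelope sketch of just that penalty, with made-up weight and importance values; a real implementation estimates the importance numbers from the old task (the paper uses the diagonal of the Fisher information matrix) and folds the penalty into a full training loop.

```python
# Back-of-the-envelope sketch of the elastic weight consolidation idea.
# All numbers here are invented purely to show the arithmetic.
import numpy as np

theta_star = np.array([1.0, -2.0])      # weights learned on the OLD task
np.save("checkpoint.npy", theta_star)   # the "save before the boss battle" step

fisher = np.array([10.0, 0.1])  # importance of each weight to the old task:
                                # weight 0 mattered a lot, weight 1 barely at all

def new_task_loss(theta):
    # Stand-in for the loss on the NEW task (it wants theta near [3, 0]).
    return float(np.sum((theta - np.array([3.0, 0.0])) ** 2))

def ewc_loss(theta, lam=1.0):
    # New-task loss PLUS a penalty that anchors each weight to its old
    # value, in proportion to how important that weight was. Important
    # weights become expensive to move, so the network works around them.
    penalty = 0.5 * lam * float(np.sum(fisher * (theta - theta_star) ** 2))
    return new_task_loss(theta) + penalty

# Moving only the unimportant weight is cheap...
print(ewc_loss(np.array([1.0, 0.0])))   # 4.0 + 0.2  = 4.2
# ...while moving the important one is costly:
print(ewc_loss(np.array([3.0, -2.0])))  # 4.0 + 20.0 = 24.0
```

Notice how the arithmetic itself enforces the "protecting these weights" idea: moving the weight that mattered to the old task costs far more than moving the one that didn't.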
This 599 00:37:11,560 --> 00:37:15,240 Speaker 1: article is interesting because of the tasks the researchers actually used 600 00:37:15,360 --> 00:37:17,920 Speaker 1: for the purposes of training. Like, what were they teaching the 601 00:37:18,000 --> 00:37:20,759 Speaker 1: artificial neural network to do? Well, they were teaching it 602 00:37:20,960 --> 00:37:24,000 Speaker 1: how to play Atari twenty six hundred games. So they 603 00:37:24,000 --> 00:37:27,200 Speaker 1: would start with one game and train the system on 604 00:37:27,239 --> 00:37:30,960 Speaker 1: how to play the game. Then they would give the 605 00:37:31,000 --> 00:37:35,480 Speaker 1: system a new game with different game mechanics, and the 606 00:37:35,480 --> 00:37:38,160 Speaker 1: system would have to learn how to play this new game, 607 00:37:38,760 --> 00:37:40,800 Speaker 1: but they wanted to see if it could still remember 608 00:37:40,840 --> 00:37:43,400 Speaker 1: how to play the original game. That was kind of 609 00:37:43,200 --> 00:37:45,640 Speaker 1: the system they were working on. They were tweaking things 610 00:37:46,239 --> 00:37:50,680 Speaker 1: so that the machine learning artificial neural network as a 611 00:37:50,680 --> 00:37:53,719 Speaker 1: whole could learn how to play multiple Atari twenty six 612 00:37:53,800 --> 00:37:56,720 Speaker 1: hundred games without forgetting how to play the previous ones. 613 00:37:57,160 --> 00:37:59,319 Speaker 1: This is a non-trivial task. I mean, it takes 614 00:37:59,320 --> 00:38:02,440 Speaker 1: a lot of work to see exactly how to preserve 615 00:38:02,480 --> 00:38:05,240 Speaker 1: things so that you're not slowing down the learning process 616 00:38:05,280 --> 00:38:07,920 Speaker 1: too much, but you're also not inviting the possibility of 617 00:38:07,960 --> 00:38:12,759 Speaker 1: catastrophic forgetting. Now, that's just one example of how researchers 618 00:38:12,800 --> 00:38:16,400 Speaker 1: are looking to mitigate the problems of catastrophic forgetting and 619 00:38:16,400 --> 00:38:20,040 Speaker 1: catastrophic remembering. There are other methods as well, and maybe 620 00:38:20,080 --> 00:38:23,000 Speaker 1: I'll do another episode where I'll go into more detail 621 00:38:23,280 --> 00:38:26,480 Speaker 1: on some of those. They do get pretty complicated, and 622 00:38:26,800 --> 00:38:30,839 Speaker 1: in fact, really, even pretty early on, 623 00:38:31,400 --> 00:38:35,000 Speaker 1: I hit my limit for how far I can 624 00:38:35,120 --> 00:38:38,879 Speaker 1: understand the actual mechanics of the system. So rather than 625 00:38:40,280 --> 00:38:43,960 Speaker 1: try and punch above my weight, I think it's best 626 00:38:44,000 --> 00:38:47,000 Speaker 1: to keep things a little more general, but even just 627 00:38:47,040 --> 00:38:49,040 Speaker 1: having that understanding helps you get a better 628 00:38:49,080 --> 00:38:53,520 Speaker 1: appreciation of some of the challenges relating to artificial intelligence 629 00:38:53,560 --> 00:38:57,799 Speaker 1: in general and machine learning in particular. And again, like 630 00:38:57,840 --> 00:39:02,279 Speaker 1: this machine learning issue, it's a bigger problem with more 631 00:39:02,320 --> 00:39:08,279 Speaker 1: sophisticated systems that are meant to do unsupervised and unguided learning, right? 632 00:39:08,360 --> 00:39:09,879 Speaker 1: Those are the ones that are going to be more 633 00:39:09,960 --> 00:39:14,040 Speaker 1: prone to these issues.
If we're talking about supervised and 634 00:39:14,120 --> 00:39:18,640 Speaker 1: guided learning, where engineers are being very careful with the 635 00:39:18,719 --> 00:39:22,080 Speaker 1: data being fed to a system, it's less likely to happen. 636 00:39:22,440 --> 00:39:27,480 Speaker 1: But the whole promise, or at least, you know, 637 00:39:27,600 --> 00:39:29,640 Speaker 1: not the promise of the technology itself, but the promise 638 00:39:29,640 --> 00:39:32,880 Speaker 1: of the people who are funding it, is that this 639 00:39:32,920 --> 00:39:36,440 Speaker 1: technology is going to reach a point where it's able 640 00:39:36,440 --> 00:39:38,920 Speaker 1: to learn on its own and be able to do 641 00:39:39,000 --> 00:39:41,799 Speaker 1: things better than people can do, to free us up 642 00:39:41,840 --> 00:39:44,080 Speaker 1: to doing, you know, stuff we want to do instead 643 00:39:44,120 --> 00:39:46,800 Speaker 1: of stuff we have to do. That's like the science 644 00:39:46,800 --> 00:39:51,360 Speaker 1: fiction dream version of AI. As we all know, getting 645 00:39:51,400 --> 00:39:53,840 Speaker 1: there is much more painful. It's not like a simple 646 00:39:53,880 --> 00:39:58,320 Speaker 1: process of hey, we've made everything easy to do now, 647 00:39:58,400 --> 00:40:01,040 Speaker 1: and you don't have to work all day. You can 648 00:40:01,120 --> 00:40:04,839 Speaker 1: enjoy your life and pursue your dreams and develop your 649 00:40:04,880 --> 00:40:07,879 Speaker 1: hobbies and your interests, and you can have fulfillment and 650 00:40:07,960 --> 00:40:10,960 Speaker 1: somehow money isn't important anymore. Like that seems to be 651 00:40:11,080 --> 00:40:13,239 Speaker 1: the Star Trek version of the future that people want 652 00:40:13,280 --> 00:40:15,759 Speaker 1: things to head toward. But as we have seen, the 653 00:40:15,800 --> 00:40:18,440 Speaker 1: process of getting there is way more painful. As you know, 654 00:40:18,480 --> 00:40:22,000 Speaker 1: people face a reality of potentially being out of work 655 00:40:22,600 --> 00:40:27,640 Speaker 1: because of AI, or maybe being paid way less to 656 00:40:27,800 --> 00:40:30,560 Speaker 1: do work because the AI is doing most of it. 657 00:40:31,400 --> 00:40:34,720 Speaker 1: That's not the Star Trek future. That's getting 658 00:40:34,760 --> 00:40:38,439 Speaker 1: into Blade Runner territory, so we don't want that one. 659 00:40:38,520 --> 00:40:42,759 Speaker 1: By the way, the Tears in Rain speech is fantastic, 660 00:40:42,840 --> 00:40:44,399 Speaker 1: but you do not want to live in the Blade 661 00:40:44,440 --> 00:40:47,520 Speaker 1: Runner world. Trust me, you might not want to live 662 00:40:47,520 --> 00:40:49,520 Speaker 1: in the Star Trek world either, because those outfits don't 663 00:40:49,560 --> 00:40:55,200 Speaker 1: look that comfortable. Anyway, that's my little discussion about AI, 664 00:40:55,400 --> 00:40:59,360 Speaker 1: machine learning, catastrophic forgetting, and catastrophic remembering. Again, 665 00:40:59,440 --> 00:41:03,480 Speaker 1: this is just one of the challenges associated with AI 666 00:41:03,640 --> 00:41:05,879 Speaker 1: and machine learning.
I don't mean to suggest it's the 667 00:41:05,920 --> 00:41:09,560 Speaker 1: one and only, or even that it's the most important one, 668 00:41:09,800 --> 00:41:11,839 Speaker 1: but it is one that I had not really heard 669 00:41:11,840 --> 00:41:14,200 Speaker 1: of until I listened to that Skeptics Guide to the 670 00:41:14,320 --> 00:41:17,520 Speaker 1: Universe episode over the weekend, and it was really interesting 671 00:41:17,560 --> 00:41:21,120 Speaker 1: to dive into the material and read up about it 672 00:41:21,160 --> 00:41:23,280 Speaker 1: and to get a better understanding of what it means 673 00:41:23,280 --> 00:41:27,200 Speaker 1: and how it works. And as I said, we'll probably 674 00:41:27,239 --> 00:41:29,880 Speaker 1: revisit this topic in the future, especially since AI is 675 00:41:29,880 --> 00:41:32,719 Speaker 1: such a big deal these days. Okay, but that's it 676 00:41:32,920 --> 00:41:36,160 Speaker 1: for this episode of tech Stuff. I hope you are 677 00:41:36,280 --> 00:41:40,359 Speaker 1: all well, and I will talk to you again really soon. 678 00:41:46,680 --> 00:41:51,360 Speaker 1: Tech Stuff is an iHeartRadio production. For more podcasts from iHeartRadio, 679 00:41:51,680 --> 00:41:55,359 Speaker 1: visit the iHeartRadio app, Apple Podcasts, or wherever you listen 680 00:41:55,400 --> 00:41:56,480 Speaker 1: to your favorite shows.