Speaker 1: Welcome to Tech Stuff, a production from iHeartRadio. Hey there, and welcome to Tech Stuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeartRadio, and how the tech are you? You know, often when we talk about tech, we reference various thought experiments, hypothetical situations, philosophical problems, and game theory. And this can get a little bit confusing if you've never actually studied any of those things, and people are just kind of offhandedly spouting off terms. So today I thought I'd cover a small handful of them as a sort of foundation for future discussions. Keep in mind, there are tons of these, and I'm only covering like the teeniest, tiniest number of them. Some of these I have talked about extensively in another episode, so I'll try to go a little light with them on this episode. But some of them are brand new to me, or I had only heard the name of them but had never actually researched the actual scenario or thought experiment.

Speaker 1: Now, these thought experiments in general, not the ones specifically we're talking about today, but the practice of thought experiments, dates back quite a ways, at the very least to ancient Greece, because we have records of them. So back then they were used to conceptualize complex mathematical problems and to give people a chance to consider consequences outside of a real world situation. But first up, I thought we would talk a little bit about game theory, because we actually saw a real world version of game theory play out just a couple of weeks ago. And arguably you could say this is only tangentially related to technology, but it did have, and continues to have, a massive impact on tech. So the game theory situation that we're going to really talk about is known as the prisoner's dilemma.
Speaker 1: Folks were referencing this one in the wake of the Silicon Valley Bank collapse, which is why I say this is only tangentially related to tech, because it's really about a run on the bank, and that bank just happened to be a really important bank to the tech industry. But the dilemma has its roots in the mid twentieth century, and the basic version goes something like this. A pair of suspected criminals are caught by police. They become the prisoners, and the police plan to interrogate each of the prisoners separately. The prison sentence for these criminals if they are found guilty, which is just taken as the most likely outcome, would be ten years. So if they are convicted, they get ten years in prison. However, each of them is told separately that there are different possible outcomes depending upon their cooperation, or lack thereof, with the police. They just aren't allowed to talk to each other, so the two prisoners have no way to communicate with one another. They have to make up their minds individually.

Speaker 1: And the four possible outcomes are this. Neither suspect talks, neither prisoner confesses, and in that case they each get only two years in prison, because the cops won't have enough evidence to put them away for the more serious crime. So they'll have to go to prison, but it'll just be two years, not the full ten. However, if one of them stays silent but the other one confesses, well, then the one who confesses gets to go free and the silent prisoner has to serve the full ten years of the sentence. If both prisoners confess, then each of them will get five years. So you get your four possible outcomes. Suspects A and B keep their traps shut and each of them serves two years. Suspect A talks but B keeps quiet, which means A goes off scot-free and B goes to the pokey for ten years.
Speaker 1: Suspect A holds their tongue, but Suspect B sings like a canary, and this time Suspect B strolls out to freedom and Suspect A rots away in prison. Or they both go blabbing and they both end up serving five years. Now, collectively, the most beneficial outcome is to serve only two years by not talking at all, and then the next best outcome for both individuals collectively would be that they both talk and they both have to serve five years, but not the full ten. However, individually, if we're not talking about collectively, individually the best outcome is you talk and hope the other one doesn't. That way, you can put on your dancing shoes, just strut on out of the building, and the other one stays behind. But worst case scenario, you talk, the other person also talks, and you just serve half the full sentence each. Actually, worst case scenario is you decide not to talk and the other person does talk, and then you're looking at ten years in jail.
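To make those payoffs concrete, here is a minimal Python sketch of the dilemma as just described. The sentence lengths are the episode's numbers; the function name is my own. It shows why, from any single prisoner's point of view, confessing is the best response no matter what the other prisoner does:

```python
# Minimal sketch of the prisoner's dilemma payoffs described above.
# Sentences (in years) are indexed by (my_choice, their_choice).
SENTENCE = {
    ("silent", "silent"): 2,    # neither talks: two years each
    ("silent", "confess"): 10,  # you stay quiet, they talk: you serve ten
    ("confess", "silent"): 0,   # you talk, they stay quiet: you walk free
    ("confess", "confess"): 5,  # both talk: five years each
}

def best_response(their_choice: str) -> str:
    """Return the choice that minimizes your own sentence,
    given what the other prisoner does."""
    return min(["silent", "confess"],
               key=lambda mine: SENTENCE[(mine, their_choice)])

# Confessing wins in both cases, which is why game theory says
# loyalty has no place in this game.
for theirs in ("silent", "confess"):
    print(f"If they choose {theirs!r}, your best response is {best_response(theirs)!r}")
```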
Speaker 1: Well, when it came to Silicon Valley Bank, the real world scenario went like this. If everyone had remained chill, their money would have been safe. SVB had overextended its investments in government-backed securities, which would take years to mature. And this was because SVB wasn't issuing as many loans once interest rates had gone up significantly. Venture capitalists weren't seeking loans so much. Plus, they were already flush with cash, and loans are the way that banks make money for the most part. So instead the bank was investing in longer term investments that would have a modest payout, but it would have a payout once the investments matured. But that would mean that if everyone just, you know, cooled their jets and kept their money in place, things would probably have been fine. SVB would have likely survived. But instead the prisoners, the customers of SVB in this case, chose to pull their money out, because the thinking went something like this. If everyone else takes out their money and I don't, then SVB could shut down, and then my money will be stuck, it'll stop existing, or I won't be able to get to it. And I need my money. It's what lets me buy all that stuff like private jets and politicians. So I'm gonna go get my money out before I lose out on that option. The problem was that some very big players who had a whole lot of money in SVB did this, including the heads of venture capitalist groups, who then urged all of their clients to do the same. And so there was a run on the bank, and SVB could not cover all the withdrawals without selling off assets at a huge loss, and that put SVB in a very precarious situation. In fact, precarious enough that the US government had to swoop in and take over the bank and guarantee all the customers that they would still be able to access their money.

Speaker 1: So enough prisoners followed the sure-thing principle and they screwed over everybody else in the process. Which is not a big surprise, because generally there's an agreement that taking the tactic of confessing makes the most sense from a game theory perspective, and that loyalty has no place in the game. That if you are loyal, the best you can hope for is two years in prison and the worst is ten. So it makes more sense to confess and either screw over the other person or you both get screwed. So that's the thinking behind the prisoner's dilemma, and like I said, we kind of saw it play out with the collapse of Silicon Valley Bank. Now, one of my favorite thought experiments that has some connection to the tech industry is the Ship of Theseus, which dates back to at least around five hundred to four hundred BC.
Speaker 1: And the Ship of Theseus idea goes like this. So you had the Greek hero Theseus. He had a ship, and he ended up docking that ship, and people were preserving the ship for his eventual return. And long after the hero himself had faded away, the ship remained preserved. But of course, over time, pieces of the ship need to be replaced. You know, maybe the sails rip and tear, so you need to put new sails on the ship. Maybe rot sets into part of the deck, so you have to rip that out and replace it with new planking, and so on. And eventually, over the course of time, maybe it's decades, maybe it's even centuries, you gradually replace every single piece of the ship. So you ultimately arrive at a point where no element in the Ship of Theseus is the original component. Theseus himself never touched anything on the ship at this point. So would you still call it the same ship? If not, when did it officially stop being the Ship of Theseus? Because obviously, if you took possession of the ship right after Theseus got tossed off a cliff by Lycomedes, and then on your first day you had to replace a sail, you would still call it the Ship of Theseus, right? It's just one sail that you replaced. The ship itself is still the same, and you know, a single sail does not change the identity of the overall ship. But is there a point where that does happen, where the ship's identity changes?

Speaker 1: In tech, one way this thought experiment can manifest is when a company undergoes digitalization. Companies that have been around for decades have various systems and processes in place that predate digitalization. And I always struggle over that word, so you're gonna hear me stumble a lot. But as a result, to modernize, the leaders of these companies have to decide when, if ever, to convert old processes into new ones in order to stay current and to avoid problems with outdated legacy components.
Speaker 1: Whenever I look at really big companies that have been around for like a century, I am often left wondering how they handled these transitions, or even if they tried to, because these legacy systems are often crucial to the business. The business grew around these systems, and so changing the system is hard, because you have so much other stuff that grew up around it and depends upon it. And the hardware becomes outdated. It can even get difficult or sometimes impossible to maintain or replace old equipment simply because no one makes that thing anymore. You know, that particular computer system may not even be available at any rate, so you have to figure out a different way to do things. Digitalization makes it easier to track progress and identify bottlenecks and such, but it also might mean having to take a slightly different approach to try and get a similar result as your legacy systems. So the new version isn't perfectly recreating the old one. It's just trying to reach the same conclusion. But in the process new things might pop up, unexpected complications or diversions, and thus you might be left wondering if the IBM of today is really the same company as the IBM that was founded in nineteen eleven. Actually, nineteen eleven was the founding of the Computing Tabulating Recording Company, which was the precursor to IBM. But you get my point.

Speaker 1: There are some thought experiments that are specific to computing problems. One of those is called the Two Generals problem, which focuses on the issues you face when you try to establish communications across unreliable connections. So with networks, this becomes a big deal, right? The basis of Internet connections largely falls to figuring out the most reliable way to deliver information to another system that's connected to that network. But the basic Two Generals problem goes something like this. You have an enemy that's in control of a central valley, and you have two generals. Each of them is in charge of their own army.
Speaker 1: Each army is in a valley that neighbors this central valley, so essentially you're flanking the enemy. You've got one army on the left, one army on the right. They're both in their own valleys, and the enemy is in the valley in between the two. So your goal is to establish a time for both generals to attack the enemy in the middle at the same time, because the enemy is too well entrenched and too strong for either army to defeat it on its own. If only one of your armies attacks, it's going to get wiped out. So really the only hope for your victory is to have a coordinated attack. Now, complicating things is that the only way the two generals are able to communicate with one another is to send a messenger across enemy lands to reach the other general, and any messenger runs the risk of being caught in the process.

Speaker 1: So let's say that you determine, before they set out, that General A is in charge of establishing an attack time. And so General A writes, we attack at dawn in three days, and sends a messenger out to travel to General B. Well, General A doesn't know if the messenger makes it to General B. So in three days at dawn, there's a risk that General B never got the message, and if General A attacks, it's going to mean a loss, because B won't be participating in a simultaneous attack. But what if General B did get the message and then sends a confirmation back? Message received, we attack in three days at dawn. Well, that messenger might end up being intercepted. So now in three days' time, General B isn't sure if General A knows that everything is good to go as scheduled. So maybe General B hesitates to avoid defeat, because General B doesn't know if General A is aware of this. Of course, General A, receiving the reply, could try to send their own message back to General B. But maybe that messenger gets caught along the way, and this goes back and forth. And the challenge is you cannot be confident that any one message made it to the correct destination. So how you design a communication system where you're reasonably certain that messages are going through becomes a challenge. The thought experiment shows that uncertainty in communication systems isn't a problem that can necessarily be outright solved, but it perhaps can be mitigated to a point where all sides are comfortably communicating.
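As an illustration, here is a small Monte Carlo sketch under assumed numbers (a thirty percent chance each messenger is caught; the structure is mine, not the canonical impossibility proof). Sending more messengers shrinks the chance of an uncoordinated attack but never drives it to zero, which is the heart of the problem:

```python
import random

# Illustrative Two Generals sketch: General A commits to the attack and
# sends the order via n messengers, each caught with probability LOSS.
# General B attacks only if at least one copy of the order arrives.
# Retries mitigate the risk but never remove it: failure probability
# is LOSS**n, which is small for large n but never exactly zero.
LOSS = 0.3

def coordinated(n_messengers: int) -> bool:
    order_arrived = any(random.random() > LOSS for _ in range(n_messengers))
    a_attacks = True           # A committed the moment it sent the order
    b_attacks = order_arrived
    return a_attacks == b_attacks

for n in (1, 2, 5, 10):
    trials = 100_000
    ok = sum(coordinated(n) for _ in range(trials))
    print(f"{n:2d} messengers: coordinated {ok / trials:.2%} (theory {1 - LOSS**n:.2%})")
```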
Speaker 1: Okay, we've got more to say about thought experiments and philosophy, but let's take a quick break.

Speaker 1: All right, we're back. Let's talk a little bit more about computing thought experiments. We just mentioned that it's hard to be certain about communications in uncertain situations. You know, you can do the best you can and limit the chances for messaging to fall through, but you can't ensure that it is perfect. That's the purpose of the Two Generals thought experiment. But let's talk about a different one. Let's talk about philosophers and spaghetti. Seriously, there is a thought experiment called the dining philosophers problem. This one is more about the sharing of computational resources and avoiding a deadlock situation, or a situation in which one computational process is hogging all the resources and all the other processes that need to run on that same machine can't. So to understand this, let's first recall that back in the old days, before we got to microcomputers and minicomputers, a computer system generally consisted of a big, centralized mainframe computer that you would access through a data terminal, a dumb terminal, which could include very basic input devices like a keyboard and a very basic output device like a monitor. But the dumb terminal wouldn't have any computational ability itself. Like, it would look kind of like a desktop computer, but instead it's literally just a monitor and a keyboard.
Speaker 1: It's connecting to this centralized computer, which likely has lots of other dumb terminals connected to it, with other people also accessing the centralized computer. So what you really have is a shared computational resource that is distributed across all these different dumb terminals. Computers typically handled all this by dealing with each terminal one at a time, in sequence, but at really fast speeds, so it felt like it was pretty responsive and that you were doing everything more or less in real time. And it was called time sharing. Every person at a data terminal was sharing time with this computer. Time was really precious with these things too. But how do you make sure that all the different processes slash terminals are able to access the computer fairly? How do you avoid situations where all the demands are coming in at a point that effectively locks the entire system where it can't do anything?
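Here is a minimal sketch of that round-robin style of time sharing (the terminal names and job lengths are invented for the demo): the computer gives each terminal one small slice of time in sequence, so no terminal ever waits more than one full cycle:

```python
from collections import deque

# Minimal round-robin time-sharing sketch. Each "slice" stands for one
# small unit of CPU time; terminal names and job lengths are hypothetical.
jobs = deque([("terminal-1", 3), ("terminal-2", 1), ("terminal-3", 2)])

tick = 0
while jobs:
    name, remaining = jobs.popleft()    # next terminal in sequence
    print(f"t={tick}: serving {name}")  # one slice of CPU time
    tick += 1
    remaining -= 1
    if remaining:                       # unfinished work goes to the back
        jobs.append((name, remaining))
# No terminal waits more than one full cycle, which is the fairness
# the episode is describing.
```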
Speaker 1: This brings us to the dining philosophers problem. So imagine we've got ourselves a big round table, and we have placed five plates around this table. There's a chair at each plate, and in between the plates there is a single fork, so you have plate, fork, plate, fork, plate, etc. So five plates, five forks, five chairs. So far, so good. But the problem is that the philosophers who are coming to dine are there to eat spaghetti. And the big old heaping plate of spaghetti is glorious, but the only way to eat it is to use two forks simultaneously. So you need a fork in each hand in order to be able to wind up enough spaghetti to shove into your gob, and then you can eat your spaghetti. When you're not eating, you can think, because you're a philosopher. So you're either thinking or you're eating. That's all you're doing at this table. But obviously, if you go without eating for too long, you'll starve yourself to death.

Speaker 1: Now, since you need two forks to eat, and there are only five forks at the table, if you grab the fork on your left and the fork on your right, it means that the people sitting to your left and to your right can't eat, right? They have access at most to one fork. They don't have access to the second one, because those are the ones that are in your hands. Once you put the forks down, they become available, and then the people on either side can pick that fork up and potentially eat, unless of course the fork on their opposite side has already been taken by the other two people at the table. So they could be out of luck, and you have to figure out how to juggle this. Worse than that, though, let's say that you've set a rule that whenever a fork is laid down, you pick it up immediately, so that you're always at least half ready to start eating. But everyone follows this rule, and at the very beginning of the meal, everybody reaches over to their right and picks up a fork. Well, now all five forks are in hand, five different hands in fact, which means no one can eat or think, because they only have one fork in their hand. They need two forks to eat. If they set down the fork, then they're going to lose it, so they're holding onto it. It deadlocks the whole system.

Speaker 1: So how do you fix this? Well, there are actually different solutions to this problem, and they're all meant to try and avoid deadlock. And then there are other solutions that are meant to ensure fairness, because that's not a guarantee in this system. For example, you might set a rule that assigns a number to each one of the forks, and maybe the rule is that you can only grab the lower numbered fork that's in front of you first, and then you can grab whichever fork has the higher number. And everyone grabs their lowest number. But someone's going to be left without being able to do that, because they will only be left with the number five fork, and there is no lower numbered fork available to them. They cannot follow the rule, because their lower numbered fork has been grabbed by someone else. And this allows one person to grab the number five fork, because they've already grabbed the lower number, and they can eat. Then they can set down their forks, and this can then continue with everybody getting a chance, assuming you have other rules in place to help guide things.
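That fork-numbering rule is the resource-ordering trick used in concurrent programming. Here is a minimal runnable sketch (the philosopher count, bite count, and timing are arbitrary demo choices): because every philosopher locks their lower-numbered fork first, the circular wait can never form and the table never deadlocks:

```python
import threading
import time

# Dining philosophers with the resource-ordering fix described above.
# Forks are numbered 0..4, and every philosopher picks up their
# lower-numbered fork first, which breaks the deadlock cycle.
N = 5
forks = [threading.Lock() for _ in range(N)]

def philosopher(i: int) -> None:
    left, right = i, (i + 1) % N
    first, second = min(left, right), max(left, right)  # lower number first
    for _ in range(3):                      # three bites each, then stop
        with forks[first], forks[second]:   # both forks in hand: eat
            print(f"philosopher {i} eats")
            time.sleep(0.01)
        # forks released here; time to think before the next bite

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("everyone ate; no deadlock")
```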
Speaker 1: Now that's just one approach, mind you. There are lots of others. Like, there's one where there's an arbiter who is there to determine when each person is allowed to eat. They're essentially the one given permission to grant the privilege of eating to specific people and to make sure that no one overeats. Like, that's another approach. So the point of the whole thought experiment is to get people considering the challenges of using a limited number of resources for multiple entities in such a way that no one goes without for too long, and there's a means of managing things. It's meant to give computer scientists a heads-up on things they have to consider when they're designing their systems. So that's where thought experiments are really valuable. It's before you've started to build anything, right? You haven't dedicated assets and time and effort to building something. You're thinking it through first and saying, how do I avoid this perceived problem so that we don't actually encounter it in the wild and then have to figure out a solution? How can I solve it just by thinking about it? That is what these thought experiments give you the opportunity to do, assuming that the thought experiment is constructed properly, which is not always the case.
Speaker 1: There are thought experiments that later on people picked apart and said, this thought experiment is predicated upon assumptions that we can't be certain are true, and therefore you can't really use this thought experiment without acknowledging that it could just all be for nothing, because the actual premises aren't proven. But then there are the various thought experiments and ethics problems that come into play when you start to talk about artificial intelligence. Now, I've said many times in this show, AI covers a huge amount of ground. I think AI is a dangerous term. Not dangerous in the sense of it potentially being harmful to humans, but rather it's such a huge discipline that it's very easy to be reductive when you're talking about AI, and to think that when you say AI, you're just talking about machines thinking as if they were people. That's one version of what AI could be. It's generally referred to as strong AI. But AI covers a lot of ground. It is a multidisciplinary technology, and it encompasses relatively constrained concepts like computer vision or language recognition, and then it ranges all the way up to big ideas like strong or general AI, capable of processing information in a way that is at least human-like.

Speaker 1: Well, one of the elements, in fact one that's closest to strong AI, that's in the thought experiment world is the thought experiment of an artificial brain or artificial mind. What would it take to produce an artificial brain? So, something that we have created that is capable of some form of thought, something that we would recognize as thought. So there's a question about whether or not it's even possible to create an actual artificial brain and what that could entail. Some argue that what it will take is just a sufficiently complex computer system that's emulating how our brains work, so like an artificial neural network.
Speaker 1: If we were able to build an artificial neural network that was big enough and fast enough on powerful enough computer systems, then potentially we would see the formation of an artificial brain. That's how that argument goes. And maybe we wouldn't even need to do an actual artificial neural network. Maybe the collective interconnections of the Internet could allow an intelligence to emerge. You know, maybe it would even be transitory in nature. Maybe it would be an intelligence that emerges and fades away, and maybe so quickly that we can't even ever recognize it. That it's elements of an intelligence that, because it's so transitional, we don't recognize it as such. And maybe it is possible to create a brain or mind out of such complex connections between high end computer systems. But the truth of the matter is we don't have a full understanding of how our minds work, the actual gray matter that's in our heads. We don't have a full understanding of that. So, because we don't fully understand how our brains work, there are some who argue that, you know, it's possible there's some element in our minds that we have yet to identify that will be necessary for us to understand if we are to ever realize a true artificial brain. That without this unknown but perhaps fundamental component, it just won't happen. Maybe we would fall upon it by accident, or maybe we'll hit a limit that we just can't get around without first having a deeper understanding of how our own brains work.

Speaker 1: Alternatively, it might be possible to create an artificial brain or mind without attempting to simulate or replicate how human minds work. Proponents of this argument have pointed out that for much simpler tasks, relatively speaking, like human flight, we ultimately abandoned technologies that were attempting to replicate how birds fly.
Speaker 1: I'm sure you've seen old film footage of experiments in heavier than air flight where people had strapped wings to their arms and they were flapping them up and down, or they had some mechanical contraption that was moving wings up and down, and it was all an attempt to replicate how birds fly in the air. But these didn't really work, and ultimately we found that by going with a fixed wing aircraft design and abandoning our foolish attempts to replicate what birds are doing, we could actually succeed. We ended up creating successful flying machines even though we were not directly mimicking birds and nature. So by that argument, you could say, well, maybe creating an artificial brain won't involve mimicking our own neurological systems at all. Maybe it'll be through some other means, such as those complex connections on the Internet, for example, where intelligence would be an emergent property. So that is another approach toward looking at an artificial brain.

Speaker 1: As for what I believe, I think that with enough complexity and power, maybe we could see something like an artificial brain emerge. But I honestly don't know. I do think that the brain, the mind, is completely engulfed and encompassed by the gray matter in our heads. I don't think there's anything metaphysical that's going on there. That's my own personal belief, but I don't know that for sure. It's just my belief, partially backed up by the fact that people who have encountered some form of brain injury often have very different experiences from that point forward. And to me, that means that consciousness and experience are very tightly locked with the actual organ of the brain. But that doesn't mean that, you know, there's not something else going on that I'm missing that would be necessary. I just don't know. All right, we're going to take another break. When we come back, we've got a few more thought experiments we need to talk about, including some golden oldies.
Speaker 1: Okay, let's talk about the Turing test, because you could argue that this is kind of a thought experiment. So Alan Turing based this off a game called the imitation game, and the idea behind this is that you have a contestant who gets to communicate with someone without being able to see or hear this person. So maybe they're typing things out on a typewriter. They submit it and then they get a typed response, and their goal is to try and figure out with whom they are communicating. So one version of the imitation game has them talking to someone who could be a man or could be a woman, and if it is a woman, the woman is posing as if she were a man. So it's the contestant's job to suss out whether or not the person on the other end of the communication chain is actually a man or a woman pretending to be a man. So let's ignore the dated concept of a binary approach to gender. You know, obviously that's definitely changed since then. Turing was suggesting that you could play this same sort of game, but instead of having a woman pose as a man, you could have a machine posing as a human. The contestant would have to decide whether or not they were interviewing a person or a machine pretending to be a person. And if the machines were reliably able to fool contestants into thinking that they are chatting with another human being, then the machine would be said to pass the Turing test.

Speaker 1: Now, Turing wasn't actually saying that such a machine, essentially a chatbot, is intelligent or was capable of thought. Instead, he was saying the machine could simulate intelligence to a degree that a person might not be able to tell the difference. And after all, each individual doesn't know for sure that the people they encounter possess intelligence. If you met me and we had a conversation, you wouldn't be sure that I am actually intelligent.
Speaker 1: You would know you're intelligent, because you know your own experience, right? You've had your experiences. You know you possess intelligence. When you talk to someone else, you assume they also possess intelligence. But you can't know for sure, because you cannot occupy their experience. But we grant the assumption that the people we encounter have intelligence. Turing was kind of cheekily suggesting that perhaps we should extend the same courtesy to machines that appear to possess the same qualities. Whether or not the machine is actually intelligent or is capable of thought is kind of moot. If the outcome seems to mimic intelligence, why shouldn't we just go ahead and say the machine is intelligent? Does it really matter if the machine can actually think or not?

Speaker 1: Now, philosopher John Searle said, heck yeah, it matters if we say computers think and they don't. And in fact, he went so far as to say computers are not capable of having a mind at all, because ultimately they are just machines designed to follow instructions. Thus, you know, with a machine that follows a program, the program could be incredibly sophisticated and complicated, but it's still ultimately just a list of instructions that the computer has to follow. The computer can't divert away from those instructions. It might appear to, but it can't go off book. You know, it can't go off the script and start to improvise. And at no point does this become something as human as a mind is. So to illustrate his perspective, he proposed a thought experiment called the Chinese Room. Now, I've talked about the Chinese Room in a lot of other episodes, so I'll try to keep this kind of short. Searle argues that a computer running a program is a bit like taking a non Chinese speaking person, someone who does not understand Chinese. They can't speak it or read it.
Speaker 1: You put this person into a room that just has a door with, like, a mail slot in it, and inside the room is a desk, there's paper, there's a pen, there's plenty of ink, and there's a giant book of instructions. So once in a while, someone shoves a piece of paper through the little mail slot in the door, and the piece of paper has a Chinese symbol written on it. The person in the room has a job. They take that piece of paper with a Chinese symbol written on it. They go through their big book of instructions looking for that symbol, and they ultimately will find it, and then they will produce a response based on what's in the book. They'll have to draw a different Chinese symbol. They just ape the instruction that's in the book. Then they put that through the mail slot in the door, and they're done. On the other side of the door, you have someone who has brought a question to the room. You know, it's written in Chinese symbols. So they submit a question, and then after a bit of time, they get an answer. And to them, it appears that whoever is behind the door understands Chinese symbols and can respond in kind. But the fact is, the person in the room doesn't understand Chinese. They're just following very thorough instructions. But they have no idea what's being asked or even what the response means. They don't know what they're saying. They're just copying what's in the book. They're following the program. They're matching questions with answers in a language they don't understand. They don't even necessarily know that they're questions. They're just submitting whatever the corresponding response should be. So Searle says that machines are essentially doing this. That's what they're doing. They're producing responses based on input, but they have no understanding of either the input or the response. They're just following instructions.
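You could reduce the whole room to a few lines of code. In this sketch the rulebook entries are invented placeholders; the point is only that the lookup produces fluent-seeming replies while no step in the program involves understanding:

```python
# The Chinese Room as a lookup table (illustrative; these "rulebook"
# entries are invented examples, not Searle's own).
RULEBOOK = {
    "你好吗": "我很好",    # the operator matches shapes, nothing more
    "你是谁": "一个朋友",
}

def room_operator(slip: str) -> str:
    """Follow the book: find the symbol, copy out the listed response.
    Nothing here requires knowing what any symbol means."""
    return RULEBOOK.get(slip, "请再说一遍")  # default reply from the book

print(room_operator("你好吗"))  # looks fluent from outside the door
```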
Speaker 1: When you engage in a conversation with ChatGPT, ChatGPT doesn't actually understand what you're talking about. It doesn't comprehend the questions. It doesn't understand context or anything like that. It just builds up responses based on a really sophisticated program. But these responses, even if they correctly answer your question, do not show that ChatGPT actually understands what is going on. It's just producing a result. Searle says this is because the machine ultimately cannot think. It cannot be said to have a mind. And he further argues that strong AI is a dead end. We're never going to get there. It is inherently impossible. And there's actually a lot of discussion and debate around the Chinese Room thought experiment. There are people on different sides of the matter, arguing for or against its merits and the interpretation of it. But again, this gets into a lot of details that we don't really have time to dive into for this episode, and I have done episodes on the Chinese Room thought experiment in the past. So let's move on.

Speaker 1: For a different approach to AI, let us turn to Valentino Braitenberg, who was a neuroscientist and an important figure in the field of cybernetics. And I feel like I need to define cybernetics, because I had a complete misunderstanding of what that term meant until I was doing research. Cybernetics is a discipline concerned with communications and automatic control systems in both machines and living things, as defined by Oxford Languages. The word has its origins in the Greek word kybernetikos, which means good at steering. I didn't know that before I researched this episode. So you could describe the act of a human picking up a teacup from a saucer on the table as a cybernetic series of actions. And you would first think of, like, the human brain as a controller, and it receives information from a sensor, the human's eyes. And this gives information about the teacup: where the teacup is located, its distance from the human in question, the teacup's orientation with reference to the human's position, etc.
606 00:38:15,600 --> 00:38:18,400 Speaker 1: This discipline plays an important part not just in our 607 00:38:18,480 --> 00:38:22,560 Speaker 1: understanding of organisms and their behavior, but also in how you 608 00:38:22,600 --> 00:38:25,640 Speaker 1: could create things like artificial limbs that interface with our 609 00:38:25,680 --> 00:38:29,399 Speaker 1: brains and have those artificial limbs behave similarly to an 610 00:38:29,520 --> 00:38:35,440 Speaker 1: organic limb. We have seen some incredibly sophisticated robotic 611 00:38:35,520 --> 00:38:38,680 Speaker 1: limbs that can do this sort of thing, but they 612 00:38:38,719 --> 00:38:43,719 Speaker 1: have to really be grounded in this study to move 613 00:38:43,719 --> 00:38:47,279 Speaker 1: in a way that's natural and actually achieves whatever the 614 00:38:47,320 --> 00:38:50,680 Speaker 1: outcome is that the person who is attached to that 615 00:38:50,760 --> 00:38:53,200 Speaker 1: limb wants it to do. This is not something that 616 00:38:53,280 --> 00:38:56,160 Speaker 1: just automatically happens. You have to build it in. So 617 00:38:56,200 --> 00:38:59,960 Speaker 1: in the mid nineteen eighties, Braitenberg published a book called Vehicles. 618 00:39:00,680 --> 00:39:06,239 Speaker 1: In this book, he presented hypothetical self-operating machines. So 619 00:39:06,280 --> 00:39:08,400 Speaker 1: these were not actual machines, they were just sort of 620 00:39:08,440 --> 00:39:11,839 Speaker 1: a thought experiment. He said, imagine if you had 621 00:39:11,840 --> 00:39:15,200 Speaker 1: a machine that did this, and these machines would exhibit behaviors 622 00:39:15,200 --> 00:39:19,840 Speaker 1: that could become increasingly intricate and complicated and dynamic. But 623 00:39:20,080 --> 00:39:23,520 Speaker 1: ultimately you could start to boil down those behaviors as 624 00:39:23,600 --> 00:39:27,520 Speaker 1: following simpler rules. And if you just understood all the 625 00:39:27,600 --> 00:39:30,759 Speaker 1: different rules in all the different situations, you would be 626 00:39:30,800 --> 00:39:35,080 Speaker 1: able to even predict what something would do to some degree.
So, 627 00:39:35,120 --> 00:39:38,759 Speaker 1: for example, a machine might have an optical sensor and 628 00:39:38,800 --> 00:39:42,160 Speaker 1: it can detect if something is in front of the machine. 629 00:39:42,200 --> 00:39:45,520 Speaker 1: So imagine you've mounted the sensor to the front of 630 00:39:45,560 --> 00:39:51,520 Speaker 1: a little four wheeled vehicle, and if the sensor doesn't 631 00:39:51,560 --> 00:39:54,120 Speaker 1: detect anything in front of it, it allows power to 632 00:39:54,160 --> 00:39:56,880 Speaker 1: go to the motor that drives those wheels, and the 633 00:39:56,960 --> 00:40:01,440 Speaker 1: little robotic car will move forward. But if something gets 634 00:40:01,480 --> 00:40:04,600 Speaker 1: in its way, then maybe it cuts power to the 635 00:40:04,640 --> 00:40:08,880 Speaker 1: motors and the wheels stop turning, or maybe it changes 636 00:40:09,280 --> 00:40:12,920 Speaker 1: the rate at which different wheels turn so that it 637 00:40:13,000 --> 00:40:15,719 Speaker 1: can rotate a bit, it can turn out of the way 638 00:40:15,760 --> 00:40:19,799 Speaker 1: of whatever the obstacle is. I'm sure you've had experience 639 00:40:19,840 --> 00:40:21,759 Speaker 1: with little toys that do this sort of thing, where 640 00:40:21,760 --> 00:40:23,920 Speaker 1: there's some sort of simple optical sensor so that if 641 00:40:23,920 --> 00:40:27,440 Speaker 1: it gets close to a wall, it stops and turns 642 00:40:27,440 --> 00:40:31,320 Speaker 1: and moves in a different direction. Heck, your typical robot 643 00:40:31,400 --> 00:40:35,520 Speaker 1: vacuum cleaner will do this, right? So this is something 644 00:40:35,520 --> 00:40:38,800 Speaker 1: that we've had some experience with at this point. And 645 00:40:39,520 --> 00:40:42,919 Speaker 1: Braitenberg actually went further and hypothesized that you could 646 00:40:42,920 --> 00:40:47,319 Speaker 1: have machines that would follow slightly more complicated rules in 647 00:40:47,360 --> 00:40:50,800 Speaker 1: such a way that it could imply motivations behind movements, 648 00:40:51,160 --> 00:40:53,720 Speaker 1: things that we would normally associate with humans, like fear 649 00:40:53,920 --> 00:40:57,160 Speaker 1: or aggression, but really it could just be the machine 650 00:40:57,200 --> 00:41:00,840 Speaker 1: responding to different situations in a predetermined way. So let 651 00:41:00,920 --> 00:41:04,040 Speaker 1: me give you a simple example. Maybe you've got this 652 00:41:04,080 --> 00:41:06,600 Speaker 1: optical sensor that's on the front of this little four 653 00:41:06,600 --> 00:41:10,520 Speaker 1: wheeled vehicle, and when it detects something, it tries to 654 00:41:10,560 --> 00:41:13,200 Speaker 1: determine whether or not the thing ahead of it is 655 00:41:13,280 --> 00:41:16,520 Speaker 1: bigger or smaller than it is. If it's smaller, maybe 656 00:41:16,640 --> 00:41:19,759 Speaker 1: it accelerates toward it, as if to intimidate it. And 657 00:41:19,800 --> 00:41:23,399 Speaker 1: if it's larger, maybe it turns and accelerates away from 658 00:41:23,440 --> 00:41:27,200 Speaker 1: the object, as if it's in fear. Now, Braitenberg's hypothetical 659 00:41:27,280 --> 00:41:30,880 Speaker 1: vehicles didn't really need any cognitive processes. They would just 660 00:41:31,040 --> 00:41:34,319 Speaker 1: follow these simple instructions.
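Here's a minimal sketch of that kind of rule set, assuming a sensor that reports just two things, whether an object is ahead and whether it looks bigger than the vehicle. The readings and motor commands are invented for illustration.

```python
# A Braitenberg-style controller: sensor readings map straight to motor
# commands, with no cognition in between. All the names here are made up.
def decide(object_ahead: bool, object_is_bigger: bool) -> str:
    """Turn one sensor reading into one motor command."""
    if not object_ahead:
        return "power the wheels, drive forward"
    if object_is_bigger:
        return "turn and accelerate away"  # an observer might call this fear
    return "accelerate toward the object"  # an observer might call this aggression

# A clear path, a larger object, and a smaller object:
print(decide(object_ahead=False, object_is_bigger=False))
print(decide(object_ahead=True, object_is_bigger=True))
print(decide(object_ahead=True, object_is_bigger=False))
```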
But if you were to put 661 00:41:34,360 --> 00:41:38,320 Speaker 1: them in a complex enough environment with enough different sets 662 00:41:38,320 --> 00:41:43,719 Speaker 1: of instructions, these behaviors would potentially be very dynamic and complicated, 663 00:41:43,800 --> 00:41:47,719 Speaker 1: perhaps complicated enough to imply a deeper intelligence, even though 664 00:41:47,800 --> 00:41:52,880 Speaker 1: ultimately they were just following simple rules. Speaking of vehicles, 665 00:41:53,680 --> 00:41:56,520 Speaker 1: let's talk about the trolley problem. And you've likely heard 666 00:41:56,560 --> 00:41:58,279 Speaker 1: of this one. It's one of the more famous thought 667 00:41:58,320 --> 00:42:01,799 Speaker 1: experiments that relates to ethics. The basic version is that 668 00:42:01,840 --> 00:42:05,240 Speaker 1: there's a trolley hurtling down some tracks, and the brakes 669 00:42:05,239 --> 00:42:08,600 Speaker 1: on the trolley aren't working, and if the trolley keeps going, 670 00:42:08,840 --> 00:42:11,279 Speaker 1: it will hit a group of five people, killing all 671 00:42:11,320 --> 00:42:13,840 Speaker 1: of them. But you're standing at a switch. If you 672 00:42:13,880 --> 00:42:16,800 Speaker 1: throw the switch, the trolley will divert onto a separate 673 00:42:16,840 --> 00:42:20,520 Speaker 1: set of rails and strike one person, killing that one, 674 00:42:20,800 --> 00:42:24,200 Speaker 1: but the other five people will be saved. So do 675 00:42:24,280 --> 00:42:28,680 Speaker 1: you throw the switch, dooming one person and saving five? 676 00:42:29,760 --> 00:42:32,000 Speaker 1: There's actually a lot of stuff to consider here. For example, 677 00:42:32,719 --> 00:42:36,040 Speaker 1: do you consider it more ethical to make an active choice? 678 00:42:36,800 --> 00:42:39,680 Speaker 1: Is it akin to murdering someone if you throw the switch? 679 00:42:39,760 --> 00:42:43,839 Speaker 1: Are you killing that person, like you're condemning them to die? 680 00:42:44,640 --> 00:42:47,640 Speaker 1: But if you choose not to act, does that exonerate 681 00:42:47,640 --> 00:42:50,600 Speaker 1: you from the death of the five people? You could say, well, 682 00:42:50,640 --> 00:42:53,040 Speaker 1: they'd be dead if I weren't at the switch, there'd 683 00:42:53,040 --> 00:42:54,839 Speaker 1: be no one to change it. They would have died 684 00:42:54,920 --> 00:42:59,040 Speaker 1: either way. The only thing that's different is 685 00:42:59,080 --> 00:43:00,560 Speaker 1: that I happened to be at the switch. Does that 686 00:43:00,600 --> 00:43:03,160 Speaker 1: make me a bad person for not throwing the switch? 687 00:43:03,960 --> 00:43:05,759 Speaker 1: There are variations of this as well that make it 688 00:43:05,800 --> 00:43:09,360 Speaker 1: even more complicated. For example, one early version had the 689 00:43:09,440 --> 00:43:12,440 Speaker 1: person at the switch having to choose between saving the 690 00:43:12,480 --> 00:43:17,120 Speaker 1: five people or condemning their own child to death. Other 691 00:43:17,239 --> 00:43:21,040 Speaker 1: versions replaced the trolley. What if it's an incoming missile 692 00:43:21,239 --> 00:43:23,440 Speaker 1: and you have the ability to divert a missile that 693 00:43:23,640 --> 00:43:26,200 Speaker 1: was heading toward a city? But if you divert it, 694 00:43:26,239 --> 00:43:28,920 Speaker 1: that missile is going to hit a small town instead.
695 00:43:29,480 --> 00:43:32,560 Speaker 1: So if the missile hit the city, more people would die, 696 00:43:33,040 --> 00:43:35,960 Speaker 1: but there could still be some survivors in the city. 697 00:43:36,320 --> 00:43:38,240 Speaker 1: If it hits the town, it's going to essentially wipe 698 00:43:38,239 --> 00:43:42,640 Speaker 1: out the entire town population. Fewer people overall will die 699 00:43:42,800 --> 00:43:45,560 Speaker 1: because the town is smaller than the city, but essentially 700 00:43:45,600 --> 00:43:49,160 Speaker 1: everyone in the town dies; in the city, it's a 701 00:43:49,280 --> 00:43:53,080 Speaker 1: massive but not entire part of the population. Well, what 702 00:43:53,120 --> 00:43:55,319 Speaker 1: does all this have to do with technology? These are 703 00:43:55,320 --> 00:43:57,760 Speaker 1: actually the sort of questions that engineers have to wrestle 704 00:43:57,760 --> 00:44:01,799 Speaker 1: with as they design stuff like autonomous systems. When we 705 00:44:01,840 --> 00:44:04,799 Speaker 1: look at the possibility of driverless vehicles, for example, we 706 00:44:04,920 --> 00:44:09,040 Speaker 1: have to consider how the vehicle will handle emergency situations. 707 00:44:09,520 --> 00:44:13,600 Speaker 1: So let's say a driverless car with passengers inside it 708 00:44:13,680 --> 00:44:16,239 Speaker 1: is motoring down the road and a person steps out 709 00:44:16,280 --> 00:44:19,759 Speaker 1: into the road suddenly, so it's too late for the 710 00:44:19,960 --> 00:44:22,719 Speaker 1: vehicle to brake. Does the driverless 711 00:44:22,760 --> 00:44:25,799 Speaker 1: vehicle instead veer out of the way, perhaps even going off road, 712 00:44:26,360 --> 00:44:28,879 Speaker 1: factoring in the fact that the passengers inside the car 713 00:44:28,960 --> 00:44:31,680 Speaker 1: have seat belts and air bags and other protective 714 00:44:31,680 --> 00:44:36,920 Speaker 1: measures around them, and thus prioritize the pedestrian's health? Or 715 00:44:37,840 --> 00:44:41,520 Speaker 1: does it instead prioritize the safety of the passengers and 716 00:44:42,120 --> 00:44:45,919 Speaker 1: make a decision that puts the pedestrian's safety at 717 00:44:46,160 --> 00:44:50,680 Speaker 1: considerable risk? Machines do not intrinsically know this stuff. So, 718 00:44:50,800 --> 00:44:54,080 Speaker 1: grim as it may seem, these are things that engineers 719 00:44:54,200 --> 00:44:58,000 Speaker 1: have to take into consideration as they build out complex 720 00:44:58,040 --> 00:45:02,720 Speaker 1: autonomous systems. Now, let me just finish up by touching 721 00:45:02,719 --> 00:45:06,320 Speaker 1: on a classic science fiction thought experiment: are we living 722 00:45:06,320 --> 00:45:09,040 Speaker 1: in a simulation? You've seen this idea explored in movies 723 00:45:09,080 --> 00:45:12,040 Speaker 1: like The Matrix series. And there is an interesting thought 724 00:45:12,040 --> 00:45:16,200 Speaker 1: experiment proposed by Nick Bostrom regarding simulated realities, and he 725 00:45:16,280 --> 00:45:20,760 Speaker 1: posits that at least one of several possibilities must be true, 726 00:45:21,320 --> 00:45:24,920 Speaker 1: namely, scenario one:
Humans are never going to reach a 727 00:45:24,960 --> 00:45:28,520 Speaker 1: point in which they can construct a simulated reality sophisticated 728 00:45:28,600 --> 00:45:31,400 Speaker 1: enough for the inhabitants of that reality to believe they 729 00:45:31,480 --> 00:45:35,000 Speaker 1: are quote unquote real. So, in other words, for whatever reason, 730 00:45:35,360 --> 00:45:37,960 Speaker 1: maybe we destroy ourselves before we get there, maybe we 731 00:45:38,040 --> 00:45:41,440 Speaker 1: just never develop technology sufficient to do it, but 732 00:45:41,520 --> 00:45:44,440 Speaker 1: we aren't able to create a computer simulation so robust 733 00:45:44,480 --> 00:45:47,239 Speaker 1: that the simulated beings inside of it have their own 734 00:45:47,360 --> 00:45:51,279 Speaker 1: kind of self-awareness. Two, that there are no other 735 00:45:51,320 --> 00:45:53,560 Speaker 1: civilizations out in the universe that are able to do 736 00:45:53,600 --> 00:45:58,240 Speaker 1: this, for whatever reason. Three, that we humans will one 737 00:45:58,320 --> 00:46:01,359 Speaker 1: day be able to do this, but we can't do 738 00:46:01,400 --> 00:46:04,680 Speaker 1: it yet, we just haven't reached 739 00:46:04,680 --> 00:46:06,839 Speaker 1: that point, and we're 740 00:46:06,880 --> 00:46:08,640 Speaker 1: the first to do it, like no one else has 741 00:46:08,640 --> 00:46:12,160 Speaker 1: managed to do it. Four, that we're actually living in 742 00:46:12,200 --> 00:46:15,279 Speaker 1: a simulation. The idea being that if it is 743 00:46:15,360 --> 00:46:19,080 Speaker 1: possible to build such a simulation where the beings inside 744 00:46:19,080 --> 00:46:22,439 Speaker 1: the simulation have self-awareness and can think and have 745 00:46:22,480 --> 00:46:25,000 Speaker 1: emotions and all this sort of stuff that we associate 746 00:46:25,080 --> 00:46:29,800 Speaker 1: with being humans and having experiences, then, if it is 747 00:46:29,840 --> 00:46:32,440 Speaker 1: possible to build such a thing, we are definitely in one, 748 00:46:32,760 --> 00:46:34,680 Speaker 1: or at least there's a fifty-fifty shot we are, 749 00:46:35,239 --> 00:46:42,680 Speaker 1: because if it is possible, it would be pretty egotistical 750 00:46:42,840 --> 00:46:45,520 Speaker 1: to suggest that we'd be the first to do it, 751 00:46:45,000 --> 00:46:48,400 Speaker 1: that it hasn't happened already, and that we are not in 752 00:46:48,440 --> 00:46:51,960 Speaker 1: fact a product of such a thing. There are a 753 00:46:52,000 --> 00:46:54,879 Speaker 1: lot of arguments that go into that. It's kind of fun. 754 00:46:55,560 --> 00:46:58,040 Speaker 1: I would argue, ultimately, it's moot because it's not a 755 00:46:58,120 --> 00:47:03,759 Speaker 1: falsifiable hypothesis. And ultimately we still have our own experiences 756 00:47:03,760 --> 00:47:06,759 Speaker 1: in our own lives. So even if it is a simulation, 757 00:47:06,880 --> 00:47:09,560 Speaker 1: it matters to us since we're in it, just 758 00:47:09,600 --> 00:47:11,759 Speaker 1: as much as it would matter if it's not a simulation. 759 00:47:12,120 --> 00:47:15,799 Speaker 1: So I say, simulation or not.
Go out there, be 760 00:47:15,960 --> 00:47:20,759 Speaker 1: good people, use critical thinking, use compassion, and use these 761 00:47:20,760 --> 00:47:23,000 Speaker 1: thought experiments to kind of guide you a little bit 762 00:47:23,040 --> 00:47:25,640 Speaker 1: and kind of suss out what's right and what's wrong 763 00:47:25,640 --> 00:47:29,000 Speaker 1: and what are some possible solutions to these problems. Like 764 00:47:29,040 --> 00:47:31,759 Speaker 1: I said, this is just a small collection. There are tons more. 765 00:47:31,800 --> 00:47:34,399 Speaker 1: I'll probably do more episodes in the future about different ones. 766 00:47:34,600 --> 00:47:37,840 Speaker 1: There's a whole bunch about quantum mechanics. But boy, howdy, 767 00:47:37,920 --> 00:47:41,520 Speaker 1: do those get heavy. So maybe we'll take another look 768 00:47:41,520 --> 00:47:44,120 Speaker 1: at these in a future episode. For now, I hope 769 00:47:44,120 --> 00:47:46,800 Speaker 1: you're all well, and I will talk to you again 770 00:47:47,680 --> 00:47:57,399 Speaker 1: really soon. Tech Stuff is an iHeartRadio production. For more 771 00:47:57,480 --> 00:48:02,040 Speaker 1: podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, 772 00:48:02,160 --> 00:48:04,160 Speaker 1: or wherever you listen to your favorite shows.