1 00:00:11,280 --> 00:00:14,520 Speaker 1: Empire of AI. You're using the word empire here. Yeah, 2 00:00:14,600 --> 00:00:16,480 Speaker 1: Can you tell me a little bit more about that decision? 3 00:00:16,720 --> 00:00:21,920 Speaker 2: Yeah, So the term empire in the title is specifically 4 00:00:21,960 --> 00:00:23,960 Speaker 2: an argument that I make in the book that we 5 00:00:24,079 --> 00:00:26,560 Speaker 2: really need to start thinking of companies like OpenAI 6 00:00:26,680 --> 00:00:28,320 Speaker 2: as new forms of empire. 7 00:00:29,800 --> 00:00:32,199 Speaker 1: Karen Hao is a journalist and the author of a 8 00:00:32,240 --> 00:00:35,920 Speaker 1: new book called Empire of AI: Dreams and Nightmares in 9 00:00:36,000 --> 00:00:39,560 Speaker 1: Sam Altman's OpenAI, which just came out last month. 10 00:00:39,880 --> 00:00:42,320 Speaker 1: In case you're not familiar with her work, she herself 11 00:00:42,400 --> 00:00:45,160 Speaker 1: is a data engineer, and she's also the first person 12 00:00:45,200 --> 00:00:48,240 Speaker 1: to ever really cover OpenAI, the company that makes 13 00:00:48,320 --> 00:00:51,240 Speaker 1: ChatGPT. On the surface, the book is a story 14 00:00:51,240 --> 00:00:53,760 Speaker 1: of the rise of OpenAI and the story of 15 00:00:53,800 --> 00:00:57,000 Speaker 1: its founder, Sam Altman, and if you like drama, let 16 00:00:57,000 --> 00:00:59,720 Speaker 1: me tell you, there's a lot of it. Someone could 17 00:00:59,800 --> 00:01:02,680 Speaker 1: definitely make a Netflix series about this. And there are 18 00:01:02,720 --> 00:01:06,600 Speaker 1: also comparisons to be made between Sam Altman and Steve Jobs, 19 00:01:07,040 --> 00:01:09,960 Speaker 1: and we will get to that. But before we do anything, 20 00:01:10,360 --> 00:01:12,280 Speaker 1: we should talk about a word that a lot of 21 00:01:12,280 --> 00:01:15,240 Speaker 1: the reviews I've seen are kind of glossing over, which 22 00:01:15,280 --> 00:01:20,160 Speaker 1: is weird because it's the first word in the title: empire. 23 00:01:20,120 --> 00:01:23,119 Speaker 2: And the reason is four different features that empires of AI 24 00:01:23,319 --> 00:01:26,520 Speaker 2: share with empires of old. The first one is that 25 00:01:26,560 --> 00:01:28,840 Speaker 2: they lay claim to resources that are not their own, 26 00:01:28,959 --> 00:01:32,039 Speaker 2: but they reinterpret the rules to suggest that those resources 27 00:01:32,040 --> 00:01:35,119 Speaker 2: were always their own. That refers to how these companies 28 00:01:35,280 --> 00:01:37,920 Speaker 2: just scrape the data on the Internet, same with the 29 00:01:37,920 --> 00:01:41,120 Speaker 2: intellectual property that these companies just take, like the work 30 00:01:41,160 --> 00:01:43,319 Speaker 2: of artists, the work of writers, the work of creators.
31 00:01:43,440 --> 00:01:46,520 Speaker 2: The second feature of empires is that they engage in 32 00:01:46,520 --> 00:01:48,960 Speaker 2: a lot of labor exploitation, and that refers not just 33 00:01:49,000 --> 00:01:52,520 Speaker 2: to the fact that these companies contract a lot 34 00:01:52,560 --> 00:01:55,320 Speaker 2: of workers, often in the Global South or other economically 35 00:01:55,400 --> 00:01:59,320 Speaker 2: vulnerable communities in the Global North, and pay them extraordinarily 36 00:01:59,360 --> 00:02:04,840 Speaker 2: small amounts of money to do extremely exploitative work: data annotation, 37 00:02:04,960 --> 00:02:09,880 Speaker 2: data cleaning, data preparation, content moderation, where workers are left 38 00:02:09,960 --> 00:02:12,680 Speaker 2: with the same level of trauma that social media content 39 00:02:12,720 --> 00:02:15,400 Speaker 2: moderators were left with. It also refers to the fact 40 00:02:15,480 --> 00:02:21,080 Speaker 2: that these companies are inherently building labor-automating technologies. So 41 00:02:21,200 --> 00:02:25,680 Speaker 2: OpenAI's definition of artificial general intelligence is highly autonomous 42 00:02:25,720 --> 00:02:31,360 Speaker 2: systems that outperform humans at most economically valuable work, and 43 00:02:31,440 --> 00:02:35,480 Speaker 2: so they're explicitly saying we're going to automate away the 44 00:02:35,480 --> 00:02:38,440 Speaker 2: things that people usually get paid for, and that in 45 00:02:38,480 --> 00:02:40,919 Speaker 2: and of itself is labor exploitation. So there's labor 46 00:02:40,919 --> 00:02:43,160 Speaker 2: exploitation happening on the way in and happening on the 47 00:02:43,160 --> 00:02:43,800 Speaker 2: way out. 48 00:02:44,080 --> 00:02:47,760 Speaker 1: Karen is not kidding. In the book, she travels all 49 00:02:47,800 --> 00:02:50,600 Speaker 1: over the world and speaks to people who work in 50 00:02:50,680 --> 00:02:54,040 Speaker 1: all levels of the AI industry. And what she found 51 00:02:54,280 --> 00:02:57,080 Speaker 1: was that countries that were undergoing some kind of economic 52 00:02:57,160 --> 00:03:00,079 Speaker 1: crisis were being targeted by the industry as places to 53 00:03:00,200 --> 00:03:03,639 Speaker 1: hire workers who train AI models to understand what kind 54 00:03:03,680 --> 00:03:07,240 Speaker 1: of stuff to allow and what to block. This is 55 00:03:07,320 --> 00:03:11,040 Speaker 1: work that only a human can do. So who's this 56 00:03:11,120 --> 00:03:14,960 Speaker 1: empire for? And where do you and I fit in here? 57 00:03:21,360 --> 00:03:31,120 Speaker 1: I'm afraid... Kaleidoscope and iHeart Podcasts. This is Kill Switch. 58 00:03:32,040 --> 00:03:33,079 Speaker 1: I'm Dexter Thomas. 59 00:03:34,000 --> 00:04:00,160 Speaker 3: I'm sorry, I'm goodbye. 60 00:04:03,160 --> 00:04:06,920 Speaker 1: Getting back to Karen's definitions of colonial empires, the first 61 00:04:07,040 --> 00:04:09,400 Speaker 1: is that they change the rules so that when they 62 00:04:09,440 --> 00:04:12,640 Speaker 1: find something like oil in the ground or art and 63 00:04:12,760 --> 00:04:15,560 Speaker 1: literature on the internet, they can say that it belongs 64 00:04:15,560 --> 00:04:18,320 Speaker 1: to them. The second is that they exploit people for 65 00:04:18,360 --> 00:04:21,600 Speaker 1: their labor. The third is that they control what people 66 00:04:21,640 --> 00:04:24,640 Speaker 1: can know about them.
In the case of the AI industry, 67 00:04:24,920 --> 00:04:28,160 Speaker 1: this can look like companies just finding the top researchers 68 00:04:28,160 --> 00:04:31,360 Speaker 1: who are working at universities, paying them lots of money 69 00:04:31,400 --> 00:04:33,839 Speaker 1: to drop their university job and come to the company, 70 00:04:34,440 --> 00:04:37,839 Speaker 1: and then censoring anything that those researchers write that is 71 00:04:37,880 --> 00:04:42,440 Speaker 1: critical of their technology or of how they treat the environment. Again, 72 00:04:42,720 --> 00:04:46,080 Speaker 1: controlling what the public knows about them. But I think 73 00:04:46,080 --> 00:04:49,400 Speaker 1: it's the fourth element where things start to really get interesting. 74 00:04:49,800 --> 00:04:52,200 Speaker 2: And then the last feature is that empires always have 75 00:04:52,279 --> 00:04:56,600 Speaker 2: this narrative that there are good empires and there are evil empires. 76 00:04:57,040 --> 00:05:00,599 Speaker 2: So they, the good empire, need to be an empire in 77 00:05:00,600 --> 00:05:02,880 Speaker 2: the first place and engage in all this resource extraction 78 00:05:02,920 --> 00:05:06,160 Speaker 2: and labor exploitation, because that is what will make them 79 00:05:06,200 --> 00:05:08,760 Speaker 2: strong enough to beat back the evil empire. So back 80 00:05:08,760 --> 00:05:11,520 Speaker 2: in the day, during European colonialism, the British Empire 81 00:05:11,520 --> 00:05:13,240 Speaker 2: would say that they were better than the Dutch Empire. 82 00:05:13,279 --> 00:05:14,919 Speaker 2: The French Empire would say they were better than the 83 00:05:14,920 --> 00:05:19,640 Speaker 2: British Empire. And part of being the good empire is 84 00:05:19,680 --> 00:05:23,520 Speaker 2: that what they were ultimately doing was civilizing the world. 85 00:05:24,160 --> 00:05:28,200 Speaker 2: They were bringing progress and modernity to everyone, and they 86 00:05:28,560 --> 00:05:33,120 Speaker 2: were giving humanity the chance to enter heaven instead of hell. 87 00:05:33,480 --> 00:05:39,520 Speaker 2: And this is literally the language that AI people use 88 00:05:39,600 --> 00:05:42,320 Speaker 2: these days. They talk about heaven and hell. They talk 89 00:05:42,360 --> 00:05:48,320 Speaker 2: about recreating God and about bringing the next era of 90 00:05:48,360 --> 00:05:50,559 Speaker 2: civilization into being. 91 00:05:51,440 --> 00:05:54,320 Speaker 1: Yeah, and I mean Sam Altman in twenty twenty three going 92 00:05:54,360 --> 00:05:58,640 Speaker 1: before Congress and saying, look, if we don't put a 93 00:05:58,640 --> 00:06:01,440 Speaker 1: whole bunch of resources into AI, guess who will? China. 94 00:06:01,800 --> 00:06:04,440 Speaker 1: Would you like China to win? I don't think you do. 95 00:06:04,760 --> 00:06:07,320 Speaker 1: I know that you're scared of China. Let's play nice 96 00:06:07,360 --> 00:06:10,680 Speaker 1: here, so that, you know, play nice specifically with our industry.
97 00:06:10,720 --> 00:06:13,920 Speaker 1: Also play nice with my company so that we can 98 00:06:13,960 --> 00:06:16,919 Speaker 1: come out on top, which again is very similar to 99 00:06:17,320 --> 00:06:22,320 Speaker 1: how an empire justifies itself, which is, if we don't 100 00:06:22,360 --> 00:06:25,280 Speaker 1: do this, the bad guys will. As you lay 101 00:06:25,360 --> 00:06:29,000 Speaker 1: out in the book, OpenAI really starts out by saying, listen, 102 00:06:29,640 --> 00:06:32,680 Speaker 1: AI is going to happen, AGI is going to happen. 103 00:06:33,120 --> 00:06:36,039 Speaker 1: It needs to be the good guys. We're the good guys. 104 00:06:36,080 --> 00:06:37,920 Speaker 1: And I think that's something that you do in the 105 00:06:37,960 --> 00:06:40,920 Speaker 1: book, is I think we've gotten really used to this 106 00:06:41,080 --> 00:06:44,839 Speaker 1: idea that everything was inevitable, that this all was going to happen, 107 00:06:44,960 --> 00:06:47,200 Speaker 1: and the stuff that happens tomorrow, it was always going 108 00:06:47,279 --> 00:06:49,719 Speaker 1: to happen, and the stuff that happens three years from now, 109 00:06:49,760 --> 00:06:52,720 Speaker 1: it was always going to happen. But you put it 110 00:06:52,760 --> 00:06:56,320 Speaker 1: together that none of this is actually inevitable, and not 111 00:06:56,400 --> 00:06:59,760 Speaker 1: only that, it's that decisions are being made, and actually a 112 00:06:59,760 --> 00:07:03,320 Speaker 1: lot of these decisions are being made by a pretty 113 00:07:03,360 --> 00:07:06,159 Speaker 1: small handful of people in Silicon Valley. 114 00:07:06,520 --> 00:07:09,040 Speaker 2: Absolutely, this is definitely one of the core messages that 115 00:07:09,080 --> 00:07:11,600 Speaker 2: I hope people can take away from the book, which is that 116 00:07:12,120 --> 00:07:14,920 Speaker 2: technology is very much a product of human choices, and 117 00:07:15,000 --> 00:07:18,400 Speaker 2: it just so happens that AI has been the product 118 00:07:18,480 --> 00:07:20,960 Speaker 2: of a very small handful of people's choices, who have 119 00:07:21,120 --> 00:07:25,120 Speaker 2: a very, what I would argue, narrow worldview, a 120 00:07:25,240 --> 00:07:27,920 Speaker 2: narrow view of how the world works now and how 121 00:07:28,000 --> 00:07:31,880 Speaker 2: it should continue to work. And if I had to 122 00:07:31,880 --> 00:07:34,960 Speaker 2: point to one decision that was not inevitable, but that 123 00:07:35,040 --> 00:07:37,640 Speaker 2: OpenAI sort of ushered in and that shaped 124 00:07:37,680 --> 00:07:41,160 Speaker 2: AI today, it is the fact that they decided to 125 00:07:41,360 --> 00:07:47,480 Speaker 2: scale this technology aggressively, and that happened because OpenAI 126 00:07:47,680 --> 00:07:51,640 Speaker 2: early on identified that they had to be number one, 127 00:07:51,760 --> 00:07:53,960 Speaker 2: based on this idea that there's the evil empire out there, and 128 00:07:53,960 --> 00:07:58,800 Speaker 2: we, the good empire, have to race aggressively. And they 129 00:07:58,880 --> 00:08:03,240 Speaker 2: realized that the fastest and easiest way to get 130 00:08:03,280 --> 00:08:06,360 Speaker 2: to number one was by taking existing techniques within the 131 00:08:06,360 --> 00:08:10,040 Speaker 2: AI research field and blowing them up with an unprecedented 132 00:08:10,040 --> 00:08:14,120 Speaker 2: amount of data and an unprecedented amount of computational resources.
133 00:08:14,120 --> 00:08:16,920 Speaker 2: So they realized, if we can build the largest supercomputers 134 00:08:16,920 --> 00:08:21,200 Speaker 2: in the world, we will have the best chance at 135 00:08:21,360 --> 00:08:25,440 Speaker 2: dominating this technology. All of a sudden, all of the 136 00:08:25,480 --> 00:08:28,600 Speaker 2: tech companies saw what OpenAI did and went, we 137 00:08:28,640 --> 00:08:31,280 Speaker 2: want to do that too. And suddenly you see a 138 00:08:31,360 --> 00:08:34,679 Speaker 2: step change in the sheer amount of resources that are 139 00:08:34,679 --> 00:08:37,720 Speaker 2: now going into developing this technology, and it very much 140 00:08:37,760 --> 00:08:42,600 Speaker 2: becomes a scale-at-all-costs paradigm for AI development. 141 00:08:42,960 --> 00:08:48,480 Speaker 2: This very specific scaling decision is what leads to significant 142 00:08:48,480 --> 00:08:51,440 Speaker 2: amounts of labor exploitation and is what leads to significant 143 00:08:51,480 --> 00:08:55,320 Speaker 2: amounts of environmental harm, because that is when you start 144 00:08:55,360 --> 00:08:58,559 Speaker 2: talking about covering the earth with data centers and supercomputers, 145 00:08:58,760 --> 00:09:02,480 Speaker 2: leading to climate harms, health harms, the exacerbation of the fresh 146 00:09:02,520 --> 00:09:05,040 Speaker 2: water crisis, because these data centers need to be cooled 147 00:09:05,080 --> 00:09:09,800 Speaker 2: with fresh water. And also, in order to meet the 148 00:09:09,920 --> 00:09:13,920 Speaker 2: data imperative for the size of these models, that is 149 00:09:13,960 --> 00:09:17,640 Speaker 2: when they start training on polluted data sets. That was 150 00:09:17,679 --> 00:09:20,560 Speaker 2: not a norm at all in AI research. In fact, 151 00:09:20,720 --> 00:09:23,720 Speaker 2: before OpenAI started training on polluted data sets, the 152 00:09:24,000 --> 00:09:29,280 Speaker 2: norm was actually shifting towards really small, extremely curated, and 153 00:09:29,360 --> 00:09:32,720 Speaker 2: clean data sets. A lot of research was coming out 154 00:09:32,720 --> 00:09:37,600 Speaker 2: at the time, pre the ChatGPT moment, where people realized 155 00:09:37,600 --> 00:09:40,520 Speaker 2: you could actually get away with very tiny data sets 156 00:09:41,080 --> 00:09:46,160 Speaker 2: if you prepared them correctly. But then OpenAI shifted 157 00:09:46,280 --> 00:09:50,440 Speaker 2: to, let's use huge data sets, poor-quality data sets, 158 00:09:50,720 --> 00:09:53,120 Speaker 2: and that's why you end up having the need for 159 00:09:53,200 --> 00:09:56,560 Speaker 2: content moderation, because when you're pumping a bunch of gunk 160 00:09:57,040 --> 00:09:59,120 Speaker 2: into the models, then a bunch of gunk comes out, 161 00:09:59,520 --> 00:10:02,920 Speaker 2: and then you have to create content moderation filters to 162 00:10:03,040 --> 00:10:06,320 Speaker 2: strip that gunk out before it reaches the user. And 163 00:10:06,320 --> 00:10:09,480 Speaker 2: that involves humans, and that involves humans who then are 164 00:10:09,600 --> 00:10:12,920 Speaker 2: left with a significant amount of trauma. 165 00:10:13,040 --> 00:10:15,560 Speaker 1: There is a reason that we're talking about Sam Altman 166 00:10:15,679 --> 00:10:18,800 Speaker 1: and OpenAI so much. Straight up.
It's the same 167 00:10:18,920 --> 00:10:22,080 Speaker 1: reason that people say ChatGPT as a generic term 168 00:10:22,120 --> 00:10:25,240 Speaker 1: for AI in general. OpenAI was the first company 169 00:10:25,280 --> 00:10:28,400 Speaker 1: to break through and everyone copied them, and so the 170 00:10:28,440 --> 00:10:32,400 Speaker 1: company culture at OpenAI has influenced how other companies act, 171 00:10:32,640 --> 00:10:36,439 Speaker 1: even Google and Microsoft, and then in turn, that's influenced 172 00:10:36,440 --> 00:10:39,559 Speaker 1: how the public thinks about AI. And really a lot 173 00:10:39,600 --> 00:10:44,040 Speaker 1: of our conversation about AI isn't about computer science. It's 174 00:10:44,040 --> 00:10:49,040 Speaker 1: about culture, or maybe subculture, and that subculture is very 175 00:10:49,080 --> 00:10:52,440 Speaker 1: interested in a fight between the good and evil use 176 00:10:52,559 --> 00:10:56,600 Speaker 1: of AI. And it has this question that keeps coming 177 00:10:56,679 --> 00:11:00,400 Speaker 1: up in the book: what's your P(doom)? 178 00:11:01,000 --> 00:11:03,520 Speaker 1: And before somebody listens to this and thinks, Dexter, what 179 00:11:03,520 --> 00:11:07,520 Speaker 1: did you just ask this person? It's P, parentheses, the 180 00:11:07,520 --> 00:11:08,480 Speaker 1: word doom. 181 00:11:08,640 --> 00:11:10,720 Speaker 2: So I'm not a great fan of this phrase. It 182 00:11:10,760 --> 00:11:15,920 Speaker 2: refers to probability of doom, and that is a shorthand 183 00:11:15,960 --> 00:11:22,880 Speaker 2: within a particular community, an ideological community, that believes that AI 184 00:11:23,520 --> 00:11:26,880 Speaker 2: has the potential to destroy humanity, and so probability of 185 00:11:26,960 --> 00:11:30,640 Speaker 2: doom means the probability that humanity will be destroyed by 186 00:11:30,760 --> 00:11:33,160 Speaker 2: artificial intelligence. The reason why I'm not a fan of 187 00:11:33,200 --> 00:11:36,880 Speaker 2: it is because this ideology is predicated on the idea 188 00:11:37,520 --> 00:11:42,080 Speaker 2: that we can fundamentally recreate human intelligence in computers, which 189 00:11:42,120 --> 00:11:45,880 Speaker 2: is something that is still heavily scientifically debated. And this 190 00:11:45,920 --> 00:11:49,560 Speaker 2: community also believes that once that happens, AI systems will 191 00:11:49,559 --> 00:11:55,559 Speaker 2: develop their own consciousness or self-motivation or self-preservation, 192 00:11:55,720 --> 00:11:58,920 Speaker 2: whatever it is that then makes them quote unquote go 193 00:11:59,200 --> 00:12:02,200 Speaker 2: rogue and be unable to be controlled by the humans 194 00:12:02,280 --> 00:12:05,080 Speaker 2: that originally programmed them. And that's what will lead to 195 00:12:05,280 --> 00:12:09,280 Speaker 2: potential disaster, of them just killing everyone, or consuming all 196 00:12:09,320 --> 00:12:12,200 Speaker 2: the resources such that most people have horrible lives, or 197 00:12:12,320 --> 00:12:14,959 Speaker 2: keeping us as pets. You know, these are all scenarios 198 00:12:14,960 --> 00:12:16,320 Speaker 2: that the community talks about. 199 00:12:16,840 --> 00:12:19,120 Speaker 1: When you say this is what the community talks about.
200 00:12:19,280 --> 00:12:22,280 Speaker 1: There are people who, like, ask other people, like at 201 00:12:22,280 --> 00:12:24,800 Speaker 1: a party or something, yeah, so what's your P(doom)? 202 00:12:24,840 --> 00:12:27,240 Speaker 1: And somebody else says, yes, oh, mine's thirty five. And oh yeah, 203 00:12:27,360 --> 00:12:29,200 Speaker 1: you know what, I'm at about a seventy five recently, 204 00:12:29,200 --> 00:12:30,560 Speaker 1: my P(doom) is seventy five. 205 00:12:30,640 --> 00:12:35,280 Speaker 2: This happens, Yes, this happens. In addition to me fundamentally 206 00:12:35,320 --> 00:12:39,200 Speaker 2: disagreeing with the ideological foundations of probability of doom, I 207 00:12:39,400 --> 00:12:44,600 Speaker 2: also disagree with the veneer of rigor that a phrase 208 00:12:44,679 --> 00:12:48,680 Speaker 2: like P(doom) ascribes to something that is inherently unrigorous. 209 00:12:49,360 --> 00:12:53,760 Speaker 2: So they're putting mathematical values to something that is inherently illogical. 210 00:12:54,240 --> 00:12:56,439 Speaker 2: There is no scientific evidence that we can point to 211 00:12:56,600 --> 00:13:00,199 Speaker 2: that AI will go rogue, that it will do any 212 00:13:00,240 --> 00:13:03,160 Speaker 2: of these things. It really is based on a belief, 213 00:13:03,320 --> 00:13:06,840 Speaker 2: and over the last few years, especially within Silicon Valley, 214 00:13:06,880 --> 00:13:10,240 Speaker 2: there's been the cultivation of what I call quasi-religious 215 00:13:10,280 --> 00:13:14,240 Speaker 2: movements around what AI is, what it will be, and 216 00:13:14,480 --> 00:13:18,280 Speaker 2: ultimately how it will impact people. And we just talked 217 00:13:18,280 --> 00:13:21,079 Speaker 2: about what most people call the Doomer ideology, and then 218 00:13:21,080 --> 00:13:24,359 Speaker 2: there's the Boomer ideology, which also believes that it's fundamentally 219 00:13:24,360 --> 00:13:28,080 Speaker 2: possible to recreate human intelligence in computers, but for them 220 00:13:28,320 --> 00:13:32,240 Speaker 2: this will be a positive civilizational transformation rather than a 221 00:13:32,240 --> 00:13:35,679 Speaker 2: negative one. And both of them do not actually have 222 00:13:35,760 --> 00:13:37,560 Speaker 2: scientific evidence one way or the other. They are talking 223 00:13:37,600 --> 00:13:41,520 Speaker 2: about theoretical scenarios and projecting into the future based on 224 00:13:41,880 --> 00:13:45,000 Speaker 2: their own conceptions of what human intelligence is and the 225 00:13:45,040 --> 00:13:50,000 Speaker 2: idea that human intelligence is fundamentally computable, and they do 226 00:13:50,080 --> 00:13:54,520 Speaker 2: not actually observe the real-world harms or benefits of 227 00:13:54,559 --> 00:13:59,000 Speaker 2: this technology as part of these philosophies, these ideologies. 228 00:13:59,000 --> 00:14:03,720 Speaker 1: Okay, to clarify here, when people talk about boomers and doomers 229 00:14:03,760 --> 00:14:06,800 Speaker 1: in the world of AI: again, a boomer is someone 230 00:14:06,800 --> 00:14:10,840 Speaker 1: who is really optimistic about AI.
For example, believing that 231 00:14:10,920 --> 00:14:14,760 Speaker 1: once we get artificial general intelligence, or basically an AI 232 00:14:14,880 --> 00:14:18,160 Speaker 1: that's overall smarter than humans, that that will lead us 233 00:14:18,240 --> 00:14:22,800 Speaker 1: to a utopia, prosperity, solving disease, fixing the climate, having 234 00:14:22,880 --> 00:14:25,640 Speaker 1: a colony on the moon, food for everybody, all that 235 00:14:25,680 --> 00:14:29,600 Speaker 1: sort of thing. A doomer is pessimistic about it. They 236 00:14:29,640 --> 00:14:32,440 Speaker 1: think that AI will get smart and then suddenly go 237 00:14:32,560 --> 00:14:35,760 Speaker 1: full Skynet and kill everybody. So usually when we 238 00:14:35,800 --> 00:14:38,920 Speaker 1: hear that there are two sides to an argument, we figure, okay, 239 00:14:39,320 --> 00:14:42,360 Speaker 1: the truth is probably somewhere in the middle there. But 240 00:14:42,800 --> 00:14:46,000 Speaker 1: what if both sides are wrong? What if we're arguing 241 00:14:46,040 --> 00:14:50,040 Speaker 1: about a question that doesn't even make sense? And this 242 00:14:50,120 --> 00:14:52,760 Speaker 1: is one of the really interesting things, I think, about 243 00:14:52,960 --> 00:14:55,360 Speaker 1: the process of reading the book. Right, it starts off 244 00:14:55,400 --> 00:14:58,400 Speaker 1: for us as an inter-office drama. There's some backstabbing here. 245 00:14:58,520 --> 00:15:01,040 Speaker 1: There's a group of employees that really believes in this 246 00:15:01,160 --> 00:15:03,400 Speaker 1: leader, that goes over there, and they're arguing, and who's 247 00:15:03,440 --> 00:15:04,800 Speaker 1: going to be the next leader, and do we need 248 00:15:04,840 --> 00:15:06,080 Speaker 1: to get rid of this guy, do we not 249 00:15:06,120 --> 00:15:08,240 Speaker 1: need to get rid of this guy? But you've talked 250 00:15:08,280 --> 00:15:11,440 Speaker 1: to so many people, and as you start to see how 251 00:15:11,480 --> 00:15:14,440 Speaker 1: people are interacting with each other, you realize, wait, this 252 00:15:14,560 --> 00:15:18,240 Speaker 1: isn't just an inter-office drama of personalities coming up against 253 00:15:18,240 --> 00:15:22,640 Speaker 1: each other. This is people who fundamentally believe something fairly deeply, 254 00:15:23,360 --> 00:15:25,960 Speaker 1: and those things are really clashing against each other. 255 00:15:26,200 --> 00:15:29,680 Speaker 2: Yeah. The shocking thing was, as I was interviewing people 256 00:15:29,720 --> 00:15:32,200 Speaker 2: who I identified as part of the Boomer group or 257 00:15:32,240 --> 00:15:34,800 Speaker 2: part of the Doomer group, I mean the Boomers, like, 258 00:15:34,840 --> 00:15:38,680 Speaker 2: their eyes would light up when they were talking about 259 00:15:38,800 --> 00:15:44,720 Speaker 2: this potential future where prosperity was abundant and everything would 260 00:15:44,720 --> 00:15:48,600 Speaker 2: be perfect, and the Doomers, like, I spoke to people 261 00:15:48,600 --> 00:15:52,560 Speaker 2: whose voices were quivering with anxiety. This was a genuine emotional, 262 00:15:53,160 --> 00:15:57,240 Speaker 2: visceral reaction to the idea that, wow, we only had a 263 00:15:57,240 --> 00:16:01,240 Speaker 2: few years left on this earth, potentially, if we did 264 00:16:01,280 --> 00:16:04,880 Speaker 2: not figure out how to get a handle on this 265 00:16:04,960 --> 00:16:08,920 Speaker 2: technology and make it go well instead of badly.
And 266 00:16:09,280 --> 00:16:13,080 Speaker 2: that's when I began to understand more deeply, oh, there 267 00:16:13,120 --> 00:16:18,080 Speaker 2: are so many more layers to all of the headlines 268 00:16:18,120 --> 00:16:21,120 Speaker 2: that we see about this company, about this technology, and 269 00:16:21,600 --> 00:16:24,400 Speaker 2: about the dramatic firing and rehiring of Sam Altman. There's 270 00:16:24,440 --> 00:16:29,400 Speaker 2: so many deeper spiritual layers behind the clashing that's happening 271 00:16:29,440 --> 00:16:31,280 Speaker 2: to shape this technology. 272 00:16:31,080 --> 00:16:34,120 Speaker 1: But really, what is this spiritual fight that we're trying 273 00:16:34,160 --> 00:16:48,240 Speaker 1: to have here? More on that after the break. So 274 00:16:48,520 --> 00:16:52,080 Speaker 1: it's interesting that people sometimes compare Sam Altman, the head 275 00:16:52,120 --> 00:16:54,840 Speaker 1: of OpenAI, to Steve Jobs. You say in the 276 00:16:54,880 --> 00:16:58,320 Speaker 1: book that in some ways he's this generational talent, but 277 00:16:58,640 --> 00:17:00,760 Speaker 1: you also explain that there's a lot of people who 278 00:17:00,800 --> 00:17:04,560 Speaker 1: really just don't trust him, that he's polarizing, because obviously 279 00:17:04,600 --> 00:17:07,080 Speaker 1: there's a way in which, I mean, Steve Jobs is 280 00:17:07,119 --> 00:17:10,239 Speaker 1: also a polarizing figure too. There's people who think he's 281 00:17:10,280 --> 00:17:12,560 Speaker 1: an absolute genius. There's people who think he's a jerk 282 00:17:12,560 --> 00:17:15,560 Speaker 1: who just stole ideas, and there's people who think he's 283 00:17:15,600 --> 00:17:18,560 Speaker 1: a genius and he's a jerk. Yeah, yeah, right. But 284 00:17:18,600 --> 00:17:21,440 Speaker 1: then if you think here about what are the stakes 285 00:17:21,480 --> 00:17:23,560 Speaker 1: of what Steve Jobs was doing? If you want to 286 00:17:23,560 --> 00:17:26,800 Speaker 1: give him all the credit, yeah, what was he doing? 287 00:17:26,800 --> 00:17:30,480 Speaker 1: What's the mission? Make beautiful products, make very easy to 288 00:17:30,560 --> 00:17:34,680 Speaker 1: use computers. Great. What are the stakes for OpenAI? Actually, 289 00:17:34,760 --> 00:17:36,479 Speaker 1: hold on, let me not even try to answer this. 290 00:17:36,800 --> 00:17:39,639 Speaker 1: What is OpenAI's mission? Because that's something that seems 291 00:17:39,640 --> 00:17:43,600 Speaker 1: to change through the book. So what is OpenAI 292 00:17:43,680 --> 00:17:44,400 Speaker 1: trying to do here? 293 00:17:44,920 --> 00:17:49,480 Speaker 2: Yeah? So their mission, which on paper has never changed, 294 00:17:49,760 --> 00:17:53,720 Speaker 2: is to ensure artificial general intelligence benefits all of humanity. 295 00:17:53,840 --> 00:17:57,520 Speaker 2: That's the direct quote. The challenge is that each of 296 00:17:57,560 --> 00:18:02,359 Speaker 2: these components of its mission is extremely ill-defined. I mean, 297 00:18:02,440 --> 00:18:04,680 Speaker 2: there used to be a joke within OpenAI: 298 00:18:04,720 --> 00:18:08,280 Speaker 2: if you ask thirteen researchers at the company what artificial 299 00:18:08,320 --> 00:18:13,080 Speaker 2: general intelligence is, you'll get fifteen answers. And it's pretty 300 00:18:13,119 --> 00:18:16,400 Speaker 2: true for all the components of the mission.
So what's 301 00:18:16,440 --> 00:18:20,040 Speaker 2: happened over the course of the organization's history (it originally 302 00:18:20,040 --> 00:18:22,359 Speaker 2: started as a nonprofit; now it's one of the most 303 00:18:22,359 --> 00:18:27,479 Speaker 2: capitalistic companies in Silicon Valley history) is that different people 304 00:18:27,760 --> 00:18:31,639 Speaker 2: interpret the mission in fundamentally different ways. The Boomers 305 00:18:31,760 --> 00:18:34,639 Speaker 2: interpret benefiting all of humanity as build this technology as 306 00:18:34,640 --> 00:18:37,239 Speaker 2: fast as possible and unleash it onto the world as 307 00:18:37,280 --> 00:18:40,960 Speaker 2: quickly as possible. The Doomers interpret it as build this technology 308 00:18:40,960 --> 00:18:44,520 Speaker 2: as fast as possible and hold onto the technology so 309 00:18:44,600 --> 00:18:47,360 Speaker 2: that we have a lead time to do more research 310 00:18:47,400 --> 00:18:49,840 Speaker 2: on it before bad actors have a chance to do 311 00:18:49,880 --> 00:18:52,480 Speaker 2: research on it as well. And then there are plenty 312 00:18:52,520 --> 00:18:54,640 Speaker 2: of other people who are not necessarily in the Boomer 313 00:18:54,680 --> 00:18:57,920 Speaker 2: or Doomer category, that are just regular tech company 314 00:18:58,160 --> 00:19:00,879 Speaker 2: people, who came from Facebook, who came from Google, 315 00:19:00,920 --> 00:19:05,119 Speaker 2: who came from Microsoft, who are just like, the mission 316 00:19:05,160 --> 00:19:08,240 Speaker 2: of benefiting all of humanity is building products that people 317 00:19:08,240 --> 00:19:11,439 Speaker 2: want to pay for. There's such a vast range of 318 00:19:11,960 --> 00:19:16,320 Speaker 2: interpretations that essentially all the mission does is it just 319 00:19:16,359 --> 00:19:18,520 Speaker 2: allows people to put a mirror up to themselves and 320 00:19:18,560 --> 00:19:21,520 Speaker 2: say, what is it that I want, and to make 321 00:19:21,560 --> 00:19:23,919 Speaker 2: themselves the protagonist of their own story and say, what 322 00:19:23,960 --> 00:19:26,320 Speaker 2: I want is the most beneficial for humanity, so that's 323 00:19:26,880 --> 00:19:30,200 Speaker 2: my interpretation of what OpenAI should be doing. And 324 00:19:30,240 --> 00:19:33,160 Speaker 2: that is part of the reason why OpenAI has 325 00:19:33,359 --> 00:19:37,000 Speaker 2: had so much drama through its history, because no one 326 00:19:37,000 --> 00:19:42,199 Speaker 2: can ever agree on the most fundamental building block of 327 00:19:42,240 --> 00:19:46,320 Speaker 2: the company. No one can agree on what direction they 328 00:19:46,359 --> 00:19:47,640 Speaker 2: should actually be going. 329 00:19:48,000 --> 00:19:51,840 Speaker 1: Speaking of benefiting all of humanity, a couple episodes ago, 330 00:19:52,280 --> 00:19:55,159 Speaker 1: we did an episode about the impact of AI on 331 00:19:55,200 --> 00:19:58,680 Speaker 1: the environment, climate change, things like that. But then there's 332 00:19:58,760 --> 00:20:02,720 Speaker 1: this kind of background, for anybody who's really interested in AI, 333 00:20:02,800 --> 00:20:06,240 Speaker 1: really kind of an AI promoter, there's this counterclaim that, okay, 334 00:20:06,280 --> 00:20:09,680 Speaker 1: any bad stuff we do to the environment, AI will fix it. Yeah, 335 00:20:09,800 --> 00:20:13,760 Speaker 1: AI can fix climate change.
Yeah, you've run up against 336 00:20:13,800 --> 00:20:17,320 Speaker 1: this claim in person more than I have. Have you 337 00:20:17,359 --> 00:20:19,440 Speaker 1: been able to make any sense of that? What's the 338 00:20:19,520 --> 00:20:21,760 Speaker 1: argument here that AI is going to fix climate change, 339 00:20:21,840 --> 00:20:22,880 Speaker 1: or AI can help there? 340 00:20:23,240 --> 00:20:29,840 Speaker 2: The most charitable relaying of the argument is, for people 341 00:20:29,920 --> 00:20:34,520 Speaker 2: who believe that human intelligence is fundamentally computable, and therefore 342 00:20:34,600 --> 00:20:36,879 Speaker 2: if you have enough data and you have enough compute, 343 00:20:36,920 --> 00:20:40,520 Speaker 2: you will inevitably be able to recreate it and create 344 00:20:40,760 --> 00:20:44,560 Speaker 2: so-called artificial general intelligence, then you should be able 345 00:20:44,600 --> 00:20:49,359 Speaker 2: to solve any problem at that point, because the challenge 346 00:20:49,840 --> 00:20:53,919 Speaker 2: with us as humans, our inability to deal with the 347 00:20:53,920 --> 00:20:59,840 Speaker 2: climate crisis, is a lack of cooperation. And digital intelligences, 348 00:20:59,840 --> 00:21:02,320 Speaker 2: they won't have egos, they won't be, like, clashing against 349 00:21:02,320 --> 00:21:04,840 Speaker 2: each other, so the saying goes, and so they won't 350 00:21:04,840 --> 00:21:08,880 Speaker 2: have any issue cooperating, and they also won't have any 351 00:21:09,000 --> 00:21:13,399 Speaker 2: lack of ideas and ability to experiment, develop new energy 352 00:21:13,400 --> 00:21:17,320 Speaker 2: storage solutions, develop new forms of renewable energy, and so 353 00:21:17,400 --> 00:21:19,679 Speaker 2: on and so forth. So that is the kind of 354 00:21:19,720 --> 00:21:25,000 Speaker 2: most charitable version of why artificial general intelligence could fundamentally 355 00:21:25,040 --> 00:21:29,879 Speaker 2: solve climate change. My critique is, again, we don't have 356 00:21:29,960 --> 00:21:33,560 Speaker 2: scientific evidence that AGI will ever come to pass, and 357 00:21:33,640 --> 00:21:39,639 Speaker 2: so we are basically trying to justify current-day vast 358 00:21:39,800 --> 00:21:43,240 Speaker 2: environmental harms and the acceleration of the climate crisis with 359 00:21:43,320 --> 00:21:47,400 Speaker 2: a speculative possibility that it might one day all 360 00:21:47,440 --> 00:21:50,960 Speaker 2: go away. And so we're essentially trying to cover 361 00:21:51,200 --> 00:21:57,359 Speaker 2: up real scientific evidence of present-day reality with a 362 00:21:57,600 --> 00:22:01,240 Speaker 2: spiritual belief that it'll all be okay in the end. 363 00:22:01,640 --> 00:22:04,000 Speaker 1: Listen. I would go further. And I mean my issue 364 00:22:04,000 --> 00:22:07,880 Speaker 1: with this is, even if an AI agent, your chatbot, 365 00:22:07,880 --> 00:22:10,520 Speaker 1: can spit out the answer, hey, here's what to do, 366 00:22:10,520 --> 00:22:13,480 Speaker 1: do this, do this, and do this, and climate change 367 00:22:13,520 --> 00:22:16,600 Speaker 1: will grind to a halt. What if we're not interested 368 00:22:16,640 --> 00:22:18,560 Speaker 1: in listening, you know what I mean? 369 00:22:18,920 --> 00:22:19,520 Speaker 3: Yeah? Right?
370 00:22:19,800 --> 00:22:21,520 Speaker 1: You could get on ChatGPT, you can get on 371 00:22:21,560 --> 00:22:24,720 Speaker 1: Claude, you can get on Gemini or whatever and say, hey, hey, 372 00:22:24,760 --> 00:22:28,320 Speaker 1: AI friend, I'm, I'm feeling really sick. I'm eating all 373 00:22:28,359 --> 00:22:30,240 Speaker 1: of this cake and I really don't feel good. What 374 00:22:30,280 --> 00:22:32,840 Speaker 1: should I do? And it'll say, hey, buddy, stop eating cake. 375 00:22:33,240 --> 00:22:34,359 Speaker 1: I'm gonna keep eating cake. 376 00:22:34,440 --> 00:22:34,520 Speaker 3: Yo. 377 00:22:34,960 --> 00:22:37,720 Speaker 1: You don't actually have to listen. The computer can't make 378 00:22:37,760 --> 00:22:40,800 Speaker 1: you listen, even if it has the answer. And here 379 00:22:40,840 --> 00:22:44,000 Speaker 1: in the States particularly, we've got the data. There are 380 00:22:44,040 --> 00:22:46,800 Speaker 1: some ideas on things we could do to reduce the 381 00:22:46,880 --> 00:22:48,280 Speaker 1: impact on the environment. 382 00:22:48,400 --> 00:22:51,320 Speaker 2: We're just not doing them. Exactly. I think this is 383 00:22:51,320 --> 00:22:53,600 Speaker 2: one of the things that has always broken down in 384 00:22:53,640 --> 00:22:56,760 Speaker 2: these theoretical future AGI arguments of whether it's going to 385 00:22:56,800 --> 00:23:01,240 Speaker 2: be fundamentally positively or negatively transformative. There's never a clear 386 00:23:01,320 --> 00:23:05,280 Speaker 2: articulation of how it's operationalized in the physical world. Are 387 00:23:05,359 --> 00:23:07,960 Speaker 2: they going to have robot bodies, are they going to 388 00:23:07,960 --> 00:23:12,639 Speaker 2: be mining the earth and developing and cultivating these new 389 00:23:12,760 --> 00:23:17,400 Speaker 2: energy storage solutions, or are they directing humans to do that, 390 00:23:18,040 --> 00:23:20,760 Speaker 2: at which point we still run into all of the 391 00:23:20,800 --> 00:23:24,680 Speaker 2: same problems that we've always had, right, which is humans 392 00:23:24,760 --> 00:23:27,320 Speaker 2: don't listen. So I think you're hitting upon one of 393 00:23:27,400 --> 00:23:32,119 Speaker 2: the core weaknesses of just many of the arguments in 394 00:23:32,160 --> 00:23:37,440 Speaker 2: this community, which is they do not actually acknowledge the social, political, 395 00:23:37,520 --> 00:23:44,760 Speaker 2: and economic aspects of the way that technology ultimately impacts society. 396 00:23:45,160 --> 00:23:47,960 Speaker 2: Like, it's not just you have a technical capability and 397 00:23:48,000 --> 00:23:52,280 Speaker 2: everything is suddenly solved. There's so many more layers to 398 00:23:52,800 --> 00:23:56,120 Speaker 2: how the capability then translates into real-world impact. 399 00:23:58,000 --> 00:24:01,919 Speaker 1: Yeah, I'm sympathetic to some of the ideas that are 400 00:24:01,960 --> 00:24:05,399 Speaker 1: put forth, like AI could cure diseases. Hey, look, you 401 00:24:05,400 --> 00:24:07,639 Speaker 1: can mix these three chemicals and it gives you a 402 00:24:07,680 --> 00:24:10,879 Speaker 1: pill and it's two bucks. Give it to everybody. Okay, 403 00:24:11,160 --> 00:24:14,000 Speaker 1: I'll buy that. And also, healthcare obviously has a huge 404 00:24:14,040 --> 00:24:18,280 Speaker 1: social component to it. But yeah, something that truly is 405 00:24:18,359 --> 00:24:23,280 Speaker 1: linked to our behavior.
I struggle to see how exactly 406 00:24:23,320 --> 00:24:24,200 Speaker 1: that's going to work. 407 00:24:24,960 --> 00:24:28,600 Speaker 2: What's interesting about the drug discovery or curing diseases thing, too, 408 00:24:29,200 --> 00:24:33,640 Speaker 2: is there's a word game that people within the AI 409 00:24:33,720 --> 00:24:37,240 Speaker 2: world play when they talk about how 410 00:24:37,240 --> 00:24:41,119 Speaker 2: AGI is going to cure cancer. What's confusing is that 411 00:24:41,160 --> 00:24:44,760 Speaker 2: there are plenty of AI technologies that can help us 412 00:24:44,880 --> 00:24:48,320 Speaker 2: advance and tackle that challenge, but they have nothing to 413 00:24:48,359 --> 00:24:51,119 Speaker 2: do with the type of AI that OpenAI or 414 00:24:51,160 --> 00:24:53,760 Speaker 2: the rest of these Silicon Valley companies are building. So 415 00:24:53,800 --> 00:24:56,840 Speaker 2: they play this game where they just say AI, and 416 00:24:57,000 --> 00:25:00,720 Speaker 2: AI is a huge umbrella term. It's like the word transportation, 417 00:25:00,960 --> 00:25:03,880 Speaker 2: like you could be talking about a bicycle or a bus, 418 00:25:04,000 --> 00:25:06,600 Speaker 2: or a gas-guzzling truck or a rocket. These are all 419 00:25:06,640 --> 00:25:12,000 Speaker 2: different forms of transportation, and they're building rockets, but they're 420 00:25:12,080 --> 00:25:15,959 Speaker 2: pointing to the benefits of bicycles and public transport. And 421 00:25:16,040 --> 00:25:19,360 Speaker 2: so when these companies say AI is going to help 422 00:25:19,480 --> 00:25:23,479 Speaker 2: us cure diseases, there are plenty of AI technologies that 423 00:25:23,640 --> 00:25:26,840 Speaker 2: literally have no relation to what they do that are 424 00:25:26,960 --> 00:25:30,880 Speaker 2: making positive impacts on healthcare. There are machine learning models 425 00:25:30,920 --> 00:25:34,200 Speaker 2: that can be trained on MRI scans to identify cancer, 426 00:25:34,200 --> 00:25:35,800 Speaker 2: and there have been studies that show that if you 427 00:25:35,840 --> 00:25:39,040 Speaker 2: give these tools to trained radiologists, they will be able 428 00:25:39,080 --> 00:25:43,199 Speaker 2: to identify cancer far earlier with a much higher accuracy, 429 00:25:43,760 --> 00:25:47,600 Speaker 2: such that patients can actually intervene early on and have 430 00:25:47,680 --> 00:25:50,919 Speaker 2: a much higher likelihood of kicking that disease. There's 431 00:25:50,960 --> 00:25:53,960 Speaker 2: also, last year, the twenty twenty four Nobel Prize in 432 00:25:54,080 --> 00:25:57,960 Speaker 2: Chemistry was awarded to a team at DeepMind that, pre 433 00:25:58,240 --> 00:26:01,880 Speaker 2: them joining this large language model race, developed this 434 00:26:01,960 --> 00:26:04,920 Speaker 2: tool called AlphaFold, and it was able to predict 435 00:26:04,920 --> 00:26:08,080 Speaker 2: with extremely high accuracy all of the protein structures of 436 00:26:08,119 --> 00:26:10,679 Speaker 2: different amino acids, and that in and of itself is 437 00:26:10,720 --> 00:26:14,160 Speaker 2: going to be one extremely critical building block for understanding 438 00:26:14,200 --> 00:26:18,919 Speaker 2: disease and for discovering new drugs. That was trained on 439 00:26:19,160 --> 00:26:24,160 Speaker 2: extremely clean, highly curated data sets.
It was just amino 440 00:26:24,240 --> 00:26:27,800 Speaker 2: acids and protein folding structures. That was a very task-specific 441 00:26:27,920 --> 00:26:32,040 Speaker 2: AI model that then was able to do incredible 442 00:26:32,080 --> 00:26:36,760 Speaker 2: things because fundamentally it was a well-scoped, highly computational problem, 443 00:26:36,880 --> 00:26:39,120 Speaker 2: and that's what AI is good at. You throw AI 444 00:26:39,200 --> 00:26:42,000 Speaker 2: at a highly computational problem and it will compute. But 445 00:26:42,080 --> 00:26:44,480 Speaker 2: what these companies are doing, and the critique that I 446 00:26:44,560 --> 00:26:48,080 Speaker 2: have, is they pretend that they're trying to build everything machines, 447 00:26:48,440 --> 00:26:51,159 Speaker 2: and they're trying to do that by then scraping the 448 00:26:51,320 --> 00:26:54,800 Speaker 2: entirety of the English-language Internet, with just a boatload 449 00:26:54,880 --> 00:26:58,359 Speaker 2: of polluted data that has nothing to do with healthcare. 450 00:26:58,720 --> 00:27:00,960 Speaker 2: It's just like people throwing curse words at each 451 00:27:00,960 --> 00:27:03,960 Speaker 2: other online, and then they're like, this is going to 452 00:27:04,080 --> 00:27:11,399 Speaker 2: cure cancer, and it's like, what are you smoking? Like, 453 00:27:11,640 --> 00:27:15,080 Speaker 2: how is that gonna get us to, like, what? Like, 454 00:27:15,160 --> 00:27:19,440 Speaker 2: we already have these other AI technologies that are making 455 00:27:19,520 --> 00:27:23,359 Speaker 2: these advancements, that are being trained on actual, high-quality 456 00:27:23,480 --> 00:27:26,240 Speaker 2: data sets. And then you're gonna pump a bunch of 457 00:27:26,280 --> 00:27:31,280 Speaker 2: gunk into this large, nebulous, large language model that really 458 00:27:31,320 --> 00:27:35,480 Speaker 2: has very little articulated purpose and say it's going. 459 00:27:35,400 --> 00:27:37,439 Speaker 1: To do the same thing, and pump gunk into 460 00:27:37,480 --> 00:27:38,399 Speaker 1: the environment. 461 00:27:38,359 --> 00:27:40,640 Speaker 2: And pump gunk into the environment and exploit a lot 462 00:27:40,640 --> 00:27:44,320 Speaker 2: of labor and, like, potentially automate away a ton 463 00:27:44,359 --> 00:27:45,600 Speaker 2: of jobs in the process. 464 00:27:46,080 --> 00:27:48,920 Speaker 1: So, yeah, there are people in Silicon Valley who love 465 00:27:48,960 --> 00:27:51,760 Speaker 1: to pitch AI as a silver bullet for everything from 466 00:27:51,800 --> 00:27:55,520 Speaker 1: cancer to climate change. But of course the reality is 467 00:27:55,560 --> 00:27:58,520 Speaker 1: not that simple. What this company or that company is 468 00:27:58,520 --> 00:28:02,440 Speaker 1: building could really solve some of these problems. But as 469 00:28:02,480 --> 00:28:05,160 Speaker 1: they're doing it, they're also doing things that you might 470 00:28:05,200 --> 00:28:09,320 Speaker 1: have read about before, not in Forbes or in Wired magazine, 471 00:28:09,440 --> 00:28:12,959 Speaker 1: but in your middle school history textbook. Companies in the 472 00:28:12,960 --> 00:28:15,760 Speaker 1: industry are starting to act a lot more like empires. 473 00:28:16,440 --> 00:28:21,520 Speaker 1: Empires that extract resources, expand rapidly, and then leave communities 474 00:28:21,560 --> 00:28:24,560 Speaker 1: to deal with the consequences.
We'll get into some of 475 00:28:24,600 --> 00:28:40,840 Speaker 1: those consequences after the break. Some of the consequences of 476 00:28:40,840 --> 00:28:44,320 Speaker 1: AI's colonialism you might already be able to imagine, like 477 00:28:44,400 --> 00:28:46,400 Speaker 1: when Google went to a part of Chile that was 478 00:28:46,440 --> 00:28:49,400 Speaker 1: in the middle of a huge water crisis and tried 479 00:28:49,400 --> 00:28:52,080 Speaker 1: to build a massive data center. If you think back 480 00:28:52,120 --> 00:28:54,720 Speaker 1: to our episode on AI and the Environment, you know 481 00:28:54,760 --> 00:28:58,160 Speaker 1: that data centers use up a ton of water. That 482 00:28:58,480 --> 00:29:02,920 Speaker 1: didn't seem to matter to Google. But it's not just environmental. 483 00:29:03,600 --> 00:29:06,440 Speaker 1: In the book, Karen also talks about how the industry 484 00:29:06,640 --> 00:29:09,240 Speaker 1: always seems to go to countries undergoing some kind of 485 00:29:09,280 --> 00:29:14,120 Speaker 1: economic crisis and then finds workers there to exploit. Karen 486 00:29:14,160 --> 00:29:17,680 Speaker 1: found that OpenAI used an intermediary company to contract 487 00:29:17,680 --> 00:29:20,000 Speaker 1: workers in Kenya for less than two dollars an hour 488 00:29:20,280 --> 00:29:24,400 Speaker 1: to build automated content moderation filters. For these filters to work, 489 00:29:24,520 --> 00:29:28,440 Speaker 1: you need humans to catalog sometimes hundreds of thousands of 490 00:29:28,440 --> 00:29:32,200 Speaker 1: examples of things like the graphic content that OpenAI wanted 491 00:29:32,200 --> 00:29:35,320 Speaker 1: to prevent its model from generating. One worker on the 492 00:29:35,440 --> 00:29:39,880 Speaker 1: quote sexual content team had to review fifteen thousand pieces 493 00:29:39,920 --> 00:29:43,120 Speaker 1: of sexual content per month. And we're not talking just 494 00:29:43,320 --> 00:29:48,160 Speaker 1: regular porn; I also mean child sexual abuse material. Some 495 00:29:48,280 --> 00:29:51,840 Speaker 1: of this material was even generated by OpenAI's own software, 496 00:29:51,920 --> 00:29:54,560 Speaker 1: so that again they could check if the filters were 497 00:29:54,600 --> 00:29:58,120 Speaker 1: catching the bad stuff. But for the workers who were manually 498 00:29:58,160 --> 00:30:01,760 Speaker 1: sifting through this stuff day after day, it caused some 499 00:30:01,880 --> 00:30:08,480 Speaker 1: serious mental health consequences. Environmental exploitation, worker exploitation. This sounds like 500 00:30:08,600 --> 00:30:12,560 Speaker 1: straight-up empire behavior, and it feels familiar. You could 501 00:30:12,560 --> 00:30:15,320 Speaker 1: take a map of the colonial powers and the colonies, 502 00:30:15,520 --> 00:30:18,880 Speaker 1: like India, Latin America, parts of West Africa, from a 503 00:30:18,920 --> 00:30:21,640 Speaker 1: couple centuries ago and lay it over a map of 504 00:30:21,640 --> 00:30:25,240 Speaker 1: where these AI companies are exploiting now, and it would 505 00:30:25,320 --> 00:30:29,200 Speaker 1: line up almost exactly. So we've read about this stuff 506 00:30:29,200 --> 00:30:32,480 Speaker 1: in the past, in school. So now what? This brings 507 00:30:32,520 --> 00:30:33,760 Speaker 1: me to a question,
though I don't know if this 508 00:30:33,800 --> 00:30:36,960 Speaker 1: is pushing back, or maybe I'm just a little bit 509 00:30:37,120 --> 00:30:40,800 Speaker 1: pessimistic here. But if you talk about colonialism, you know, 510 00:30:40,920 --> 00:30:45,200 Speaker 1: empires, imperialism, those words mean different things in different places. Absolutely. 511 00:30:45,440 --> 00:30:47,760 Speaker 1: I think if you go to a place that was colonized, 512 00:30:47,920 --> 00:30:52,000 Speaker 1: it's going to bring back memories of slavery, memories of exploitation, 513 00:30:52,080 --> 00:30:56,840 Speaker 1: memories of poverty, starvation, generations of wrecked governments, things like that. 514 00:30:57,280 --> 00:31:01,120 Speaker 1: For an American, we think colonial is a furniture style, 515 00:31:01,480 --> 00:31:02,840 Speaker 1: you know what I mean. We think it's a, we 516 00:31:02,840 --> 00:31:05,240 Speaker 1: think it's a cool way to build your house. Oh God, 517 00:31:05,440 --> 00:31:08,600 Speaker 1: let's be real. That is so real. Let's be real. 518 00:31:08,680 --> 00:31:11,000 Speaker 1: And so I think about this, and I think about 519 00:31:11,040 --> 00:31:16,160 Speaker 1: you explaining the imperialism of AI companies, and I think one 520 00:31:16,160 --> 00:31:18,720 Speaker 1: of the features of an empire, one of the features 521 00:31:18,760 --> 00:31:22,840 Speaker 1: of a colonialist empire, is that not only the leaders, but 522 00:31:23,000 --> 00:31:26,080 Speaker 1: the people who live there also start to believe that 523 00:31:26,080 --> 00:31:28,880 Speaker 1: that's just the natural way of things, that, oh, there 524 00:31:28,880 --> 00:31:31,440 Speaker 1: are people in Kenya who are being forced to see 525 00:31:31,480 --> 00:31:38,640 Speaker 1: just absolutely horrific carnage imagery and getting paid cents, and oh, well, 526 00:31:38,640 --> 00:31:42,000 Speaker 1: that's just the way things go. That, hey, third world country, 527 00:31:42,120 --> 00:31:44,960 Speaker 1: Global South, that's what happens there and it doesn't happen here. 528 00:31:45,360 --> 00:31:48,120 Speaker 1: Let's just be real. How do you make a reader 529 00:31:48,160 --> 00:31:51,040 Speaker 1: who is in the empire understand that? 530 00:31:51,480 --> 00:31:55,400 Speaker 2: You know, I went on book tour. I stopped in Seattle, 531 00:31:55,400 --> 00:31:57,760 Speaker 2: and I had this wonderful opportunity to talk to Ted Chiang, 532 00:31:58,280 --> 00:32:02,480 Speaker 2: one of the most decorated science fiction writers ever, and 533 00:32:03,120 --> 00:32:05,680 Speaker 2: we were talking about this exact question, and he told me, 534 00:32:05,880 --> 00:32:08,400 Speaker 2: and I think he's exactly right. He was like, your 535 00:32:08,480 --> 00:32:10,520 Speaker 2: job is not to convince the people that are already 536 00:32:10,560 --> 00:32:16,719 Speaker 2: convinced of something completely ideologically opposite to you. If someone 537 00:32:16,760 --> 00:32:19,720 Speaker 2: already is convinced that there should be a hierarchy in 538 00:32:19,760 --> 00:32:22,600 Speaker 2: the world and that certain people don't deserve fundamental, basic 539 00:32:22,720 --> 00:32:26,760 Speaker 2: human rights, like, do not waste your time convincing them.
540 00:32:27,200 --> 00:32:29,640 Speaker 2: Who you're trying to convince is the broader public and 541 00:32:29,680 --> 00:32:32,560 Speaker 2: people who just don't really know how to think about 542 00:32:32,560 --> 00:32:37,840 Speaker 2: this technology and don't really know how to interface with it. Ultimately, 543 00:32:37,880 --> 00:32:40,520 Speaker 2: all empires are made to feel inevitable, but all empires 544 00:32:40,600 --> 00:32:45,240 Speaker 2: fall, because the majority of people under subjugation end up 545 00:32:45,320 --> 00:32:48,280 Speaker 2: rising up and protesting the empire, and those are the 546 00:32:48,320 --> 00:32:50,760 Speaker 2: people that you should be speaking to. Another feature of 547 00:32:50,800 --> 00:32:54,000 Speaker 2: empire-building is there are people who are richly rewarded 548 00:32:54,040 --> 00:32:58,320 Speaker 2: by empire, usually the people that are most powerful politically 549 00:32:58,320 --> 00:33:00,520 Speaker 2: and economically. Those are the people that are rewarded, and 550 00:33:00,560 --> 00:33:03,440 Speaker 2: that's why empires are able to perpetuate so long in 551 00:33:03,480 --> 00:33:06,680 Speaker 2: the first place, because the people who are totally coddled 552 00:33:06,720 --> 00:33:09,640 Speaker 2: into believing that this is a great state of affairs 553 00:33:09,960 --> 00:33:13,000 Speaker 2: are the people that then have the most access to 554 00:33:13,040 --> 00:33:15,840 Speaker 2: all of the levers to maintain the status quo. And 555 00:33:15,880 --> 00:33:17,720 Speaker 2: so he was like, don't talk to those people. Those 556 00:33:17,760 --> 00:33:20,800 Speaker 2: are not the people that you should focus on, because 557 00:33:20,800 --> 00:33:24,160 Speaker 2: they're never going to change their minds, like, both philosophically, 558 00:33:24,160 --> 00:33:29,320 Speaker 2: because they don't see a problem with an exploitative, extractive worldview, 559 00:33:29,880 --> 00:33:32,920 Speaker 2: but also because all of the evidence that they are 560 00:33:32,960 --> 00:33:37,040 Speaker 2: exposed to reinforces the idea that things are just fine, 561 00:33:37,560 --> 00:33:41,800 Speaker 2: and so focus on everyone else, focus on rallying the 562 00:33:41,840 --> 00:33:45,760 Speaker 2: majority of the world around this idea. When I tell 563 00:33:45,840 --> 00:33:49,720 Speaker 2: people this empire metaphor outside the US, no one has 564 00:33:49,840 --> 00:33:53,640 Speaker 2: ever questioned me. When I was talking with Chilean water 565 00:33:53,720 --> 00:33:57,560 Speaker 2: activists for my book about the expansion of data centers 566 00:33:57,560 --> 00:34:01,160 Speaker 2: in their community, they were the first to bring up the 567 00:34:01,360 --> 00:34:05,680 Speaker 2: relation to their history, their colonial history. Really? Yeah, I 568 00:34:05,720 --> 00:34:09,320 Speaker 2: didn't actually even say it. They were like, what's happening 569 00:34:09,360 --> 00:34:14,080 Speaker 2: now is what's happened to us for centuries, first at 570 00:34:14,080 --> 00:34:17,040 Speaker 2: the hands of Spanish colonizers, then at the hands of 571 00:34:17,040 --> 00:34:21,560 Speaker 2: American multinationals, and now at the hands of new American multinationals.
572 00:34:21,960 --> 00:34:25,440 Speaker 2: And to your point that, like, every different country experiences 573 00:34:25,440 --> 00:34:29,200 Speaker 2: colonialism differently, I mean, it was remarkable how they still 574 00:34:29,239 --> 00:34:32,160 Speaker 2: experience it differently, but in exactly the same way as before. 575 00:34:32,800 --> 00:34:36,400 Speaker 2: So in Kenya they were like, there is a connection 576 00:34:36,640 --> 00:34:41,680 Speaker 2: between slavery and labor exploitation happening now with the AI industry. 577 00:34:41,760 --> 00:34:45,040 Speaker 2: And in Chile, they are a country that has been 578 00:34:45,160 --> 00:34:48,480 Speaker 2: extracted for their natural resources again and again. And the 579 00:34:48,640 --> 00:34:54,080 Speaker 2: term extractivism is an anti-colonial term that refers 580 00:34:54,120 --> 00:34:56,880 Speaker 2: to the idea that massive amounts of resources are extracted 581 00:34:56,920 --> 00:35:00,000 Speaker 2: from one place and used to benefit a faraway place, 582 00:35:00,800 --> 00:35:03,080 Speaker 2: while no benefit goes to the local community. That was originally 583 00:35:03,160 --> 00:35:06,400 Speaker 2: about the Spanish and Portuguese, a term coined by scholars in 584 00:35:06,440 --> 00:35:12,480 Speaker 2: Latin America, extractivismo in Spanish or extrativismo in Portuguese, and 585 00:35:13,080 --> 00:35:16,200 Speaker 2: they were like, this is extractivism, right? We've seen this. 586 00:35:16,760 --> 00:35:21,239 Speaker 2: There's a connection between the Spanish colonial extractivism and the 587 00:35:21,320 --> 00:35:26,279 Speaker 2: AI industry's extractivism. So it literally is a replaying? Yeah, 588 00:35:26,320 --> 00:35:29,240 Speaker 2: and people who live that history and understand that history, 589 00:35:29,680 --> 00:35:32,280 Speaker 2: there's no leap that they have to make to connect 590 00:35:32,320 --> 00:35:32,640 Speaker 2: the two. 591 00:35:32,880 --> 00:35:36,160 Speaker 1: That makes a lot of sense. And to be clear, 592 00:35:36,360 --> 00:35:38,960 Speaker 1: this isn't just happening in what we consider, quote, the 593 00:35:39,000 --> 00:35:43,000 Speaker 1: Global South. It happens in the US too. That example 594 00:35:43,040 --> 00:35:45,240 Speaker 1: of Google trying to build a data center in Chile 595 00:35:45,560 --> 00:35:49,680 Speaker 1: despite the residents not wanting it might sound familiar. In 596 00:35:49,719 --> 00:35:52,440 Speaker 1: our episode on AI in the Environment, we talked a 597 00:35:52,480 --> 00:35:56,480 Speaker 1: little bit about Memphis, where Elon Musk's company xAI has 598 00:35:56,480 --> 00:36:00,359 Speaker 1: been running gas turbines without permits for months now. It's 599 00:36:00,440 --> 00:36:03,359 Speaker 1: making people there sick. Just a couple of weeks ago, 600 00:36:03,920 --> 00:36:06,880 Speaker 1: More Perfect Union published a deeper dive on what's happening 601 00:36:06,880 --> 00:36:10,719 Speaker 1: in Memphis that gives some more details, and xAI has 602 00:36:10,880 --> 00:36:14,880 Speaker 1: already started building a second data center. So in Chile, 603 00:36:15,120 --> 00:36:19,080 Speaker 1: the local people organized and successfully stopped Google from building 604 00:36:19,080 --> 00:36:23,080 Speaker 1: that data center. In Memphis, that fight is still ongoing.
605 00:36:23,640 --> 00:36:26,799 Speaker 1: But even if they're able to shut those turbines down tomorrow, 606 00:36:27,320 --> 00:36:29,839 Speaker 1: what happens to the people who were already hurt by 607 00:36:29,840 --> 00:36:33,279 Speaker 1: what xAI has done to the environment there? It's just 608 00:36:33,400 --> 00:36:36,719 Speaker 1: another example of what Karen Howe describes as the rise 609 00:36:36,760 --> 00:36:42,520 Speaker 1: of AI empires: taking resources, dodging accountability, and leaving communities 610 00:36:42,560 --> 00:36:46,480 Speaker 1: to deal with the consequences. You talk about some potential 611 00:36:46,520 --> 00:36:48,560 Speaker 1: alternative ways that this is going to be used or 612 00:36:48,600 --> 00:36:51,000 Speaker 1: this could be used, and so you're not necessarily talking 613 00:36:51,040 --> 00:36:54,680 Speaker 1: about an "everybody turn off the computer" type thing. But yeah, 614 00:36:54,719 --> 00:36:57,720 Speaker 1: just tying in with the title, where's the kill switch 615 00:36:57,760 --> 00:36:59,879 Speaker 1: on this thing? And what does hitting the kill switch mean? 616 00:37:00,800 --> 00:37:03,279 Speaker 2: So to me, hitting the kill switch in this context 617 00:37:03,800 --> 00:37:08,440 Speaker 2: is killing the imperial conception of AI development. Not killing 618 00:37:08,440 --> 00:37:11,120 Speaker 2: all AI development, but the imperial one where people at 619 00:37:11,120 --> 00:37:12,920 Speaker 2: the top can just say this is how it's going 620 00:37:12,960 --> 00:37:16,239 Speaker 2: to go and then consume the entire world's resources 621 00:37:16,600 --> 00:37:19,360 Speaker 2: in pursuit of an amorphous vision of progress. The 622 00:37:19,480 --> 00:37:21,960 Speaker 2: thing that I want to see, and I think how 623 00:37:21,960 --> 00:37:25,120 Speaker 2: we can get there: I want to see broadly beneficial, 624 00:37:25,520 --> 00:37:30,799 Speaker 2: task-specific AI models that are developed and deployed through 625 00:37:30,840 --> 00:37:35,160 Speaker 2: the participation of communities. And when you think about the 626 00:37:35,160 --> 00:37:37,920 Speaker 2: AI supply chain, which I try to make visible in 627 00:37:37,960 --> 00:37:41,720 Speaker 2: my book, there's data, there's land, there's energy, there's water, 628 00:37:41,760 --> 00:37:45,200 Speaker 2: there's labor. There are spaces that companies need access to 629 00:37:45,239 --> 00:37:48,400 Speaker 2: deploy their technologies, like schools and hospitals and government agencies. 630 00:37:48,520 --> 00:37:50,319 Speaker 2: Silicon Valley has done a really great job over the 631 00:37:50,360 --> 00:37:53,520 Speaker 2: last ten years of making people feel like these resources 632 00:37:53,520 --> 00:37:57,800 Speaker 2: and these spaces are actually owned by Silicon Valley, but no, 633 00:37:58,080 --> 00:38:01,319 Speaker 2: they're owned by us. That data is your data, it's 634 00:38:01,360 --> 00:38:04,000 Speaker 2: my data. That intellectual property is the intellectual property 635 00:38:04,000 --> 00:38:08,680 Speaker 2: of artists, writers, creators. The schools, those are collectively owned by 636 00:38:08,680 --> 00:38:12,120 Speaker 2: teachers and students. Those hospitals are collectively owned by doctors, nurses, 637 00:38:12,120 --> 00:38:16,040 Speaker 2: and patients.
And we're already seeing movements around the world 638 00:38:16,280 --> 00:38:19,960 Speaker 2: of people actually fighting back and reclaiming ownership of those 639 00:38:20,080 --> 00:38:22,440 Speaker 2: resources and those spaces. So artists and writers that are 640 00:38:22,480 --> 00:38:24,560 Speaker 2: suing these companies saying no, you can't just take our 641 00:38:24,560 --> 00:38:27,920 Speaker 2: intellectual property. The Chilean water activists that I write about 642 00:38:27,920 --> 00:38:29,879 Speaker 2: in my book who said no, you can't just take 643 00:38:29,880 --> 00:38:33,319 Speaker 2: our fresh water, and successfully stalled Google from building a 644 00:38:33,360 --> 00:38:37,439 Speaker 2: data center within their community for five years now. Many, 645 00:38:37,440 --> 00:38:40,040 Speaker 2: many movements around the world that are replicating that to 646 00:38:40,120 --> 00:38:42,880 Speaker 2: push back against data centers. Teachers and students that are 647 00:38:42,920 --> 00:38:45,879 Speaker 2: having public debates now saying do we actually want AI 648 00:38:45,920 --> 00:38:49,760 Speaker 2: in our schools, and if so, under what terms? Because 649 00:38:49,920 --> 00:38:53,160 Speaker 2: we want it to foster curiosity and critical thinking, 650 00:38:53,360 --> 00:38:56,839 Speaker 2: not just totally erode it away. And if we can have 651 00:38:57,360 --> 00:39:01,600 Speaker 2: those conversations one hundred thousand times over and start moving 652 00:39:02,200 --> 00:39:07,680 Speaker 2: more towards task-specific AI technologies, we will get to 653 00:39:07,800 --> 00:39:11,720 Speaker 2: a place where we do have AI that is broadly 654 00:39:11,800 --> 00:39:15,839 Speaker 2: beneficial and actually works for the people rather than us 655 00:39:15,920 --> 00:39:16,760 Speaker 2: working for AI. 656 00:39:17,560 --> 00:39:19,480 Speaker 1: Thank you. Thank you so much. Hope to be able 657 00:39:19,480 --> 00:39:20,240 Speaker 1: to chat with you again. 658 00:39:20,320 --> 00:39:21,440 Speaker 2: Thank you so much, Dexter. 659 00:39:26,600 --> 00:39:28,280 Speaker 1: All right, so this is the part of the episode 660 00:39:28,280 --> 00:39:30,759 Speaker 1: where I usually have some kind of closing thoughts, you know, 661 00:39:30,880 --> 00:39:33,160 Speaker 1: add my little two cents, three cents, go on a 662 00:39:33,160 --> 00:39:35,640 Speaker 1: little bit. But this time I'm going to keep it 663 00:39:35,760 --> 00:39:43,719 Speaker 1: to four words: Go read Karen's book. Seriously, just go 664 00:39:43,800 --> 00:39:46,480 Speaker 1: read Karen's book. One thing we didn't talk about, and 665 00:39:46,640 --> 00:39:48,279 Speaker 1: one thing I really do like about the book that 666 00:39:48,320 --> 00:39:50,920 Speaker 1: I should say, is that you could pick it up 667 00:39:50,960 --> 00:39:54,120 Speaker 1: with absolutely no idea how AI works and you'll not 668 00:39:54,200 --> 00:39:57,239 Speaker 1: only understand the societal implications that, you know, we kind 669 00:39:57,239 --> 00:39:59,920 Speaker 1: of talked about in this episode, but you'll come away 670 00:40:00,160 --> 00:40:03,759 Speaker 1: with a better understanding of the technology of AI than 671 00:40:03,800 --> 00:40:06,359 Speaker 1: most people, even if you're one of those people who 672 00:40:06,400 --> 00:40:08,440 Speaker 1: says you don't like computers, you hate computers, you don't 673 00:40:08,480 --> 00:40:11,320 Speaker 1: understand them, for real.
It breaks it down in a 674 00:40:11,360 --> 00:40:14,080 Speaker 1: way that I've never seen done before. So highly recommend it. 675 00:40:14,520 --> 00:40:16,640 Speaker 1: And if you've already read the book or you just 676 00:40:16,719 --> 00:40:18,640 Speaker 1: want to talk about something else, let us know what 677 00:40:18,680 --> 00:40:22,440 Speaker 1: you think. We're on Instagram at kill switch pod, or 678 00:40:22,480 --> 00:40:24,960 Speaker 1: you can hit me at dexdigi, that's d e 679 00:40:25,160 --> 00:40:29,520 Speaker 1: x d i g i, again on Instagram, or also on Blue Sky. 680 00:40:30,080 --> 00:40:32,319 Speaker 1: And if you haven't done it yet, leave us a 681 00:40:32,360 --> 00:40:36,760 Speaker 1: review on your favorite podcast platform. People actually read those things, 682 00:40:37,000 --> 00:40:39,480 Speaker 1: and your review could be the thing that convinces someone 683 00:40:39,760 --> 00:40:43,560 Speaker 1: or someones to check us out. And this show is 684 00:40:43,640 --> 00:40:47,520 Speaker 1: hosted by me, Dexter Thomas. It's produced by Shena Ozaki, 685 00:40:47,960 --> 00:40:51,839 Speaker 1: Darluk Potts, and Kate Osbourne. Our theme song is by 686 00:40:51,880 --> 00:40:54,960 Speaker 1: me and Kyle Murdoch, and Kyle also mixed the show. 687 00:40:55,840 --> 00:41:00,480 Speaker 1: From Kaleidoscope, our executive producers are Ozma Lashin, Mangesha Gadur, 688 00:41:00,840 --> 00:41:05,360 Speaker 1: and Kate Osborne. From iHeart, our executive producers are Katrina 689 00:41:05,440 --> 00:41:22,080 Speaker 1: Norville and Nikki Etor. Catch you on the next one. Goodbye.