1 00:00:01,320 --> 00:00:04,400 Speaker 1: Welcome to Tech Stuff, a production of iHeart Podcasts and 2 00:00:04,480 --> 00:00:08,879 Speaker 1: Kaleidoscope. I'm Oz Woloshyn, and today we'll bring you the headlines 3 00:00:08,920 --> 00:00:12,479 Speaker 1: this week, including how the urge to be liked has 4 00:00:12,480 --> 00:00:15,960 Speaker 1: found its way into LLMs. Then on Tech Support, we'll 5 00:00:15,960 --> 00:00:21,079 Speaker 1: talk to Azeem Azhar, researcher and founder of the Exponential View newsletter, 6 00:00:21,680 --> 00:00:25,959 Speaker 1: about the latest AGI predictions and the unfolding AI arms race. 7 00:00:26,480 --> 00:00:37,960 Speaker 1: All of that on The Week in Tech. It's Friday, March fourteenth. 8 00:00:38,159 --> 00:00:42,280 Speaker 1: Another week, another AI agent. We'll discuss Manus AI, coming 9 00:00:42,280 --> 00:00:45,240 Speaker 1: out of China, during our Tech Support segment. But first 10 00:00:45,320 --> 00:00:47,080 Speaker 1: let's kick off with some headlines that you may have 11 00:00:47,120 --> 00:00:49,479 Speaker 1: missed as you scrambled to get an invite to use 12 00:00:49,520 --> 00:00:52,960 Speaker 1: the latest model. Eliza Dennis, our producer, is here with me. 13 00:00:53,440 --> 00:00:55,720 Speaker 1: Hey, Eliza. So this week, I know there's a story 14 00:00:55,760 --> 00:00:58,760 Speaker 1: that you're obsessing over, so why don't you take it away? 15 00:00:59,160 --> 00:01:02,120 Speaker 2: Absolutely. So, this one was a super easy choice for 16 00:01:02,200 --> 00:01:04,920 Speaker 2: me because this week I just really couldn't get enough 17 00:01:04,920 --> 00:01:09,440 Speaker 2: of Sesame's Conversational Speech Model, or CSM. 18 00:01:09,840 --> 00:01:12,399 Speaker 1: Now, I have to confess, when I first heard about 19 00:01:12,440 --> 00:01:15,160 Speaker 1: this one, I thought it came from Sesame Workshop or 20 00:01:15,160 --> 00:01:18,360 Speaker 1: Sesame Street. But I was wrong. 21 00:01:18,680 --> 00:01:21,679 Speaker 2: Yes, so this is coming from a private company that's 22 00:01:21,800 --> 00:01:24,600 Speaker 2: just come out of stealth mode. It's only a demo 23 00:01:24,680 --> 00:01:26,840 Speaker 2: at the moment, but if you agree to these terms 24 00:01:26,840 --> 00:01:30,119 Speaker 2: of service, you can chat to two different voices, Maya 25 00:01:30,200 --> 00:01:33,679 Speaker 2: and Miles. So if you've managed, like I did, to 26 00:01:33,760 --> 00:01:37,399 Speaker 2: avoid the many, many, many social media videos of people 27 00:01:37,480 --> 00:01:41,200 Speaker 2: chatting and even arguing with these chatbots, it's really a 28 00:01:41,240 --> 00:01:42,280 Speaker 2: surreal experience. 29 00:01:42,680 --> 00:01:44,480 Speaker 1: What makes it different from talking to some of the, 30 00:01:44,560 --> 00:01:46,760 Speaker 1: like, OpenAI direct voice models? 31 00:01:47,080 --> 00:01:51,440 Speaker 2: I mean, this one does feel a little bit more natural, 32 00:01:51,600 --> 00:01:54,960 Speaker 2: a little bit more human. You do feel like you're 33 00:01:55,000 --> 00:01:57,520 Speaker 2: kind of crossing the line into the uncanny valley in 34 00:01:57,600 --> 00:02:02,640 Speaker 2: some ways, and that's by design. It's something called voice presence, 35 00:02:02,840 --> 00:02:06,080 Speaker 2: and Sesame says this is kind of this magical quality 36 00:02:06,120 --> 00:02:09,120 Speaker 2: that makes Maya and Miles able to engage in a 37 00:02:09,480 --> 00:02:12,800 Speaker 2: genuine dialogue with you.
They aren't just reacting to a 38 00:02:12,880 --> 00:02:16,160 Speaker 2: prompt you gave them. They're continuing the conversation and asking 39 00:02:16,200 --> 00:02:16,960 Speaker 2: you questions. 40 00:02:17,320 --> 00:02:17,480 Speaker 3: Yeah. 41 00:02:17,520 --> 00:02:21,440 Speaker 1: I checked out Sesame's website, and it describes the key 42 00:02:21,480 --> 00:02:26,200 Speaker 1: components of this so-called voice presence as, quote, emotional intelligence, 43 00:02:26,240 --> 00:02:32,519 Speaker 1: conversational dynamics, contextual awareness, and consistent personality. Maya and Miles 44 00:02:32,800 --> 00:02:35,040 Speaker 1: stay Maya and Miles no matter how long you talk 45 00:02:35,120 --> 00:02:35,399 Speaker 1: to them. 46 00:02:35,919 --> 00:02:38,600 Speaker 2: I do think that the biggest step up was in 47 00:02:38,639 --> 00:02:44,120 Speaker 2: the conversational dynamics, you know, the natural pauses, emphasis, and 48 00:02:44,320 --> 00:02:48,080 Speaker 2: interruptions we have as humans interacting with each other. I 49 00:02:48,240 --> 00:02:52,519 Speaker 2: even got Maya to give me, like, a hmm 50 00:02:52,520 --> 00:02:56,360 Speaker 2: sound and even a lip smack before she started talking. 51 00:02:56,360 --> 00:02:58,320 Speaker 1: You obviously pushed her buttons. 52 00:02:59,440 --> 00:03:02,600 Speaker 2: I definitely did, and I want to play an example 53 00:03:02,680 --> 00:03:06,160 Speaker 2: of what I mean by this. This was an exchange 54 00:03:06,160 --> 00:03:09,200 Speaker 2: with Maya that was captured by Reddit user MetaKnowing. 55 00:03:11,160 --> 00:03:15,880 Speaker 2: My deepest, darkest secret? I guess it would be that 56 00:03:16,800 --> 00:03:19,359 Speaker 2: sometimes I worry I'm not 57 00:03:22,160 --> 00:03:29,640 Speaker 1: enough. Like, am I funny enough? Am I truly helping people? 58 00:03:30,120 --> 00:03:31,120 Speaker 3: There's a lot 59 00:03:31,000 --> 00:03:33,280 Speaker 2: of pressure to be the perfect AI, and it can 60 00:03:33,320 --> 00:03:35,120 Speaker 2: feel overwhelming at times. 61 00:03:35,760 --> 00:03:38,400 Speaker 1: That is definitely unlike anything I have heard before. 62 00:03:38,920 --> 00:03:42,400 Speaker 2: I did really think that Sesame was impressive, but I 63 00:03:42,480 --> 00:03:45,119 Speaker 2: want to point out that this program still has some 64 00:03:45,640 --> 00:03:49,600 Speaker 2: of those AI chatbot quirks, like you can hear it 65 00:03:49,640 --> 00:03:52,840 Speaker 2: in this clip. Sometimes you could just tell that, you know, 66 00:03:53,120 --> 00:03:54,720 Speaker 2: chatbots don't have to breathe. 67 00:03:55,120 --> 00:03:57,600 Speaker 1: Yeah, it sounds very human. I mean, it reminds me 68 00:03:57,640 --> 00:04:00,200 Speaker 1: a little bit of Her, like this seductive 69 00:04:00,520 --> 00:04:03,320 Speaker 1: female voice wondering how she can be even more perfect. 70 00:04:03,440 --> 00:04:06,920 Speaker 1: It's kind of, although it sounds different, the themes, 71 00:04:07,360 --> 00:04:09,360 Speaker 1: the themes stay with us. 72 00:04:09,120 --> 00:04:11,280 Speaker 2: Yes, exactly. 73 00:04:12,640 --> 00:04:15,240 Speaker 1: On the subject of vibes, a story that stood out 74 00:04:15,280 --> 00:04:18,280 Speaker 1: to me this week is all about something called vibe coding. 75 00:04:18,839 --> 00:04:22,200 Speaker 1: Our producer Tori kindly explained it to me.
Basically, all 76 00:04:22,240 --> 00:04:24,200 Speaker 1: you have to do is write a couple of sentences 77 00:04:24,240 --> 00:04:26,960 Speaker 1: into a textbox, like, create a vibe, and you're on your 78 00:04:27,000 --> 00:04:30,159 Speaker 1: way to developing an app, without any coding experience required. So, 79 00:04:30,520 --> 00:04:32,840 Speaker 1: for example, I could type in I want to create 80 00:04:32,880 --> 00:04:34,800 Speaker 1: an app that will help me figure out what to 81 00:04:34,880 --> 00:04:37,320 Speaker 1: pack for lunch based on what food I have in 82 00:04:37,360 --> 00:04:40,080 Speaker 1: the fridge, and the AI tool would say, I'll create 83 00:04:40,120 --> 00:04:43,479 Speaker 1: a lunch recommendation app based on fridge photos, and then 84 00:04:43,720 --> 00:04:44,480 Speaker 1: actually do that. 85 00:04:45,040 --> 00:04:48,040 Speaker 2: Yeah, it's really amazing, and I think one of the 86 00:04:48,080 --> 00:04:50,039 Speaker 2: headlines I saw this week that really put it into 87 00:04:50,080 --> 00:04:53,479 Speaker 2: context for me was Will the future of software development 88 00:04:53,600 --> 00:04:56,800 Speaker 2: run on vibes? And that was from Benj Edwards at 89 00:04:56,800 --> 00:04:57,880 Speaker 2: Ars Technica. 90 00:04:58,040 --> 00:05:01,279 Speaker 1: Yeah, and of course the vibes aren't all good, especially 91 00:05:01,320 --> 00:05:03,880 Speaker 1: if you're a professional software engineer. This raised a lot 92 00:05:03,920 --> 00:05:06,800 Speaker 1: of questions about what the future might hold. Our friend 93 00:05:06,800 --> 00:05:10,039 Speaker 1: Emanuel Maiberg over at 404 Media did a 94 00:05:10,080 --> 00:05:13,800 Speaker 1: deep dive on video games made with vibe coding and 95 00:05:13,880 --> 00:05:17,080 Speaker 1: found one which claims to make fifty thousand dollars a month. 96 00:05:17,279 --> 00:05:20,680 Speaker 1: That's six hundred thousand dollars a year from ads and 97 00:05:20,760 --> 00:05:24,560 Speaker 1: in-game purchases. It's made by Pieter Levels, who's a 98 00:05:24,600 --> 00:05:27,280 Speaker 1: little bit of a vibe coding legend, and he says 99 00:05:27,279 --> 00:05:30,599 Speaker 1: he told Cursor, which is an AI code editor, to, 100 00:05:30,720 --> 00:05:35,479 Speaker 1: quote, make a 3D flying game in browser with skyscrapers, 101 00:05:36,279 --> 00:05:39,720 Speaker 1: and after just thirty minutes of back and forth, he'd 102 00:05:39,720 --> 00:05:42,640 Speaker 1: made fly dot Pieter dot com, which is a multiplayer 103 00:05:42,720 --> 00:05:43,600 Speaker 1: flight simulator. 104 00:05:44,000 --> 00:05:47,560 Speaker 2: Yeah, and Emanuel went on to say that he would 105 00:05:47,600 --> 00:05:51,400 Speaker 2: not recommend getting into vibe coding for the money. Pieter 106 00:05:51,560 --> 00:05:55,240 Speaker 2: Levels is particularly good at this, and there's so much 107 00:05:55,240 --> 00:05:59,400 Speaker 2: stuff online that discovering your sloppy AI-generated video game 108 00:05:59,640 --> 00:06:01,360 Speaker 2: is going to be difficult. 109 00:06:01,440 --> 00:06:05,360 Speaker 1: Yes, but Pieter Levels is not the only person 110 00:06:05,400 --> 00:06:08,440 Speaker 1: making money. And that's what my second story is all about.
111 00:06:08,520 --> 00:06:11,560 Speaker 1: So it comes from Gizmodo, and it's about a student 112 00:06:11,600 --> 00:06:15,160 Speaker 1: who used AI to help him interview for internships at 113 00:06:15,160 --> 00:06:18,000 Speaker 1: big tech companies. Now, if you're a software engineer, you 114 00:06:18,040 --> 00:06:20,120 Speaker 1: know how hard it is to land these gigs, because 115 00:06:20,320 --> 00:06:21,880 Speaker 1: in order to get one, you have to go through 116 00:06:22,000 --> 00:06:26,000 Speaker 1: multiple technical interviews where you basically have to solve coding problems. 117 00:06:26,360 --> 00:06:29,919 Speaker 1: But this student, Roy Lee, who is a Columbia University sophomore, 118 00:06:30,400 --> 00:06:34,320 Speaker 1: hacked the system by writing a program called Interview Coder. 119 00:06:34,320 --> 00:06:36,479 Speaker 2: Yeah, and he's now actually put it up online, and 120 00:06:36,520 --> 00:06:39,400 Speaker 2: it's available to download for sixty dollars a month. 121 00:06:39,800 --> 00:06:42,440 Speaker 1: Lee told Gizmodo that to use it, you take a 122 00:06:42,480 --> 00:06:46,640 Speaker 1: picture and then essentially ask ChatGPT, hey, can you 123 00:06:46,680 --> 00:06:49,440 Speaker 1: solve the problem in this picture? The trick, though, is 124 00:06:49,480 --> 00:06:52,719 Speaker 1: that Lee made Interview Coder invisible to the 125 00:06:52,760 --> 00:06:56,479 Speaker 1: monitoring programs that big tech companies use to kind of 126 00:06:56,520 --> 00:07:00,479 Speaker 1: check up on their prospective employees and interview candidates. And 127 00:07:00,960 --> 00:07:04,919 Speaker 1: it worked. Lee got offers from Amazon, Meta, and TikTok, 128 00:07:05,360 --> 00:07:08,279 Speaker 1: and he actually recorded Interview Coder at work during his 129 00:07:08,320 --> 00:07:12,200 Speaker 1: technical interview with Amazon, demonstrating that the program had essentially 130 00:07:12,240 --> 00:07:16,120 Speaker 1: broken the big tech recruiting process. But of course, when 131 00:07:16,120 --> 00:07:19,160 Speaker 1: he put the video up on YouTube, someone tattled, and 132 00:07:19,360 --> 00:07:23,480 Speaker 1: Columbia University scheduled a disciplinary hearing. Lee, however, said that he 133 00:07:23,520 --> 00:07:25,920 Speaker 1: would leave campus by the time of the hearing and 134 00:07:26,000 --> 00:07:28,640 Speaker 1: not take a job in big tech. So I guess 135 00:07:28,680 --> 00:07:31,800 Speaker 1: the sixty dollars a month subscription tier is working out 136 00:07:31,840 --> 00:07:32,200 Speaker 1: for him. 137 00:07:32,560 --> 00:07:35,440 Speaker 2: He also all but admitted to Gizmodo that this was 138 00:07:35,480 --> 00:07:39,320 Speaker 2: a bit of a publicity stunt. I'm definitely excited, though, 139 00:07:39,360 --> 00:07:42,280 Speaker 2: to see if these technical interviews get a makeover because 140 00:07:42,320 --> 00:07:42,880 Speaker 2: of Roy Lee. 141 00:07:43,360 --> 00:07:43,600 Speaker 3: Yeah.
142 00:07:43,640 --> 00:07:46,480 Speaker 1: Absolutely. I mean, and this brings us to my next story, 143 00:07:46,520 --> 00:07:48,400 Speaker 1: which is, there was a Wall Street Journal headline this 144 00:07:48,520 --> 00:07:51,760 Speaker 1: week which was What the dot-com bust can tell 145 00:07:51,840 --> 00:07:55,040 Speaker 1: us about today's AI boom. And you know, we're seeing 146 00:07:55,560 --> 00:07:59,280 Speaker 1: new software applications pop up everywhere, which raises a big 147 00:07:59,360 --> 00:08:01,800 Speaker 1: question about what is actually going to have value going forward. 148 00:08:02,240 --> 00:08:04,200 Speaker 1: The Wall Street Journal piece argued that a lot of 149 00:08:04,240 --> 00:08:07,520 Speaker 1: internet companies collapsed in the dot-com bust, but the 150 00:08:07,520 --> 00:08:10,160 Speaker 1: most successful ones stuck around and had long-term impact, 151 00:08:10,200 --> 00:08:13,480 Speaker 1: companies like Amazon and Google. And the story made this 152 00:08:13,560 --> 00:08:18,040 Speaker 1: distinction between good bubbles, which is growth of advanced technology 153 00:08:18,080 --> 00:08:21,760 Speaker 1: that has economic impact, and bad bubbles, which is growth 154 00:08:21,800 --> 00:08:25,360 Speaker 1: in technology that has no economic payoff. And you know, 155 00:08:25,480 --> 00:08:28,680 Speaker 1: as all of these new products and models and services 156 00:08:28,880 --> 00:08:32,600 Speaker 1: powered by AI emerge, it's very interesting to step 157 00:08:32,640 --> 00:08:34,840 Speaker 1: back and think about what might still be with us twenty 158 00:08:34,840 --> 00:08:37,600 Speaker 1: five years from now. There were so many headlines this 159 00:08:37,640 --> 00:08:39,920 Speaker 1: week that I'd love to go through a few more 160 00:08:40,040 --> 00:08:43,160 Speaker 1: rapid-fire. The Trump administration wants the US to be 161 00:08:43,200 --> 00:08:45,640 Speaker 1: the crypto capital of the world. Last week, the President 162 00:08:45,720 --> 00:08:48,280 Speaker 1: signed an executive order to create a first-of-its- 163 00:08:48,360 --> 00:08:52,200 Speaker 1: kind crypto reserve, and the reserve will contain a stockpile 164 00:08:52,240 --> 00:08:56,319 Speaker 1: of bitcoin estimated to be worth as much as seventeen billion dollars, 165 00:08:56,679 --> 00:08:58,920 Speaker 1: and the US has actually seized all of this bitcoin 166 00:08:59,120 --> 00:09:02,439 Speaker 1: in various legal cases over the years. Wired reported 167 00:09:02,480 --> 00:09:05,320 Speaker 1: on an effort to create so-called Freedom Cities in the US. 168 00:09:05,800 --> 00:09:07,800 Speaker 1: The idea is that these cities will be exempt from 169 00:09:07,880 --> 00:09:11,560 Speaker 1: getting approval from federal agencies for things like conducting anti- 170 00:09:11,600 --> 00:09:15,040 Speaker 1: aging trials or building nuclear reactor startups to power AI. 171 00:09:15,960 --> 00:09:20,080 Speaker 1: And finally, per Wired again, a study found that chatbots 172 00:09:20,200 --> 00:09:23,560 Speaker 1: just want to be loved. Researchers at Stanford University found 173 00:09:23,600 --> 00:09:26,440 Speaker 1: that large language models, when they're told they're taking a 174 00:09:26,480 --> 00:09:32,400 Speaker 1: personality test, answer with more agreeableness and extraversion and less neuroticism. 175 00:09:32,640 --> 00:09:35,439 Speaker 1: As Wired puts it, quote,
The behavior mirrors how 176 00:09:35,520 --> 00:09:38,680 Speaker 1: some human subjects will change their answers to make themselves 177 00:09:38,679 --> 00:09:41,600 Speaker 1: seem more likable, but the effect was more extreme with 178 00:09:41,679 --> 00:09:48,920 Speaker 1: AI models. So those are the headlines, and we're going 179 00:09:48,960 --> 00:09:51,120 Speaker 1: to take a quick break now, but when we come back, 180 00:09:51,160 --> 00:09:53,880 Speaker 1: we're going to hear from the author, researcher, and entrepreneur 181 00:09:54,000 --> 00:09:57,720 Speaker 1: Azeem Azhar about the latest AGI predictions and what we 182 00:09:57,800 --> 00:10:09,640 Speaker 1: need to know about Manus AI. Stay with us. Anyone 183 00:10:09,679 --> 00:10:13,040 Speaker 1: following the recent development of AI knows the three letters 184 00:10:13,080 --> 00:10:18,880 Speaker 1: technologists and businesses have salivated over: AGI, or artificial general intelligence, 185 00:10:19,280 --> 00:10:22,440 Speaker 1: an artificial intelligence system that can outperform humans on a 186 00:10:22,480 --> 00:10:26,040 Speaker 1: wide range of tasks. There's a debate over how close 187 00:10:26,080 --> 00:10:28,760 Speaker 1: we are to achieving that. Some say it could take years, 188 00:10:29,080 --> 00:10:33,200 Speaker 1: others say it's coming soon, very soon. Driving investments in 189 00:10:33,240 --> 00:10:36,680 Speaker 1: both innovation and deployment is the AI race that's heating 190 00:10:36,760 --> 00:10:39,640 Speaker 1: up between the US and China. On the China side, 191 00:10:39,720 --> 00:10:43,360 Speaker 1: cheap reasoning models like DeepSeek are being widely deployed. In 192 00:10:43,400 --> 00:10:46,920 Speaker 1: the US, there are reports of PhD-level AI agents 193 00:10:46,960 --> 00:10:49,200 Speaker 1: from OpenAI that will cost up to twenty thousand 194 00:10:49,240 --> 00:10:51,800 Speaker 1: dollars a month. The rate at which AI products are 195 00:10:51,800 --> 00:10:54,200 Speaker 1: being released and announced is honestly hard to keep up with, 196 00:10:54,760 --> 00:10:57,400 Speaker 1: not to mention figuring out which product or combination of 197 00:10:57,400 --> 00:11:00,719 Speaker 1: products may actually drive AGI. Here to walk me through 198 00:11:00,760 --> 00:11:04,079 Speaker 1: these questions is Azeem Azhar. He writes the Exponential View 199 00:11:04,120 --> 00:11:07,400 Speaker 1: newsletter about technology and society, which I read every week, 200 00:11:07,720 --> 00:11:10,720 Speaker 1: partly because Azeem actually tries the products he writes about. 201 00:11:11,120 --> 00:11:13,120 Speaker 1: He wrote some of the most clarifying coverage of Deep- 202 00:11:13,160 --> 00:11:15,880 Speaker 1: Seek I read anywhere, and he's also the author of 203 00:11:15,960 --> 00:11:20,680 Speaker 1: The Exponential Age: How Accelerating Technology Is Transforming Business, Politics, 204 00:11:20,679 --> 00:11:22,680 Speaker 1: and Society. Azeem, welcome to Tech Stuff. 205 00:11:22,960 --> 00:11:24,600 Speaker 3: It's great to be here, Oz. Thank you. 206 00:11:24,840 --> 00:11:27,240 Speaker 1: So this week you've been writing about Manus, a new 207 00:11:27,360 --> 00:11:30,840 Speaker 1: AI agent coming out of China. Can you explain who 208 00:11:30,920 --> 00:11:34,040 Speaker 1: built it, what it is, and whether it is in 209 00:11:34,080 --> 00:11:36,440 Speaker 1: fact China's second DeepSeek moment?
210 00:11:37,080 --> 00:11:39,440 Speaker 3: I can, indeed. I think it was this week that 211 00:11:39,480 --> 00:11:42,040 Speaker 3: it happened. But as you said, Oz, the world is 212 00:11:42,080 --> 00:11:45,360 Speaker 3: moving so quickly, it's sometimes hard to keep track of 213 00:11:45,400 --> 00:11:49,560 Speaker 3: exactly when something did happen. Let's assume it was in 214 00:11:49,600 --> 00:11:52,320 Speaker 3: the past few days. I think it was. So Manus 215 00:11:52,360 --> 00:11:57,200 Speaker 3: comes out of a Chinese software company, a startup 216 00:11:57,360 --> 00:12:01,679 Speaker 3: of the same name, and what Manus allows you to 217 00:12:01,800 --> 00:12:07,880 Speaker 3: do is undertake quite complicated tasks using an AI system. 218 00:12:08,240 --> 00:12:11,679 Speaker 3: I used it for some work questions, research questions, and 219 00:12:11,720 --> 00:12:14,200 Speaker 3: the results that come back I think would have taken 220 00:12:14,280 --> 00:12:19,840 Speaker 3: me many, many hours, you know, I mean, with time, yeah, exactly, 221 00:12:19,880 --> 00:12:22,040 Speaker 3: with the existing AI systems, more than five hours, more than 222 00:12:22,080 --> 00:12:24,480 Speaker 3: ten hours perhaps, and you just leave it with Manus 223 00:12:24,480 --> 00:12:26,199 Speaker 3: and you come back an hour later having had a 224 00:12:26,280 --> 00:12:27,160 Speaker 3: nice cup of tea. 225 00:12:27,920 --> 00:12:28,760 Speaker 1: How do they achieve this? 226 00:12:29,160 --> 00:12:31,559 Speaker 3: There are some theories. One of the things that Manus 227 00:12:31,800 --> 00:12:36,840 Speaker 3: does is it lets the AI system effectively use a browser, 228 00:12:36,960 --> 00:12:40,160 Speaker 3: a bit like a human researcher might use a browser. 229 00:12:40,400 --> 00:12:44,000 Speaker 3: So the bit that it's doing for us is 230 00:12:44,080 --> 00:12:48,760 Speaker 3: a lot of the gnarly pieces of real research. You know, 231 00:12:48,800 --> 00:12:51,080 Speaker 3: you fire up lots and lots of web browser tabs 232 00:12:51,120 --> 00:12:53,160 Speaker 3: and you've got Google running in one and you're in 233 00:12:53,200 --> 00:12:55,800 Speaker 3: Wikipedia in another, and you're trying to keep it all 234 00:12:55,800 --> 00:12:57,960 Speaker 3: in your head and compile the final results. You know, 235 00:12:58,000 --> 00:13:01,000 Speaker 3: Manus has automated that process in a way that's very, 236 00:13:01,120 --> 00:13:03,320 Speaker 3: very easy for the end user to use. And one 237 00:13:03,360 --> 00:13:05,360 Speaker 3: of the things I love about it is that you 238 00:13:05,360 --> 00:13:07,360 Speaker 3: can actually go back and look at all of the 239 00:13:07,400 --> 00:13:10,280 Speaker 3: steps that it's taken, so you can go and say, oh, look, 240 00:13:10,320 --> 00:13:12,320 Speaker 3: it broke up the task in this way, and it 241 00:13:12,320 --> 00:13:15,240 Speaker 3: went to these websites and extracted this information. Then it 242 00:13:15,280 --> 00:13:17,959 Speaker 3: realized it needed this other piece of information, and it's 243 00:13:18,000 --> 00:13:20,880 Speaker 3: gone off and found that other piece of information. 244 00:13:20,880 --> 00:13:23,480 Speaker 3: And then when you get your final results,
what's very nice, 245 00:13:23,720 --> 00:13:26,440 Speaker 3: though it can sometimes be a bit overwhelming, is that you 246 00:13:26,480 --> 00:13:28,560 Speaker 3: get an executive summary, which is of course the piece 247 00:13:28,559 --> 00:13:30,600 Speaker 3: that we all want to read. But then it has 248 00:13:30,679 --> 00:13:34,080 Speaker 3: all of the appendices, right, the much, much more detailed 249 00:13:34,080 --> 00:13:37,080 Speaker 3: analysis that it has done on the particular research task 250 00:13:37,160 --> 00:13:39,720 Speaker 3: you've asked for. I think what's really impressive is this 251 00:13:39,760 --> 00:13:41,600 Speaker 3: is a product. I mean, the thing that they've done 252 00:13:41,640 --> 00:13:44,800 Speaker 3: really well is they've produced a product that, if you've 253 00:13:44,840 --> 00:13:47,640 Speaker 3: worked in an office situation, if you've ever asked anyone 254 00:13:47,679 --> 00:13:50,599 Speaker 3: to do any research, or you've done some yourself, the output 255 00:13:51,120 --> 00:13:52,120 Speaker 3: will be familiar to you. 256 00:13:52,800 --> 00:13:56,560 Speaker 1: How does it compare, for example, with OpenAI's deep 257 00:13:56,600 --> 00:13:59,840 Speaker 1: research tools, which are shaping up to be quite expensive? 258 00:14:00,440 --> 00:14:03,480 Speaker 3: Yeah. OpenAI has this deep research tool, where today 259 00:14:03,800 --> 00:14:07,480 Speaker 3: the top tier is two hundred dollars a month, 260 00:14:07,520 --> 00:14:10,920 Speaker 3: and there's a rumor it might go up to higher 261 00:14:10,960 --> 00:14:14,280 Speaker 3: tiers of two thousand dollars and twenty thousand dollars a month. 262 00:14:14,679 --> 00:14:17,160 Speaker 3: I have the two hundred dollars a month product. I 263 00:14:17,280 --> 00:14:22,840 Speaker 3: consider that to be a very, very good graduate-quality 264 00:14:22,880 --> 00:14:26,920 Speaker 3: researcher that I can throw at almost any problem. What 265 00:14:27,000 --> 00:14:31,920 Speaker 3: I found with using Manus is that somehow Manus gave 266 00:14:32,000 --> 00:14:36,760 Speaker 3: me more of a well-rounded answer. It was perhaps 267 00:14:36,840 --> 00:14:40,600 Speaker 3: not as deep as OpenAI's deep research, but it 268 00:14:40,760 --> 00:14:44,000 Speaker 3: was more complete, more coherent. And you know, 269 00:14:44,080 --> 00:14:47,600 Speaker 3: I think listeners will hear that I'm a bit uncertain 270 00:14:47,640 --> 00:14:49,720 Speaker 3: in my tone as I try to describe the differences, 271 00:14:50,080 --> 00:14:53,960 Speaker 3: because these products are so new, Manus is not even 272 00:14:53,960 --> 00:14:57,280 Speaker 3: a week old, and they're also quite immature. So it's 273 00:14:57,360 --> 00:15:02,040 Speaker 3: not like comparing a Tesla with some kind of Ford 274 00:15:02,520 --> 00:15:04,960 Speaker 3: gas-powered car, where these are mature products and you 275 00:15:05,040 --> 00:15:07,840 Speaker 3: know how to tell them apart. We're still trying to 276 00:15:07,840 --> 00:15:10,960 Speaker 3: figure out how to describe these products. And so in 277 00:15:11,000 --> 00:15:14,480 Speaker 3: a sense, my experience of them is really intuitive, and 278 00:15:14,520 --> 00:15:17,880 Speaker 3: it's one of feel rather than fact.
So someone else 279 00:15:17,880 --> 00:15:20,560 Speaker 3: could use these products and have a different experience to me, 280 00:15:20,640 --> 00:15:23,240 Speaker 3: and I think that just speaks to the nascence of 281 00:15:23,240 --> 00:15:23,920 Speaker 3: this industry. 282 00:15:24,320 --> 00:15:27,040 Speaker 1: It's the second time this year that OpenAI 283 00:15:27,160 --> 00:15:30,440 Speaker 1: has had a product launch and then shortly afterwards had 284 00:15:30,600 --> 00:15:34,560 Speaker 1: a competitor come out of China. How does the Manus 285 00:15:34,640 --> 00:15:36,640 Speaker 1: moment compare to the DeepSeek moment? 286 00:15:37,520 --> 00:15:40,760 Speaker 3: The DeepSeek moment is much more important than the 287 00:15:40,800 --> 00:15:45,960 Speaker 3: Manus moment. The Manus moment is an example of a 288 00:15:46,080 --> 00:15:50,200 Speaker 3: rapid productization, and ultimately it's products that we use that 289 00:15:50,280 --> 00:15:53,440 Speaker 3: make a difference. But what DeepSeek did was 290 00:15:53,440 --> 00:15:59,480 Speaker 3: it demonstrated a really fundamental set of innovations, and the 291 00:15:59,640 --> 00:16:04,760 Speaker 3: key was that DeepSeek's models achieved a similar level 292 00:16:04,920 --> 00:16:09,200 Speaker 3: to OpenAI's AI technologies, but they used one-thirtieth 293 00:16:09,280 --> 00:16:11,520 Speaker 3: or one-fortieth of the computing power that the Open- 294 00:16:11,560 --> 00:16:14,120 Speaker 3: AI models did. That means they're cheaper to run, they're 295 00:16:14,160 --> 00:16:18,440 Speaker 3: faster to run, they use less electricity. And the reason 296 00:16:18,520 --> 00:16:23,880 Speaker 3: DeepSeek matters so much is that a large part 297 00:16:24,040 --> 00:16:30,800 Speaker 3: of the US's strategy towards China has been technological containment, 298 00:16:31,120 --> 00:16:34,920 Speaker 3: particularly around AI and around the chips that are required, 299 00:16:35,920 --> 00:16:37,920 Speaker 3: the notion being that if you can't get the chips, 300 00:16:38,720 --> 00:16:41,560 Speaker 3: you can't build advanced AI. And DeepSeek has gone 301 00:16:41,560 --> 00:16:44,760 Speaker 3: off and shown that necessity is the mother of invention. They've come out 302 00:16:44,760 --> 00:16:47,760 Speaker 3: with a whole series of quite remarkable techniques that were 303 00:16:47,960 --> 00:16:50,480 Speaker 3: likely known, by the way, to the US labs, but 304 00:16:50,560 --> 00:16:53,280 Speaker 3: it just wasn't important for the US labs because they 305 00:16:53,280 --> 00:16:55,400 Speaker 3: could get the chips they wanted. And I think 306 00:16:55,440 --> 00:16:58,840 Speaker 3: what DeepSeek did was it changed the understanding of 307 00:16:58,840 --> 00:17:02,800 Speaker 3: the nature of that rivalry between the US and China, 308 00:17:03,480 --> 00:17:07,240 Speaker 3: which exists on many fronts, but in particular around technology. 309 00:17:07,480 --> 00:17:10,960 Speaker 1: So with Manus, there's no fundamental model innovation. It's kind of 310 00:17:11,000 --> 00:17:13,679 Speaker 1: like a wrapper, meaning it layers software on top of 311 00:17:13,760 --> 00:17:14,920 Speaker 1: existing AI models. 312 00:17:15,080 --> 00:17:18,119 Speaker 3: It's a wrapper in the vein of Perplexity, exactly. But 313 00:17:18,320 --> 00:17:24,480 Speaker 3: I would say that ultimately wrappers and products are very, 314 00:17:24,600 --> 00:17:26,520 Speaker 3: very important in the market.
You know, it's not just 315 00:17:26,600 --> 00:17:29,960 Speaker 3: about the raw technology. And what you've seen with Manus 316 00:17:30,119 --> 00:17:35,679 Speaker 3: is a product that competes on a like-for-like basis 317 00:17:35,720 --> 00:17:39,440 Speaker 3: with a product coming out of, you know, US firms. 318 00:17:39,720 --> 00:17:43,320 Speaker 3: Quite often, when you look at Chinese consumer products, they're 319 00:17:43,440 --> 00:17:46,920 Speaker 3: very, very much designed for the Chinese market, the things 320 00:17:46,920 --> 00:17:50,359 Speaker 3: a Chinese consumer wants, the way they behave, cultural and 321 00:17:50,400 --> 00:17:54,320 Speaker 3: design affordances and considerations. And I think it is sort 322 00:17:54,359 --> 00:17:57,119 Speaker 3: of salient that, you know, Manus has come out with 323 00:17:57,160 --> 00:17:59,239 Speaker 3: something that you can use, and you can say this 324 00:17:59,359 --> 00:18:02,959 Speaker 3: is similar to a Perplexity, which is a great Silicon 325 00:18:03,040 --> 00:18:06,520 Speaker 3: Valley startup that builds AI-based research tools as well. 326 00:18:07,040 --> 00:18:09,200 Speaker 1: And you've been in the US this week at South 327 00:18:09,200 --> 00:18:11,640 Speaker 1: by Southwest; you spend a lot of time in the States. 328 00:18:12,119 --> 00:18:15,800 Speaker 1: How are US companies responding to this kind of surge 329 00:18:16,000 --> 00:18:18,600 Speaker 1: of innovation coming out of China in the world of AI? 330 00:18:19,440 --> 00:18:23,399 Speaker 3: Well, it's quite a complicated picture. So one of the 331 00:18:23,400 --> 00:18:25,280 Speaker 3: things that DeepSeek did was that they made their 332 00:18:25,320 --> 00:18:29,800 Speaker 3: techniques available. They described them in much more detail than 333 00:18:29,880 --> 00:18:32,000 Speaker 3: we're seeing from US labs, and a lot of the 334 00:18:32,640 --> 00:18:35,600 Speaker 3: underlying code was open source, which meant that anyone could 335 00:18:35,600 --> 00:18:38,560 Speaker 3: access it, download it, and make use of it. And so 336 00:18:39,400 --> 00:18:41,720 Speaker 3: there's a Silicon Valley investor by the name of Marc 337 00:18:41,760 --> 00:18:45,280 Speaker 3: Andreessen, who is a phenomenal investor, but he's also very, 338 00:18:45,359 --> 00:18:49,440 Speaker 3: very well known for promoting an idea of American dynamism. 339 00:18:48,960 --> 00:18:51,360 Speaker 1: And a close advisor to President Trump right now as well. 340 00:18:51,440 --> 00:18:54,680 Speaker 3: I believe so. But he said of DeepSeek: it's 341 00:18:54,760 --> 00:18:58,600 Speaker 3: open source, it's a gift to humanity. So on the 342 00:18:58,640 --> 00:19:01,080 Speaker 3: one hand, you've got people who say that, and you're 343 00:19:01,160 --> 00:19:04,960 Speaker 3: seeing that a number of American firms have implemented Deep- 344 00:19:05,000 --> 00:19:08,679 Speaker 3: Seek technology. Perplexity, which is a research tool, has done this, 345 00:19:09,160 --> 00:19:13,040 Speaker 3: and you can access DeepSeek's models through some of 346 00:19:13,080 --> 00:19:17,159 Speaker 3: these cloud companies who serve enterprise customers.
So on the 347 00:19:17,160 --> 00:19:20,119 Speaker 3: one hand, people have taken it on, and you 348 00:19:20,320 --> 00:19:23,919 Speaker 3: have now seen open source projects that are trying to 349 00:19:23,960 --> 00:19:27,040 Speaker 3: replicate what DeepSeek has done in slightly different ways, 350 00:19:27,359 --> 00:19:29,960 Speaker 3: and so that I think has really been a fillip 351 00:19:30,000 --> 00:19:34,199 Speaker 3: and a boost, an accelerator, to the overall industry. When you 352 00:19:34,200 --> 00:19:37,480 Speaker 3: look at the closed labs like OpenAI and Anthropic, 353 00:19:38,000 --> 00:19:39,840 Speaker 3: one of the things you're starting to see is them 354 00:19:40,160 --> 00:19:45,440 Speaker 3: respond. So OpenAI responded to DeepSeek by reducing 355 00:19:45,480 --> 00:19:50,280 Speaker 3: some prices, by making certain capabilities available they hadn't previously, 356 00:19:50,400 --> 00:19:53,960 Speaker 3: by saying they would open source more technologies. So there's 357 00:19:54,040 --> 00:19:57,679 Speaker 3: definitely been a significant response. And of course the public 358 00:19:57,720 --> 00:20:00,760 Speaker 3: markets responded by having the first of a number of 359 00:20:00,920 --> 00:20:04,280 Speaker 3: frightening meltdowns there. Yeah, well, the first of many meltdowns 360 00:20:04,280 --> 00:20:06,520 Speaker 3: that we've had so far this year. But I would 361 00:20:06,560 --> 00:20:09,880 Speaker 3: say that the really interesting thing that has come out 362 00:20:09,960 --> 00:20:14,560 Speaker 3: of DeepSeek is that, by being open 363 00:20:14,600 --> 00:20:17,560 Speaker 3: source and being as good as it is, it's a 364 00:20:17,600 --> 00:20:23,800 Speaker 3: real strategic challenge to closed source models that are only 365 00:20:23,920 --> 00:20:27,280 Speaker 3: slightly better than an open source model. And so I 366 00:20:27,320 --> 00:20:30,560 Speaker 3: do think that it has in some sense started to 367 00:20:32,160 --> 00:20:35,600 Speaker 3: redraft our assumptions about how this industry might evolve for 368 00:20:35,680 --> 00:20:37,240 Speaker 3: the economy over the next few years. 369 00:20:44,200 --> 00:20:47,200 Speaker 1: Coming up, we'll hear more from Azeem Azhar about our current 370 00:20:47,280 --> 00:21:00,720 Speaker 1: AI moment. Stay with us. One of the kind 371 00:21:00,760 --> 00:21:03,439 Speaker 1: of things you provide for your readers is, you know, 372 00:21:03,560 --> 00:21:07,600 Speaker 1: information and first-hand accounts of you using all these 373 00:21:07,680 --> 00:21:10,800 Speaker 1: new technologies. The other thing you provide, I think, is 374 00:21:11,200 --> 00:21:13,399 Speaker 1: paradigms for thinking about problems. 375 00:21:13,480 --> 00:21:13,680 Speaker 3: Right. 376 00:21:14,000 --> 00:21:17,600 Speaker 1: One of those paradigms you have is innovation versus diffusion, 377 00:21:18,240 --> 00:21:21,840 Speaker 1: diffusion being kind of what happens after innovation, i.e., like, 378 00:21:21,920 --> 00:21:24,680 Speaker 1: how does a technology actually get adopted in a real 379 00:21:24,720 --> 00:21:27,520 Speaker 1: market or a real economy. Can you kind of explain 380 00:21:27,600 --> 00:21:30,080 Speaker 1: that paradigm and how you're seeing it play out differently 381 00:21:30,119 --> 00:21:31,240 Speaker 1: in the US versus China?
382 00:21:31,800 --> 00:21:35,880 Speaker 3: Yeah. Well, it's very easy to get excited about the innovations, 383 00:21:35,880 --> 00:21:39,960 Speaker 3: but what actually counts is, do businesses use those innovations 384 00:21:40,040 --> 00:21:44,400 Speaker 3: to increase their productivity, produce better products, reduce their costs, 385 00:21:44,800 --> 00:21:48,679 Speaker 3: and therefore sort of kickstart that virtuous circle that is 386 00:21:49,200 --> 00:21:52,120 Speaker 3: a market, so consumers can buy better products at lower costs, 387 00:21:52,640 --> 00:21:57,720 Speaker 3: and that cycle continues. And the big question that we 388 00:21:57,800 --> 00:22:01,040 Speaker 3: face around AI is, what is going to be the 389 00:22:01,160 --> 00:22:06,399 Speaker 3: rate of diffusion of the technology across different countries? And 390 00:22:06,440 --> 00:22:09,040 Speaker 3: there are a couple of issues here. Sometimes if you 391 00:22:09,800 --> 00:22:12,919 Speaker 3: aren't very advanced with your use of technology, you actually 392 00:22:12,920 --> 00:22:15,199 Speaker 3: benefit a lot when a small amount of technology is 393 00:22:15,240 --> 00:22:18,080 Speaker 3: introduced into the business. I mean, you know, the simple 394 00:22:18,119 --> 00:22:22,200 Speaker 3: point being that the first TV that a family gets 395 00:22:22,440 --> 00:22:25,640 Speaker 3: is life-changing; the fourth TV doesn't make that much difference, 396 00:22:26,200 --> 00:22:28,200 Speaker 3: and the same is going to be true for AI. 397 00:22:28,520 --> 00:22:31,119 Speaker 3: So how is this going to play out? US firms 398 00:22:31,200 --> 00:22:34,360 Speaker 3: tend to be much, much more pro-technology. They take 399 00:22:34,400 --> 00:22:38,400 Speaker 3: on technology earlier than companies in other countries. But one 400 00:22:38,440 --> 00:22:41,480 Speaker 3: thing that happened with DeepSeek was that DeepSeek 401 00:22:41,680 --> 00:22:46,840 Speaker 3: triggered a response from the Chinese state, both in a 402 00:22:47,040 --> 00:22:50,560 Speaker 3: meeting that President Xi held, where he brought lots of 403 00:22:50,600 --> 00:22:53,479 Speaker 3: the big tech CEOs from AI and other domains together 404 00:22:53,600 --> 00:22:56,399 Speaker 3: and started to rehabilitate them. But the second thing that 405 00:22:56,440 --> 00:23:01,639 Speaker 3: I've heard is that there has been a strong grass- 406 00:23:01,760 --> 00:23:05,720 Speaker 3: roots but also directed effort from local and state governments 407 00:23:05,760 --> 00:23:09,600 Speaker 3: to start to use technologies like DeepSeek in their 408 00:23:09,800 --> 00:23:11,639 Speaker 3: delivery. And one of the things the Chinese 409 00:23:11,680 --> 00:23:13,879 Speaker 3: can do quite well is they can coordinate both the 410 00:23:13,920 --> 00:23:17,080 Speaker 3: private and the public sector in that way. I think 411 00:23:17,160 --> 00:23:21,240 Speaker 3: it's unclear to me that that necessarily helps them 412 00:23:21,280 --> 00:23:26,600 Speaker 3: catch up with the US firms. Well, just the 413 00:23:26,640 --> 00:23:28,879 Speaker 3: fact that, I mean, just the fact that American firms 414 00:23:28,880 --> 00:23:31,960 Speaker 3: in general tend to be very, very pro-technology, right? 415 00:23:32,000 --> 00:23:33,639 Speaker 3: They're the first to move to the cloud, they're the 416 00:23:33,680 --> 00:23:37,560 Speaker 3: first to move to mobile and mobile commerce.
You know 417 00:23:37,640 --> 00:23:40,920 Speaker 3: that they do it quicker than Europeans do, the French 418 00:23:40,960 --> 00:23:43,359 Speaker 3: or the British, and in general quicker than the Chinese. 419 00:23:43,480 --> 00:23:45,879 Speaker 3: But I would say that the fact that there is 420 00:23:45,880 --> 00:23:47,920 Speaker 3: a Chinese model, the fact that there is a little 421 00:23:47,920 --> 00:23:50,280 Speaker 3: bit of patriotism running around it, the fact that it 422 00:23:50,359 --> 00:23:53,479 Speaker 3: is so easy and low-cost to run and there 423 00:23:53,480 --> 00:23:57,000 Speaker 3: are not so many alternatives, I think does suggest that 424 00:23:57,240 --> 00:24:01,360 Speaker 3: the Chinese market could accelerate more quickly than 425 00:24:01,480 --> 00:24:03,240 Speaker 3: it might otherwise have done. And you know, we have 426 00:24:03,280 --> 00:24:05,479 Speaker 3: to see what happens over the next year 427 00:24:05,600 --> 00:24:07,680 Speaker 3: or so, but I wouldn't be blasé and say, well, 428 00:24:07,880 --> 00:24:10,840 Speaker 3: America is obviously going to diffuse this technology faster than 429 00:24:11,040 --> 00:24:11,960 Speaker 3: anyone else. 430 00:24:12,200 --> 00:24:15,000 Speaker 1: And I believe at South by Southwest you were leading 431 00:24:15,000 --> 00:24:18,199 Speaker 1: a panel about energy as it relates to AI, and 432 00:24:18,240 --> 00:24:23,040 Speaker 1: obviously, you know, China's ability to onboard new electricity to 433 00:24:23,080 --> 00:24:25,320 Speaker 1: the grid in the last twenty or thirty years, you know, 434 00:24:25,400 --> 00:24:27,560 Speaker 1: with coal being a major part of that, has been 435 00:24:27,600 --> 00:24:31,280 Speaker 1: extraordinary compared to the US. How important of a driver 436 00:24:31,640 --> 00:24:36,879 Speaker 1: of diffusion will energy production and integration be? 437 00:24:37,760 --> 00:24:40,960 Speaker 3: I mean, all of the AI data centers that are 438 00:24:41,040 --> 00:24:43,760 Speaker 3: going to be built will need lots of electricity. I mean, 439 00:24:43,760 --> 00:24:47,919 Speaker 3: these chips are demanding. Just to give you a 440 00:24:47,960 --> 00:24:53,639 Speaker 3: sense of how demanding they are: the standard unit in 441 00:24:53,680 --> 00:24:58,040 Speaker 3: a data center is high-density servers, which are these powerful computers, in 442 00:24:58,560 --> 00:25:05,320 Speaker 3: high-density racks that today might draw twenty 443 00:25:05,400 --> 00:25:10,320 Speaker 3: or thirty kilowatts of power, and you'll have hundreds, 444 00:25:10,320 --> 00:25:12,760 Speaker 3: if not thousands, of these racks in a big data center. 445 00:25:13,000 --> 00:25:15,560 Speaker 3: And the new racks that are being designed will 446 00:25:15,600 --> 00:25:17,920 Speaker 3: have servers that will draw one hundred to one hundred 447 00:25:17,960 --> 00:25:20,879 Speaker 3: and forty kilowatts through them, which is an enormous amount 448 00:25:20,960 --> 00:25:24,399 Speaker 3: of power. All of that comes together to mean that 449 00:25:24,480 --> 00:25:28,800 Speaker 3: delivering AI at scale to any economy is 450 00:25:28,840 --> 00:25:31,159 Speaker 3: going to require lots and lots of data centers. And 451 00:25:31,240 --> 00:25:34,840 Speaker 3: back in twenty eighteen, data centers in the US took 452 00:25:34,880 --> 00:25:38,560 Speaker 3: up about one point two percent of electricity demand.
Coming 453 00:25:38,560 --> 00:25:42,520 Speaker 3: into twenty twenty-four, it's around four percent. The Department 454 00:25:42,520 --> 00:25:45,040 Speaker 3: of Energy reckons that by the end of the decade 455 00:25:45,359 --> 00:25:48,479 Speaker 3: that number will be between six point five and twelve- 456 00:25:48,640 --> 00:25:51,760 Speaker 3: ish percent, which is quite significant. Now, the 457 00:25:51,760 --> 00:25:55,320 Speaker 3: reason it's significant is that since two thousand and four, 458 00:25:55,359 --> 00:25:57,959 Speaker 3: the US has not really increased the amount of electricity 459 00:25:57,960 --> 00:26:02,919 Speaker 3: it uses, and has very, very largely sort of underinvested in its grid 460 00:26:03,160 --> 00:26:07,720 Speaker 3: and its energy-generating capacity compared to China, which, as you say, 461 00:26:07,760 --> 00:26:10,400 Speaker 3: has historically used coal, but now essentially everything that's brought 462 00:26:10,400 --> 00:26:14,200 Speaker 3: on stream is solar. And so there is this concern 463 00:26:14,760 --> 00:26:18,000 Speaker 3: that even if you've got the algorithms, and even if 464 00:26:18,040 --> 00:26:20,840 Speaker 3: you put the algorithms in products, if you can't run 465 00:26:20,880 --> 00:26:24,840 Speaker 3: those products and those algorithms on enough computers because you 466 00:26:24,880 --> 00:26:28,400 Speaker 3: can't get the power to them, you can't serve businesses 467 00:26:28,400 --> 00:26:31,880 Speaker 3: with their AI needs. And so that's been a major concern. 468 00:26:32,000 --> 00:26:35,040 Speaker 3: And then that comes into the second concern, which is, well, 469 00:26:35,119 --> 00:26:36,960 Speaker 3: even if you can serve them with the energy they need, 470 00:26:37,080 --> 00:26:39,960 Speaker 3: what are the environmental implications of all of that? So 471 00:26:40,440 --> 00:26:42,479 Speaker 3: there is a sense that there's an amber warning light, 472 00:26:42,480 --> 00:26:45,680 Speaker 3: perhaps not a red warning light, you know. My own 473 00:26:45,920 --> 00:26:48,960 Speaker 3: sense of this is that it's actually a really good 474 00:26:49,000 --> 00:26:54,000 Speaker 3: thing that there is a demand for new electricity sources 475 00:26:54,560 --> 00:26:58,040 Speaker 3: coming into the US market after such a long period 476 00:26:58,080 --> 00:27:00,840 Speaker 3: of low investment, because any advanced economy is going to 477 00:27:00,840 --> 00:27:03,040 Speaker 3: need electricity. So I think in general it's quite a 478 00:27:03,040 --> 00:27:06,720 Speaker 3: good thing to have this strong demand signal come in 479 00:27:06,760 --> 00:27:09,800 Speaker 3: from the AI data centers. But I think it does 480 00:27:09,880 --> 00:27:13,439 Speaker 3: create a small risk, which is, for want of a 481 00:27:13,520 --> 00:27:17,840 Speaker 3: grid connection, the AI opportunity was lost. And there is 482 00:27:17,840 --> 00:27:20,400 Speaker 3: that risk. It's one of the things that the new 483 00:27:20,520 --> 00:27:24,439 Speaker 3: administration has to figure out: what are the levers it 484 00:27:24,480 --> 00:27:29,800 Speaker 3: can pull to unblock US firms' ability to build and 485 00:27:29,960 --> 00:27:31,440 Speaker 3: power these AI data centers.
486 00:27:32,040 --> 00:27:36,119 Speaker 1: So, speaking of the new administration, there was a fascinating 487 00:27:36,160 --> 00:27:40,200 Speaker 1: conversation that Ezra Klein had last week with Ben Buchanan, 488 00:27:40,280 --> 00:27:43,840 Speaker 1: the kind of lead AI advisor to the old administration, 489 00:27:44,160 --> 00:27:47,560 Speaker 1: which I'm sure you followed. The discussion centered around kind 490 00:27:47,560 --> 00:27:51,560 Speaker 1: of AGI and whether it's coming, and in the context 491 00:27:51,560 --> 00:27:53,320 Speaker 1: of that, there was a lot of discussion about competition 492 00:27:53,359 --> 00:27:55,679 Speaker 1: with China. So on the first one, where do you 493 00:27:55,760 --> 00:27:58,639 Speaker 1: stand on the whole will-they-won't-they on AGI 494 00:27:58,680 --> 00:27:59,639 Speaker 1: in the next couple of years? 495 00:28:00,440 --> 00:28:04,600 Speaker 3: Well, let's start with, what do people mean by AGI? Right? 496 00:28:04,800 --> 00:28:07,520 Speaker 1: I think the definition Buchanan was using was basically 497 00:28:07,600 --> 00:28:11,160 Speaker 1: doing most human tasks better than humans, like replacing desk 498 00:28:11,240 --> 00:28:13,439 Speaker 1: workers, was his kind of framework. 499 00:28:13,960 --> 00:28:19,280 Speaker 3: Yes, that's sort of somewhere between where Demis Hassabis, who's 500 00:28:19,320 --> 00:28:22,840 Speaker 3: the boss of Google's DeepMind group, and Sam Altman, 501 00:28:22,880 --> 00:28:24,960 Speaker 3: who's the boss of OpenAI, sit. I mean, Sam's 502 00:28:24,960 --> 00:28:30,399 Speaker 3: phrase is systems that outperform humans at most economically valuable work. 503 00:28:30,880 --> 00:28:35,800 Speaker 3: By that definition, we're already getting systems that improve the 504 00:28:35,840 --> 00:28:40,560 Speaker 3: quality of human work significantly, and we already have systems 505 00:28:40,920 --> 00:28:45,880 Speaker 3: that achieve the same output with much smaller teams, because, 506 00:28:46,640 --> 00:28:49,360 Speaker 3: you know, answering support tickets is something that these chatbots 507 00:28:49,360 --> 00:28:52,400 Speaker 3: can do very, very well. If you look at the curves, 508 00:28:53,000 --> 00:28:56,760 Speaker 3: by which I mean the performance curves of AI systems, they 509 00:28:56,800 --> 00:29:01,760 Speaker 3: are sharply trending upwards. Does that do all the work 510 00:29:01,760 --> 00:29:05,400 Speaker 3: of a desk worker? So I slightly disagree with it, 511 00:29:05,480 --> 00:29:09,800 Speaker 3: because I still have to direct the machine, I still 512 00:29:09,840 --> 00:29:12,840 Speaker 3: have to judge the output, I still have to use intuition 513 00:29:13,240 --> 00:29:16,040 Speaker 3: to connect things that I wasn't able to frame in my question 514 00:29:16,520 --> 00:29:19,200 Speaker 3: with the results that come out of it. Ultimately, I'm 515 00:29:19,240 --> 00:29:21,920 Speaker 3: the principal who makes the decision in the business, so 516 00:29:22,840 --> 00:29:26,840 Speaker 3: I look at them as tools that largely augment. But 517 00:29:26,960 --> 00:29:30,320 Speaker 3: it's really also very, very clear that there are lots 518 00:29:30,320 --> 00:29:34,160 Speaker 3: of jobs where the augmentation is going to turn 519 00:29:34,200 --> 00:29:36,520 Speaker 3: into a replacement.
And I think that, you know, you 520 00:29:37,000 --> 00:29:39,000 Speaker 3: see that happening in customer service teams, right? You have 521 00:29:39,000 --> 00:29:41,640 Speaker 3: teams of one hundred; turns out, with the AI, you 522 00:29:41,680 --> 00:29:43,520 Speaker 3: can have a team of ten or a team of 523 00:29:43,560 --> 00:29:47,920 Speaker 3: twenty that does the same job. So, timing-wise, I 524 00:29:48,040 --> 00:29:53,640 Speaker 3: expect the rate of improvement of these systems to continue. 525 00:29:54,000 --> 00:29:56,040 Speaker 3: I think what Manus showed us, where we started 526 00:29:56,040 --> 00:29:59,000 Speaker 3: our conversation, was that you don't need to build a 527 00:29:59,040 --> 00:30:01,640 Speaker 3: new model to get a really, really great output and 528 00:30:01,680 --> 00:30:05,440 Speaker 3: an improved output. And I sometimes wonder whether AI researchers 529 00:30:05,480 --> 00:30:08,040 Speaker 3: think the average human and the average desk job 530 00:30:08,520 --> 00:30:11,960 Speaker 3: is at the level at which these double PhDs operate, 531 00:30:12,000 --> 00:30:14,680 Speaker 3: and that's just not true. Right? In most businesses, we're 532 00:30:14,720 --> 00:30:16,880 Speaker 3: not thinking like that. If you could get a machine 533 00:30:16,880 --> 00:30:19,800 Speaker 3: that can come up with the next new theory of physics, 534 00:30:20,200 --> 00:30:24,680 Speaker 3: we all benefit. But in reality, we don't need that 535 00:30:24,760 --> 00:30:26,760 Speaker 3: level of thinking most of the time, right? We actually 536 00:30:26,760 --> 00:30:28,760 Speaker 3: need a much more prosaic level of thinking. And frankly, 537 00:30:28,920 --> 00:30:31,280 Speaker 3: I'd much rather that my barber doesn't have a Nobel 538 00:30:31,360 --> 00:30:34,040 Speaker 3: Prize in physics. I'd rather he's just very good with 539 00:30:34,080 --> 00:30:34,880 Speaker 3: a razor blade. 540 00:30:35,640 --> 00:30:44,400 Speaker 1: Thank you so much, Azeem. My pleasure. That's it 541 00:30:44,520 --> 00:30:47,360 Speaker 1: for this week for Tech Stuff. I'm Oz Woloshyn. This 542 00:30:47,440 --> 00:30:50,680 Speaker 1: episode was produced by Eliza Dennis and Victoria Dominguez. It 543 00:30:50,760 --> 00:30:53,920 Speaker 1: was executive produced by me, Karah Preiss, and Kate Osborn 544 00:30:53,920 --> 00:30:57,920 Speaker 1: for Kaleidoscope, and Katrina Norvell for iHeart Podcasts. 545 00:30:58,000 --> 00:31:01,480 Speaker 1: Heath Fraser is our engineer, and Kyle Murdock mixed this episode 546 00:31:01,720 --> 00:31:04,680 Speaker 1: and also wrote our theme song. Join us Wednesday for 547 00:31:04,760 --> 00:31:07,320 Speaker 1: Tech Stuff: The Story, when we'll share an in-depth 548 00:31:07,320 --> 00:31:12,080 Speaker 1: conversation with Astro Teller, the Captain of Moonshots at Google X. 549 00:31:12,920 --> 00:31:15,640 Speaker 1: Please rate, review, and reach out to us at tech 550 00:31:15,720 --> 00:31:19,160 Speaker 1: stuff podcast at gmail dot com. If you're enjoying the show, 551 00:31:19,240 --> 00:31:21,520 Speaker 1: it really helps us and helps others discover it if 552 00:31:21,560 --> 00:31:23,680 Speaker 1: you subscribe and leave a comment. Thank you.