Speaker 1: Bloomberg Audio Studios: Podcasts, Radio, News. On our current path, this handful of people will decide what the future of AI is, and the best way to counterbalance that is to have a vision that's different and hopefully better for society.

Speaker 2: Welcome to Trumponomics, the podcast that looks at the economic world of Donald Trump, how he's already shaped the global economy, and what on earth is going to happen next.

Speaker 2: Well, we talk about the big forces affecting our economy and the broader world on this show, and there's no bigger topic in economic policy and general discussion these days than the impact of AI. We've talked at different times about the consequences for jobs, inflation, interest rates, and whether policymakers, let alone ordinary people, are ready for any of it. Well, my guest this week, Daron Acemoglu, famously takes the long view on these matters. A recipient of the Nobel Prize for Economics in twenty twenty four, he probably wrote the single most widely read book of economic history of recent times, Why Nations Fail, and more recently he wrote, with Simon Johnson, Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity. I talked to him about that book and the lessons for this AI revolution a while back, in the summer of twenty twenty three, but given everything that's been going on, I wanted to have him back to see whether anything in the new waves of AI we've had since then, particularly in the last few months, had made him change his view on either how fast this technology is going to change our economy or how well placed we are to get the best out of it. Daron, thanks very much for coming back on to this podcast.

Speaker 1: Thank you, Stephanie. It's my pleasure to be here.

Speaker 2: We will get into some of the key dimensions of this in a minute, but I should just get a sense from you.
I mean, we're talking now in mid-March twenty twenty six. There's been so much chatter, and I suspect many people listening have had their own real-life experience now of the development of all the different forms of AI, and particularly the sort of agentic AI that we talk about. Are you, in a very broad sense, reassessing how fast or how fundamentally this is going to change our world?

Speaker 1: Yeah, every day. I think the underlying technology is changing faster than what I would have predicted, what many would have predicted, two and a half years ago. So, especially with the recent developments in agentic AI, especially led by Anthropic, there is a real possibility that these tools can be broadly useful in what people do. There is still a lot of uncertainty, however. First, we are not seeing any of the prepackaged, easy-to-use, reliable applications (think of the Microsoft Words or Microsoft Offices of AI) that can be used across a broad range of occupations, or in some specific occupations. Those are not around yet. There is still uncertainty about whether there will be bottlenecks in reaching higher reliability and higher judgment. There is every indication that there is a lot of rapid progress, but there are some weaknesses in these models that are still persistent, and I don't just mean hallucinations, but a lack of deep understanding. They don't seem to have a conceptual framework, they don't understand the context, and they cannot reason at multiple levels of abstraction about a problem yet. So those weaknesses may be overcome, and I think many of them are going to be important in dealing with edge cases in many occupations. So wholesale automation of occupations is still not something we're going to see right away, but some people swear that we're going to see it in one year or two years, three years. So there's a lot of uncertainty. But let me make one thing clear.
If we do not up our game about both how we regulate these models and how we actually develop them, there could be a huge amount of damage to society.

Speaker 2: I think that's very helpful, because there are two elements of this where there's obviously, as you say, a lot of uncertainty and a wide range of opinion. If possible, I'm going to try and separate them, but obviously they merge into each other. One is this question of the pace of change, how fast companies are really going to be able to change their practices or capture those productivity improvements. And then the second, which you've just highlighted, is how well we are positioned to make the best of this, not just to get all the productivity growth, but to make sure it's actually positive for most of the population, not just a few. And you highlighted in the twenty twenty three book that none of that was automatic in the case of the Industrial Revolution, and we may have to do it much faster this time. Just focusing on this speed question, there's the Citrini report, and there's been a little mini-industry in sort of debunking this research report, which went viral because it captured some elements of the faster pace that we were seeing. I think you wrote a paper, The Simple Macroeconomics of AI. In that debate, I would say you're fairly low-key about the pace of change, the extent of change in any given year. I think you said at most a few percent over ten years, so maybe even a fraction of a percent of overall productivity growth a year. Would you stand by that basic assessment today, or do you think maybe the gains, just the pure productivity gains, could be a bit faster?

Speaker 1: I think they could be a bit faster. There has been faster change in the capabilities of the foundation models. However, it would still require some big breakthroughs, especially at the application layer.
So the bottom line of that paper was to point out how we can get a fairly simple understanding of the constituent parts of the contribution of AI to productivity and GDP growth. And that comes from realizing that the GDP contribution of AI is nothing other than what fraction of tasks are going to be taken over or completely transformed by AI in the economy, times the average productivity gain or average cost savings in these tasks. So that's the calculation that I did with the available evidence in twenty twenty two. But even then, a lot of people took issue with how I interpreted the data, et cetera. So you could boost some of the numbers that I have, which were that about five percent of the whole economy will be taken over by AI within ten years, by twenty thirty or thereabouts, and that that would lead to about twenty-five percent cost savings relative to the labor costs that firms used to spend on the same tasks. Now you can boost my numbers by increasing either or both of these quantities. So you can say, no, no, it's not five percent, it's going to be twenty percent of the economy that AI is going to take over, in which case you would quadruple my numbers. Or you could say it's not twenty-five percent cost savings, but it's going to lead to thirty percent or forty percent cost savings. After all, you know, compute could lead to three hundred percent cost savings because, you know, labor wasn't very expensive anyway. But you see the elbow room to do that kind of thing. You're not going to come up with revolutionary numbers here. And part of the problem here is that right now, and that was the case two years ago and continues to be the case, we are focusing on AI as an automation tool, as a tool to replace workers. That's not the best way of using AI.
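That back-of-the-envelope arithmetic can be sketched in a few lines of Python. The five percent task share and twenty-five percent cost saving below are simply the rough figures cited in the conversation, treated as illustrative inputs rather than estimates taken from the paper itself; the annualization by dividing over ten years is an equally rough simplification.

# Back-of-the-envelope contribution of AI to GDP/productivity:
#   (share of tasks taken over or transformed) x (average cost saving on those tasks).
# Illustrative inputs only, from the rough figures cited in the conversation.

task_share = 0.05        # ~5% of economy-wide tasks affected within about ten years
avg_cost_saving = 0.25   # ~25% average saving relative to prior labor cost on those tasks

decade_boost = task_share * avg_cost_saving   # total boost over roughly a decade
annual_boost = decade_boost / 10              # very rough per-year contribution

print(f"Boost over the decade: {decade_boost:.2%}")   # about 1.25%
print(f"Rough annual boost:    {annual_boost:.3%}")   # roughly 0.1% a year

# "Boosting the numbers": a 20% task share quadruples the result, as noted above.
print(f"With a 20% task share: {0.20 * avg_cost_saving:.2%} over the decade")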
Speaker 1: The best way of using AI is to try to complement workers so that they can do new things, they can perform new tasks, they can increase their sophistication level, and also respond to challenges in the world economy, from globalization, from aging, from climate change, by creating new goods and services, new organizations, and so on. If we do that, I think I would be more optimistic about the future that way. And it's one of the aspects of the wisdom gap that we have right now. We don't know how to regulate existing models, and we are not really focusing on what we can do best with these models. And that's both for productivity and social consequences. So I've just made the productivity case, that we could actually get better productivity consequences. But actually the social consequences are even starker. If we displace people, if we, say, displace twenty percent of the population from their jobs and they remain unemployed or they go to lower-quality, lower-pay jobs, our democracy is not going to survive. We're already struggling to make our democratic system work. Yeah, we're not doing that well. And if we put another huge shock on top of that, I'm not very optimistic.

Speaker 2: So beware. And just thinking about the economics of what you're saying, and also thinking about what captured people's imaginations about that Citrini report. They say themselves this is a thought experiment. There was something kind of gripping about the fact that it was claiming to be a memo written in twenty twenty eight. The claim was you would have an extraordinary amount of change in business models in a very short period of time. I assume you would say, given real-world frictions and just the way things tend to happen, that a two-year timeframe is very unrealistic.
But there was another basic assumption built into that: that the change we will most immediately see, and that will have the biggest impact, will be simply to replace labor, not to augment it, and that to the extent it's creating other stuff, that's going to be far outweighed by the job destruction. Even if you don't accept the timeframe, do you think there is an emphasis on replacement relative to augmentation, or is it genuinely both things?

Speaker 1: I don't know. It's an open question, but my bet would be on the Citrini side, not on the timeframe, but on the path that we are following. There is so little that these companies are doing in order to understand what work humans do and try to be useful to humans. The whole agenda of all of the leading companies in the United States, now joined by DeepSeek in China, is AGI, artificial general intelligence. That is a banner for saying these models are going to do everything better than humans, which of course then leads to the corollary that a lot of companies should just throw away their humans and use these models. That is an automation agenda; that is exactly what Citrini banked on. Now, they then made a number of other assumptions and steps about how that would work out, what its consequences would be, how quickly those would come. Those I don't agree with, but credit to them: they said this was a scenario. They weren't even making a prediction. I don't know why the markets went haywire, given that there was no new information in there. Everything that was in the Citrini report has been said before, and they themselves said there is no research here that's original. But you know, we are living in such fragile times in everything. You know, the valuation of these companies is all based on very fragile assumptions about what they're going to be able to achieve in the future.
If you look at the amount that they're spending and their valuations, this can only be justified if they make something like a trillion dollars in revenues in the foreseeable future as an AI industry. I mean, that's just incredible. How are you going to get to a trillion dollars there? They're hardly making a couple of billion dollars right now as a whole industry. So there is, you know, a lot of glass in this house.

Speaker 2: Okay, I have to say that I'm slightly depressed by that answer, because I thought you were going to push back more heavily against the Citrini assessment, though I absolutely know, I know, you have your concerns about our capacity to cope with this. But I thought...

Speaker 1: Let me give you some of the pushback as well. I mean, that's the point I want to make throughout: there is the potential to use AI not for automation. That's what I keep emphasizing. But I also want to push very hard against the assumption that either we are going there already (no, we're not) or that we can get there automatically (no, we cannot), because all of these companies have this business model of just "let's replace all the workers." They haven't even put into their calculations much of a revenue stream that they can get from complementing workers, because that's just a very difficult thing to monetize. So I think that's where our wisdom gap is. We are not even wisely thinking about what we should be doing with these very capable models, and the industry is going in its own direction.

Speaker 2: You have just done a paper with two of your colleagues for Brookings that is trying to give some concrete advice to policymakers in this area, to answer specifically that question. I do want to get to that. I just want to ask quickly about one thing, just to make sure we cover it coming out of this conversation.
One of the things that we see a lot in this, particularly if you look at the research studies in this area, is that they tend to look at the range of occupations and talk about their degree of exposure, quote unquote, to AI. And there's a whole range. I think in your original assessment, and a lot of people use this, they sort of thought it was about twenty percent of occupations, and obviously some people have higher numbers. If one's thinking about different kinds of AI and the potential of different kinds of AI, how should we read those? Are they just going on the way that businesses are looking at it now, which, as you pointed out, is very much as a labor-replacement technology? Should we see that as exposure to replacement, or should we see it as something more sophisticated?

Speaker 1: Great set of questions. There are really three questions you're asking here, Stephanie. Let me answer each one of them in turn. One is, what does this AI exposure mean? And in general it is an ill-defined concept, because you could be exposed to AI because you can lose your job with AI, or you could be exposed to AI because you could use AI to increase your contribution to your job.

Speaker 2: And we see that in companies' exposure as well. And investors can't decide the difference between those two either.

Speaker 1: That's why they fluctuate between devaluing software companies and giving them a huge boost. So that's the first problem with AI exposure. So when I wrote my paper, I took a position similar to the Citrini report, and I said, right now we're going towards automation, so let me focus on that. Second, where does that twenty percent number come from? So, roughly speaking, think of it this way: right now, and I think in the near future, AI is pretty useless in jobs that involve a huge amount of interaction with the physical world: construction, custodial work, manufacturing work, work that involves home care, hairdressing.
The reason being that we are very far behind in robotics, but also AI models themselves don't have a good conceptual understanding of spatial causal relations, so that even if we had fantastically flexible robots that could cut your hair or hold your hand, AI models would continuously make mistakes about spatial causal relations, and those unreliabilities would end up breaking your neck. So let's eliminate those jobs. I've also eliminated, again based on other people's coding, any jobs that include a high degree of judgment. So we wouldn't want AI to run air traffic control. So Stephanie, think about it yourself. If Manchester Airport said, from now on we're not going to have any air traffic controllers, everything's going to be done by AI, it might hallucinate, it might make some mistakes, but that's fine, it's cost savings: would you fly to Manchester Airport? So we don't want that. So those jobs are out, and any job that involves a high degree of social interaction is out as well. So that leaves essentially a range of office cognitive jobs. So that's where the twenty percent comes from. Now, what about companies? Companies are indeed going after those jobs. They're going after IT jobs, they're going after back-office jobs. But there are several new papers that have come out over the last few months, and they all find the same thing. The companies are talking a big game about AI. They say, oh, we have a lot of AI being used, but it has so far had zero impact on the companies, zero impact on employment, zero impact on productivity, because, just like other technologies, it spreads slowly. That's the basis of my numbers. And it's very difficult to integrate AI into what those companies do without big organizational change. And I think when push actually comes to shove, when they try the organizational change, they're going to realize that you cannot really replace IT security people with AI.
You need to use IT security people together with AI, and that might actually give us a boost towards more human-complementary, more pro-worker AI. But we're not there yet, because they're not trying to do that at big scale yet. Now, of course, coding, that's a big advance. Will that change things in twenty twenty six? I don't know. By twenty twenty seven, I'm sure there will be more companies that have attempted to do things, and perhaps we'll have a rude awakening that this is not going to work in the way that we're trying to do it. Perhaps we'll find a new direction. But this is where both policy and public debate are really important.

Speaker 2: To your point, I think Goldman Sachs added it up. If you just listen to all the earnings calls that companies are doing and chief executives are giving, just to your point about all these companies making claims, I think the average productivity growth that they're claiming is thirty-two percent, but it's not necessarily evident in any of the numbers.

Speaker 2: Let's get on to what policymakers could do about it, because that's something that governments everywhere are obviously very focused on. And I noticed that you had recently done this report for Brookings, I think about a framework for thinking about pro-worker AI. What are the main sort of policy areas that you would like people to focus on for that?

Speaker 1: First of all, just two points I want to make before I talk about policy. The first one is just to clarify that by pro-worker AI, I mean exactly the same thing that I was just talking about a second ago: human-complementary AI, AI that helps workers do more, helps workers become more expert in their jobs, perform new tasks, have better information for problem solving, troubleshooting, judgment, and so on. That's what we're talking about with pro-worker, and not just for office workers. We have a lot of examples in the paper showing how manual workers can benefit from AI.
It cannot replace manual workers, but electricians, plumbers, nurses can hugely benefit from having the right kind of AI assistant. But it has to be the right kind of assistant. It's not going to be ChatGPT. So that's the first point. The second point is that my belief is that as important as policy is what we're doing right now, Stephanie: the public debate. Right now, we have delegated the future of this very, very important technology, some would argue therefore the future of humanity, to a handful of people who have no feedback from society, who have no accountability to society, and right now society is confused. So on our current path, this handful of people will decide what the future of AI is. And the best way to counter that is to have a vision that's different and hopefully better for society. And that's what I hope the pro-worker AI vision is. So the more people talk about that, the more the public pressure will grow, and the more of an alternative there will be. Look, my understanding from my limited experience is that Anthropic, Google, OpenAI are filled with people who are very well meaning. If they thought that there was a socially beneficial and still technically exciting area of AI, they would be much more likely to take the plunge in that direction. It's just that we're not offering them an alternative, and society is not pushing back against Sam Altman and his ilk's vision. So that's the point. Policy, in my view, is a supporting set of instruments. It can remove the distortions that exist that solidify the existing system, and it can give a nudge to people, as policy has done in the past, to try new things. So on the first bucket, there are many problems in our current system that would make a redirection of AI in a pro-worker direction more difficult. I would single out two of them, but there are more. The first one is our tax code, and that's true in the UK, that's true in the US.
Our tax code encourages firms to replace workers because we tax capital essentially at zero percent and labor at twenty-five to thirty percent, especially in the US once you add the healthcare costs and all the payroll taxes and everything. So that's a massive subsidy to capital that would make firms adopt automation even if automation wasn't better than humans, because they're getting this subsidy. Second, we know from historical evidence and current evidence that new things are done by new firms. Competition is really important. The tech industry has become one of the least competitive industries in history, and moreover, business models that are new and different are likely to get crushed. So encouraging more competition via antitrust, by enabling new companies to enter and try new things, I think that's a very important part of it. Now, there is a lot of energy in Silicon Valley, but it's all these startups that try to do exactly what OpenAI and Anthropic and Google do, so that they can be bought by them. So that's not the kind of competition I'm talking about. And then, in terms of nudging us to do new things, the government is horrible, in my opinion, at being an entrepreneur. It cannot be a venture capitalist, it cannot be an entrepreneur, it cannot be an innovator, but it has great potential to be an inspiring leader. We have had so many examples where a small amount of money from the government has kickstarted industries, in nanotechnology, in the internet, in robotics. It was the Robotics Challenge, a million-dollar challenge, that really focused people's attention on getting robots that could actually play a game. So we could do the same with pro-worker technologies. We have given several examples of technologies that are very feasible but are not getting much investment. A few of them are getting some investment from smaller companies.
You can come up with another ten, fifteen examples, and the government could run an easy competition in these kinds of technologies to focus the mind and create the demonstration effects that would then say to people, wow, you know, we could do this in other industries and other occupations as well.

Speaker 2: Just thinking about what you've just said, and the paper you wrote for Brookings is trying to encourage us to think about AI policy in a different way. I can't recall exactly what it was called, but it was the AI Action Plan or something that the Trump administration brought out, and the way that we describe it generally, but particularly when we're talking about China and the US, AI policy is all about how to get there as fast as possible, how to make sure, especially in the US, that we win the race. And there's quite a lot of focus on sort of privacy and concerns around that, and maybe concerns about the pace of adoption, and that's obviously the gap that you're trying to fill, but it doesn't feel like there's much about how to make this work for people. And I'm sort of struck, because we had the Chancellor, Rachel Reeves, the UK Finance Minister, on the show a week or two ago, and you know, I think one of the things that she's thinking about is, we're not going to lead the AI race in the UK, but we have said a lot about leading on adoption. And I guess the piece of that that you would add is, you've got to adopt it in a pro-worker way. I mean, what would that look like for the UK?

Speaker 1: Let me first say that the paper that you're referring to is actually co-authored with David Autor and Simon Johnson, so let me give a shout-out to them as well. Secondly, I think you're absolutely right.
While you could give some credit to the Trump administration for emphasizing AI, they are really rudderless. Their only shtick here is that this has to be an American technology, and we have to race, and we have to get rid of all regulations. That's not a coherent AI policy. But I also fear it's even worse than that, and it's worse in the following way. This AGI, winner-take-all framing is having truly pernicious effects on US-China relations, because once you are in this mindset that you are locked into this existential race for AI supremacy with China, it means that there's no room for collaboration with China on anything, because they are your mortal enemy, because if they get to AI supremacy before you, they are going to destroy you. That's completely false. Models are not going to be at a level where they can just give you global supremacy by themselves, and there are many other things that we can do with AI. In fact, now coming to the UK: China and Germany are doing more interesting things with AI than the US in some domains. Sure, the US has the unrivaled leadership in large language models and foundation models, but I think the real gains from AI, as I hinted at the beginning of our conversation, will come from using AI in applications, in manufacturing. Healthcare I think is huge, but manufacturing is going to be easier. And who is leading the efforts to put AI into manufacturing? It's China. It's Germany, even though they have no LLM industry, because they have the manufacturing know-how, they have the data, and they are not beholden to this AGI race, so they're trying to do more practical things. I think that's the space in which the UK has to be now. Unfortunately, the UK doesn't have much manufacturing left, but I think for the remaining manufacturing and other applications, that's where the UK can have a leadership role, because Germany is so far behind; Germany shouldn't have a leadership role.
China, of course, is going to have a leadership role, but the UK can have a leadership role once there is a broader scoping of what it is that we can do with AI. And if we actually manage that, it will have beneficial effects for global balances, because once you get out of this trap of "winner takes all, we cannot collaborate on anything with China," well, we have so many global problems: global peace, all the societies that are aging and require adjustment, climate change, pandemics. There is so much that we actually need to collaborate on with China, and if in fact China makes breakthroughs in applying AI to manufacturing, the US should learn from them. So there should be information sharing in AI as well.

Speaker 2: I'm going to run out of time, but I had a couple more questions, and one is following on from what we were saying about how a country could position itself that's not trying to be in this kind of existential race that the US has positioned itself in. The other conversation you hear a lot in the UK, and I mentioned it to the Chancellor the other day, is, you know, that professional services are successful in the UK. We still have some advanced manufacturing, but our strongest categories tend to be, along with the creative industries, professional services: legal services, accounting, finance, all those things. They seem to be particularly in the frame when it comes to AI, at least in the discussion, and there has been, I know, a government concern that means, if we want them still to be leading sectors, they have to be leading in adoption, and we have to make sure there are no regulatory or data privacy obstacles in the way of that. I mean, I guess that raises the question: in the race to AI, you could actually be making the institutional setup worse, not just failing to make it better.

Speaker 1: Right, you've given me an opening to talk about another one of my topics, which is data.
So yes, indeed. If your objective was to pour as much money into AI as possible and get rid of all short-term obstacles to AI, you would get rid of privacy and you would allow AI companies to capture as much data as they want, freely. That would be, and that has been, the worst idea you can imagine. It's actually worse for the industry. The future of the industry depends on data. Data is going to be more important for our future as an economy, as a society, than land. Can you imagine if we said, any piece of land you want, you can take it? That would be just chaos. But that's how we treat data, and that is actually bad for the industry, because it creates a tragedy of the commons where everybody's exploiting data and nobody is investing in data. Especially if you want to do useful things with AI, like the pro-worker AI that I was mentioning, you need a lot of high-quality use cases. We can do pro-worker AI to help teachers, to help nurses, to help electricians. How are you going to do that? Well, you need to train these models on basic knowledge, but you also need to train them on use cases from the most experienced workers in that field, working with edge cases, difficult cases, and they're not going to produce that data unless you pay them. So the current environment, where we say privacy doesn't matter, you should give as much data as possible to these companies because they're data hungry, is actually destroying the future of the industry, because these models are going to run out of high-quality data, they're going to be trained on low-quality data, and they're going to be more likely to create AI slop rather than the kind of high-quality, reliable AI that we need across a range of occupations.
Speaker 2: I guess just coming back to sort of where we started, in the sense of the perspective of your twenty twenty three book. One of the features of your and Simon's book was the comparison with the Industrial Revolution, and making the point that although we tend to say, oh, it was fine, we ended up with the productivity and it made everyone better off, you were pointing to how long the transition lasted, how incredibly costly it was for people, and how much it required active effort to manage it and have a better outcome for people. One of the big differences, it seems, between that Industrial Revolution and what we may see now in the next few years with AI is that the workers in the frame are fundamentally white-collar workers. And in fact, Dario Amodei has talked about half of entry-level white-collar work. It's quite a safe number, because he talks about this and then you could change all the definitions, but half of entry-level white-collar jobs will be gone in five years. Does it fundamentally change the challenge for policymakers, and even the sort of short-term macroeconomic impact, if the main workers affected are white-collar workers, possibly some of the better-paid, greatest-consuming members of society?

Speaker 1: Well, first of all, yes, indeed, there's a lot of uncertainty about what that impact is going to be, but it's true that it's going to be on white-collar workers more than manufacturing workers, for the reasons that we talked about: these models cannot do physical work or cannot be combined with physical work yet. Now, white-collar workers are college educated, our leaders are college educated, so their plight might have a bigger impact on the political system than the plight of, say, high school graduate or high school dropout workers did in the United States or the UK in the nineteen eighties, for example. So that's a possibility.
The second important issue is the Industrial Revolution itself. Indeed, and this is very important, because you hear this sort of rosy view of the Industrial Revolution from Silicon Valley all the time, that everything worked out well: it took about one hundred years of pain and suffering before things started getting better. Well, we don't have that kind of time. Our democracy wouldn't survive, and AI is advancing far too rapidly, so our political system needs to be much better and much faster at redirecting things and adjusting to things. So I think those are very important points for us to remember. But finally, I think it's also very important to recognize that the impact will not stop with white-collar workers, because if college graduates cannot get the jobs that they want to get, they're going to go and compete for other jobs. They're not going to stay at home. They'll create downward wage pressure and job displacement risk for other people, or they will all be pulled into sort of gig work, which then creates all sorts of other problems for the economy and for the labor market. So it's a systemic problem for the labor market as well.

Speaker 2: Okay, I'm not sure that that was the most uplifting place to end, but it's been a bracing but profoundly illuminating and clarifying conversation for me. Daron Acemoglu, thank you so much.

Speaker 1: Thank you, Stephanie, it was great to talk to you.

Speaker 2: Thanks for listening to Trumponomics from Bloomberg. It was hosted by me, Stephanie Flanders. Trumponomics was produced by Samasadi and Versus, and sound design was by Blake Maples and Aaron Casper. To help others find the show, please rate and review it highly wherever you listen.