Scott Lanman: From self-driving cars to robot-powered factories, artificial intelligence is taking over significant pieces of the global economy. This is great for the businesses embracing AI, but there is a downside: more robots in the workforce also means more people losing their jobs to computers. So how bad will the robot revolution be, and how will it reshape the global economy? Welcome to Benchmark. I'm Scott Lanman, an economics editor with Bloomberg News in Washington. Returning as a guest co-host is my colleague Chris Condon. He's a reporter covering the Federal Reserve and the U.S. economy, also here in D.C. Chris, glad to have you back.

Chris Condon: Happy to be here.

Scott Lanman: So this week we're talking about the rise of the robots and how they will impact the economy. As someone who writes about the Federal Reserve, Chris, what interests you in AI?

Chris Condon: Well, I can tell you, Scott, that artificial intelligence actually has great capacity to transform one of the biggest jobs of central banks. As you know, central banks are trying to regulate economies, keep them in that Goldilocks zone, and they do this through short-term interest rates.
Chris Condon: But that tool has a lag to it. It takes a good six to eighteen months before a move in interest rates has an impact on the real economy. So these central bankers have to forecast what the economy is going to be like a year from now, eighteen months from now. But they're not very good at forecasting the future when it comes to things like inflation, unemployment, and GDP growth. So some central banks are starting to turn to artificial intelligence for a little help, because artificial intelligence can be used quite effectively at spotting patterns in past events and then using that to predict the future. It's going to take a while, but it's an area of great promise for economics.

Scott Lanman: It's not just economics; it's business and the broader economy as a whole. Forecasting, prediction: it's a really big part of what AI is meant to do. And our guest is here to talk about that today. His name is Joshua Gans, and he's a professor at the University of Toronto's Rotman School of Management.
Scott Lanman: He's one of the authors of a brand-new book called Prediction Machines: The Simple Economics of Artificial Intelligence, just published by Harvard Business Review Press. Joshua, thanks for joining us.

Joshua Gans: Good to be here. Thanks.

Scott Lanman: So we've already sort of alluded to this, but why is the book called Prediction Machines?

Joshua Gans: Well, it's called that, and not some more interesting title such as "Your Wonderful Great New Robot and How Your Life Is Going to Get Better," simply because the recent developments in artificial intelligence are not about completely replacing human intelligence per se, but are actually all about one thing, and that is prediction. That is, taking a whole lot of information that you do have and converting it into information you do not have. That's very, very different from something that makes all of your choices for you. It really just does prediction.
Chris Condon: And when we talk about this prediction function and getting all this information, can you bring us into the real world here? What kinds of industries or jobs would be most likely to benefit, or have the most room to benefit, from this kind of additional knowledge?

Joshua Gans: Well, prediction is something that, in the business you're in, you think of mainly as forecasting economic variables, for instance, and it's something I've worried about as well. But what's really interesting about these new developments is that they highlight problems that turn out to be prediction problems. For instance, the whole issue of having a digital image served up to you and knowing what's in it is a prediction problem. Invariably, when you get an image from the internet, the label attached to it is the label that a human would attach to it. And so basically what Google is doing when you're searching for a picture is predicting which pictures correspond to that label. So that's a form of prediction.
Joshua Gans: And it turns out that that, and language translation, and machines understanding human speech, and even self-driving cars, are all mainly prediction problems, and so that is where this new artificial intelligence is being implemented. You wouldn't normally call those prediction problems. We normally think about forecasting the weather or forecasting an earthquake or something, but basically prediction is all around us.

Chris Condon: Joshua, I think it's clear that this can have all sorts of benefits in our lives and in the economy, but it can also have some downsides, particularly in how these things are distributed throughout our society. Maybe a big question first about the benefits: where do you see those, societally and also economically?

Joshua Gans: So I think the benefits come wherever you think it would be good to know things with greater certainty. Any kind of decision that you're making under uncertainty is going to benefit from having better prediction. You can imagine that being applied, for instance, when you're trying to manage inventory.
Joshua Gans: If you've got better predictions regarding the demand you're going to face, you're going to be able to manage that inventory better. You're going to be able to correctly adjust your behavior to prevent shortfalls or, worse, to prevent surpluses that end up overstocking your inventory. Those are the sorts of places where prediction machines are going to work quite well. And so basically anywhere where there's decision making being done and there's uncertainty, there is room for better prediction, and the machines might serve that up.

Chris Condon: Right, and that sounds, of course, like it will make our companies more efficient, more productive, but I also think it will disrupt how companies operate and may disrupt people's lives. Do you think this time is going to be any different? We've had lots of technological disruption in our economy over the decades, in fact, over centuries. Is the pace of change these days going to be more disruptive and more problematic to adjust to?

Joshua Gans: I think there's a chance. As usual, whenever you've got a very large and radical innovation occurring and being adopted, there is the potential for disruptive change.
Joshua Gans: But the hard part is trying to predict exactly where that will be. When we look back and think about the rise of the automobile, it's no surprise that we point to horses as being the disrupted workers in that equation. When it comes to things like better prediction, however, it's far subtler. Some of the discussion that's going around says, why is this time different? Because it's the first time you really have machines taking over cognitive tasks. Well, that's not true. We've had machines take over cognitive tasks with computers, and most of us seem to still be gainfully employed, and the adjustments that took place have been worked out. This time, okay, the computers are doing much, much more; they're doing much more in terms of thought. Yes, but as we identify in the book, the one thing that computers can't do is set goals. You still have to have a human, what we call human judgment, to set the goals, the trade-offs.
Joshua Gans: No prediction is going to be perfect, so you have to work out how you're going to stomach errors and things like that. Those are still roles for people. It's only where better prediction was the final step toward full automation that you might see jobs actually replaced.

Scott Lanman: Speaking of not getting to full automation, one interesting example you discuss a little bit in the book is that of doctors and how their jobs might change under advances in AI, in that they just won't have to do the diagnoses as much anymore. The computer is going to do it for them, and their jobs are going to substantially change. Can you talk about that a bit?

Joshua Gans: Well, at a conference a couple of years ago, Geoff Hinton, who is one of the pioneers of the new developments in AI and now works part time at Google, basically said to the conference, "Well, I think we should stop training radiologists."
Joshua Gans: Now, what he meant by that was his view of what a radiologist does: look at images and then decide, you know, is there a problem or not that requires further treatment. Obviously, prediction machines have the capability, and have been demonstrated in some settings, to be far superior to people at looking at those pictures and identifying exactly what's going on, with a far greater degree of accuracy. So if that's your view of what a radiologist does, well, it sounds like curtains for them. But actually radiologists have been dealing with these sorts of issues for fifty years, and they're quite aware of them. As technology improves, their jobs change. What happens with radiologists is that it's not some simple choice where you get a prediction and you know exactly what to do. You're right, if that were the case, then anyone could just look it up in a manual and you wouldn't need a radiologist. But invariably there are other factors, other criteria, and in particular the personal dimension of the patient's situation, that will also have an impact on what the treatment decision actually is.
Joshua Gans: So prediction is an important input into that, but there are other factors going on, and we're a long way off being able to automate all of that. And thus far, the evidence is that you make radiologists actually better at their jobs by giving them these prediction machines. They are able to act with more certainty and therefore come up with more confident recommendations. This allows them to save some time and develop other skills, and it may change the allocation of tasks between radiologists and other medical practitioners quite a bit. So I don't necessarily see it as obvious that those jobs are going to be disrupted as quickly as some of the engineers do.

Chris Condon: What kind of jobs, Josh, would you say are the least vulnerable to being replaced, or even disrupted, by this kind of technology?

Joshua Gans: Well, that's a really interesting question. You know, when confronted with this, we tend to point to all the important things that we do and how they can't be replaced by a machine.
Joshua Gans: For instance, some of the jobs that people talk about as being hard to replace with machines are ones that require emotional input. Interestingly, Danny Kahneman, the Nobel Prize-winning economist who has been responsible for behavioral economics and for thinking about decision making and judgment in the context of psychology, spoke at a conference here just last year, and his answer was equivocal. He sees humans as ultimately very flawed and doesn't see any reason why machines wouldn't take over. His view was: do you really want your care to be managed by disinterested children when you've become elderly, or would you rather have a robot who's been trained to work out exactly what your needs are? So it's very hard to work out what is safe and what isn't. All we can say right now is that the tools in their current instantiation, and what we're going to see probably over the next five and maybe ten years, are all about that prediction function, which leaves a lot of room open for people.

Scott Lanman: Speaking of jobs that are likely safe from AI disruption, I think Chris will appreciate this, being in Washington, D.C.
Scott Lanman: The federal government, Congress, et cetera: those will probably be pretty safe from disruption for a long time. So if we ever get some disruption in that sector, that would be something definitely to watch. But Josh, I wanted to turn to another issue that you discuss when you're talking about economic impact in the book. It's really intriguing that you talk about inequality and how it would evolve if AI were to take hold in the economy. Why could that get worse with the rise of AI?

Joshua Gans: Well, as usual, it's difficult to forecast these things; you should never trust economists on those sorts of big-trend forecasts. But the concern that we have is that certain skills become more valuable and other skills become less so. And one of the problems is that if you wanted to be the sort of person who could take advantage of AI, it's invariably not going to be a skill that is routine. Once you have the sort of job where you can look at a prediction and know exactly what to do, your place in that is devalued.

On the other hand, there are some situations where it's going to take a bit of art to understand what the prediction really is. We don't tend to think of machine learning in computer science as art, but there are so many variables that the engineers have to control and think about that it does tend to have that quality to it. And similarly, how we use those tools also has a bit of artistic flair to it. By that I mean that it's not at all clear why someone who's more productive is more productive, but somehow they get more out of it, and the people who are able to do that are probably going to fare better. The one concern we have is that when I'm a person using an AI tool, I can use it at a much larger scale and make decisions for many more people, across a greater domain. In that regard, I'm sort of made a superhuman. But you can only have so many superhumans, and that's where we might worry about inequality.
Chris Condon: Josh, talk to us a little bit also about this idea explored in your book of how artificial intelligence might actually contribute to the concentration of certain industries in our economy, and even aid the creation of monopolies.

Joshua Gans: Well, I think this is something that has occurred with many digital platforms, especially ones that work better at larger scale. The current AI runs on machine learning, and I have to emphasize the learning part. You get better by putting your machines out in the field and continuously adjusting them with new data and new learning. And so, what that means is that a company such as Google, which has a lot more people using its search engine, is able to use AI to improve that search engine at a greater rate than competitors such as Bing and DuckDuckGo. So in that regard, you could have the preservation of dominance, or the emergence of dominance. Similarly, this might hold for companies like Facebook, and it might hold to a degree for companies like Apple and Amazon as well.

They just have a lot more activity, and so their potential to use AI to learn at a faster rate might be there. But as with all of these things, sometimes these firms do get into a rut, and sometimes people find better ways of learning and doing things that might initially perform worse but have a better trajectory. So it's not a given that the current monopolies, or the current large companies, will be the future monopolies or anything like that; we might see new ones develop on the basis of new tools. That has happened every other time we've had a large technological revolution, and I'd expect it to occur this time too.

Scott Lanman: Just taking a broader view of competition, you also get into which country might be the dominant force in AI, if there is a dominant force. The US already has a clear lead, and yet there's a lot of activity going on in China. How do you see China ascending and competing against the United States in AI?

Joshua Gans: Well, this is where being able to have access to the right sort of data comes into play, and not just data but also talent.
Joshua Gans: So let's talk about data first. The United States is sort of a middle ground in terms of privacy regulation. Europe is far more stringent, but in China there's none at all, and so Chinese firms have the ability to appropriate and use consumer data to develop AI at a much greater rate than you would be able to do inside the United States. So there's a benefit there. Secondly, there's the issue of capabilities. At the moment, AI resources are thin on the ground; it's hard to hire engineers, and they command six- or seven-figure sums. But the United States at the moment is cutting itself off from the global pool of talent in machine learning, and this is something other countries aren't doing. Not only are they not cutting themselves off, they're also providing resources. China is spending several billion dollars on a technology cluster in this area, Russia is doing the same, and even in countries like Canada, the government is actively supporting the development of new superclusters in the space.
Joshua Gans: That's going to help attract talent from around the world, because talent wants to be able to work, and I think that's another area where the United States faces some risks.

Chris Condon: Let's end this interview with the existential question that you address, or try to address, in your book. We've all seen our share of science fiction movies about the rise of the robots; Terminator 2 looms very large in my mind. For me, Josh: is this the end of the world as we know it?

Joshua Gans: Well, as we say in the book, there's not enough time yet to tell. But you've got enough time to read our book and be on the right side of it.

Scott Lanman: All right, well, let's end it there. Joshua Gans from the University of Toronto, author of Prediction Machines, thank you very much for joining us on Benchmark.

Joshua Gans: Thank you.

Scott Lanman: Benchmark will be back next week. Until then, you can find us on the Bloomberg terminal, Bloomberg.com, our Bloomberg app, and podcast destinations such as Apple Podcasts, Spotify, or wherever you listen. We'd love it if you took the time to post a review of the show so more listeners can find us.
Scott Lanman: You can also check us out on Twitter. Follow me at Scott Lanman; Chris, you're at Chris J. Condon; and our guest Josh Gans is at Josh Gans, G-A-N-S. Benchmark is produced by Topher Forhecz. The head of Bloomberg Podcasts is Francesca Levy. Thanks for listening. See you next time.