Speaker 1: Bloomberg Audio Studios, podcasts, radio news.

Speaker 2: Algorithms are everywhere. They determine the price you pay for your Uber, what gets fed to you on TikTok and Instagram, and even the prices you pay in the supermarket. Is all of this algorithmic impact helping or harming people? To answer that question, let's bring in Cass Sunstein. He is the author of a new book, Algorithmic Harm: Protecting People in the Age of Artificial Intelligence, co-written with Oren Bar-Gill. Cass is also a professor at Harvard Law School and is perhaps best known for his books on Star Wars and for co-authoring Nudge with Nobel laureate Dick Thaler. So, Cass, let's just jump right into this and start by defining: what is algorithmic harm?

Speaker 3: Okay, so let's use Star Wars. Let's say the Jedi Knights use algorithms, and they give people things that fit with their tastes and interests and information. If people are interested in books on behavioral economics, that's what they get, at a price that suits them. If they're interested in a book on Star Wars, that's what they get, at a price that suits them. The Sith, by contrast, use algorithms to take advantage of the fact that some consumers lack information and some consumers suffer from behavioral biases. So we're going to focus on consumers first. If people don't know much about, let's say, a healthcare product, an algorithm might know that they're likely not to know much, and might say, we have a fantastic baldness cure for you, here it goes, and people will be duped and exploited. So that's exploitation of absence of information. That's algorithmic harm. If people are super optimistic and they think that some new product is going to last forever, when it tends to break on first usage, then the algorithm can know those are unrealistically optimistic people and exploit their behavioral bias.

Speaker 2: So I referenced a few obvious areas where algorithms are at work. Uber pricing is one. The books you see on Amazon are algorithmically driven.
Clearly, a lot of social media, for better or worse, is algorithmically driven, and even things like the sort of music you like on Pandora. What are some of the less obvious examples of how algorithms are affecting consumers and regular people every day?

Speaker 3: Okay, so let's start with the straightforward ones and then we'll get a little subtle. So, straightforwardly, it might be that people are being asked to pay a price that suits their economic situation. So if you have a lot of money, the algorithm knows that, and maybe the price will be twice as much as it would be if you were less wealthy. That, I think, is basically okay. It leads to greater efficiency in the system. It's like rich people will pay more for the same product than poor people, and the algorithm is aware of that. So that's not that subtle, but it's important. Also not that subtle is targeting people based on what's known about their particular tastes and preferences. Let's put wealth to one side, and so it's known that certain people are super interested in dogs, other people are interested in cats, and there we go. All that is very straightforward, and it's happening. If consumers are sophisticated and knowledgeable, that can be a great thing that makes markets work better. If they aren't, it can be a terrible thing that leaves consumers manipulated and hurt. Here's something a little more subtle. If an algorithm knows, for example, that you like Olivia Rodrigo, and I hope you do, because she's really good, then there are going to be a lot of Olivia Rodrigo songs that are going to be put into your system. And let's say no one's really like Olivia Rodrigo, but let's suppose there are others who are vaguely like her, and you're going to hear a lot of that.
Now, that might not seem like algorithmic harm; that might seem like a triumph of freedom and markets. But it might mean that people's tastes will calcify, and we're going to get very balkanized culturally with respect to what people see and hear. So there are going to be Olivia Rodrigo people, and then there are going to be Led Zeppelin people, and there are going to be Frank Sinatra people. And there was another singer called Bach, I guess, I don't know much about him, but there's Bach, and there would be Bach people. And that's culturally damaging, and it's also damaging for the development of individual tastes and preferences.

Speaker 2: So let's put this into a little broader context than simply musical tastes, and I like all of those, so I haven't become balkanized yet. But when we look at consumption of news media, when we look at consumption of information, it really seems like the country has divided itself into these happy little media bubbles that are either far left leaning or far right leaning, which is kind of weird, because I always learned that the bulk of the country, in the traditional bell curve, most people, are somewhere in the middle. Hey, maybe they're center right or center left, but they're not out on the tails. How do these algorithms affect our consumption of news and information?

Speaker 3: About fifteen, twenty years ago, there was a lot of concern that through individual choices, people would create echo chambers in which they would live. And that's a fair concern, and it has created a number of, let's say, challenges for self-government and learning. What you're pointing to is also emphasized in the book, which is that algorithms can echo chamber you.
An algorithm might say, you know, you're keenly interested in immigration, and you have this point of view, so boy, are we going to funnel to you lots of information, because clicks are money, and you're going to be clicking, clicking, clicking. And that might be a very good thing from the standpoint of the seller, so to speak, or the user of the algorithm, but from the standpoint of you, it's not so fantastic. And from the standpoint of our society, it's less than not so fantastic, because people will be living in algorithm-driven universes that are very separate from one another, and they can end up not liking each other very much.

Speaker 2: Even worse than not liking each other, their views of the world aren't based on the same facts or the same reality. Everybody knows about Facebook and, to a lesser degree, TikTok and Instagram, and how they very much balkanized people into groups. And we've seen that in the world of media. You have Fox News over here and MSNBC over there. How significant a threat do algorithmic news feeds present to the country as a democracy, a self-regulating, self-determined democracy?

Speaker 3: Really significant. And there's algorithms and then there's large language models, and they can both be used to create situations in which, let's say, the people in some city, let's call it Los Angeles, are seeing stuff that creates a reality that's very different from the reality that people are seeing in, let's say, Boise, Idaho. And that can be a real problem for understanding one another and also for mutual problem solving.

Speaker 2: So let's apply this a little bit more to consumers and markets. You describe two specific types of algorithmic discrimination. One is price discrimination and the other is quality discrimination. Why should we be aware of this distinction? Do they both deserve regulatory attention?
Speaker 3: So if there is price discrimination through algorithms, in which different people get different offers depending on what the algorithm knows about their wealth and tastes, that's one thing, and it might be okay. People don't stand up and cheer and say hooray. But if people who have a lot of resources are given an offer that's not as, let's say, seductive as one that is given to people who don't have a lot of resources, just because the price is higher for the rich than for the poor, that's okay. There's something efficient and market friendly about that. If it's the case that some people, let's say, don't care much about whether a tennis racket is going to break after multiple uses, and other people think that the tennis racket really has to be solid because I play every day and I'm going to play for the next five years, then some people are given the, let's say, immortal tennis racket, and other people are given the one that's more fragile. That's also okay, so long as we're dealing with people who have a level of sophistication: they know what they're getting and they know what they need. If it's the case that, for either pricing or for quality, the algorithm is aware of the fact that certain consumers are particularly likely not to have relevant information, then everything goes haywire. And if this isn't frightening enough, note that algorithms are in an increasingly excellent position to know: this person with whom I'm dealing doesn't know a lot about whether products are going to last, and I can exploit that. Or this person is very focused on today and tomorrow, and next year doesn't really matter; the person is present-biased, and I can exploit that. And that's something that can damage vulnerable consumers a lot, either with respect to quality or with respect to pricing.

Speaker 2: So let's flesh that out a little more. I'm very much aware that when Facebook sells ads, because I've been pitched these from Facebook.
They could target an audience based on not just their likes and dislikes, but their geography, their search history, their credit score, their purchase history. Like, they know more about you than you know about yourself. It seems like we've created an opportunity for some potentially abusive behavior. Where is the line crossed, from "hey, we know that you like dogs, and so we're going to market dog food to you" to "we know everything there is about you, and we're going to exploit your behavioral biases and some of your emotional weaknesses"?

Speaker 3: Okay. So suppose there's a population of Facebook users who are super well informed about food and really rational about food. They particularly happen to be fond of sushi, and Facebook is going hard at them with respect to offers for sushi and so forth. Now let's suppose there's another population, which is, they know what they like about food, but they have kind of hopes and false beliefs about the effect of food on health. Then you can really market to them things that will lead to poor choices. And I've made a stark distinction between fully rational, which is kind of economic speak, and, you know, imperfectly informed and behaviorally biased people, also economic speak, but it's really intuitive. There's a radio show, maybe this will bring it home, that I listen to when I drive into work, and there's a lot of marketing about a product that is supposed to relieve pain. And I don't want to criticize any producer of any product, but I have reason to believe that the relevant product doesn't help much. But the station that is marketing this product to people, this pain relief product, must know that the audience is vulnerable to it, and they must know exactly how to get at them. And that's not in the interest of... that's not going to make America great again.

Speaker 2: To say the very least.
So we've been talking about algorithms, but obviously the subtext is artificial intelligence, which seems to be the natural extension of the development of algos. Tell us, as AI becomes more sophisticated and pervasive, how is this going to impact our lives as employees, as consumers, as citizens?

Speaker 3: ChatGPT, chances are, knows a lot about everyone who uses it. So I actually asked ChatGPT recently, I use it some, not a huge amount, I asked it to say some things about myself, and it said a few things that were kind of scarily precise about me, based on some number, dozens, not hundreds I don't think, of engagements with ChatGPT. So large language models that track your prompts can know a lot about you, and if they're able also to know your name, they can, you know, instantly, basically learn a ton about you online, and we need to have privacy protections that are working there. Still, it's the case that AI broadly is able to use algorithms, and generative AI can go well beyond the algorithms we've gotten familiar with, both to produce the beauty of algorithmic engagement, that is, here's what you like, here's what you want, we're going to help you, and the ugliness of algorithms, here's how we can exploit you to get you to buy. And of course I'm thinking of investments too. So in your neck of the woods, it would be child's play to get people super excited about investments which AI knows the people with whom it's engaging are particularly susceptible to, even though they're really dumb investments.

Speaker 2: Really, really interesting. So since we're talking about investing, I can't help but bring up both AI and algorithms trying to increase so-called market efficiency, and I always go back to Uber's surge pricing. As soon as it starts to rain, the prices go up in the city. It's obviously not an emergency. It's just an annoyance.
However, we do see situations of price gouging after a storm, after a hurricane; people only have so many batteries and so much plywood, and they kind of crank up prices. How do we determine what is the line between something like surge pricing and something like, you know, abusive price gouging?

Speaker 3: Okay, so you're in a terrific area of behavioral economics. So we know that in circumstances in which, let's say, demand goes up high because everyone needs a shovel and there's a snowstorm, people are really mad if the prices go up, though it might be just a sensible market adjustment. So as a first approximation, if there's a spectacular need for something, let's say shovels or umbrellas, the market inflation of the cost, while it's morally abhorrent to many, and maybe in principle morally abhorrent, from the standpoint of standard economics it's okay. Now, if it's the case that people under short-term pressure from the fact that there's a lot of rain are especially vulnerable, in some kind of emotionally intense state, so they'll pay kind of anything for an umbrella, then there's a behavioral bias which is motivating people's willingness to pay a lot more than the product is worth.

Speaker 2: So let's talk a little bit about disclosures and the sort of mandates that are required. When we look across the pond, when we look at Europe, they're much more aggressive about protecting privacy and making sure big tech companies are disclosing all the things they have to disclose. How far behind is the US in that generally, and are we behind when it comes to disclosures about algorithms or AI?

Speaker 3: I think we're behind them in the sense that we're less privacy focused. But it's not clear that that's bad. And even if it isn't good, it's not clear that it's terrible. I think neither Europe nor the US has put its regulatory finger on the actual problem.
So let's take the problem of algorithms not figuring out what people want, but algorithms exploiting a lack of information or a behavioral bias to get people to buy things at prices that aren't good for them. That's a problem. It's in the same universe as fraud and deception, and the question is what are we going to do about it. A first line of defense is to try to ensure consumer protection, not through heavy-handed regulation (I'm a longtime University of Chicago person; I have in my DNA not liking heavy-handed regulation), but through helping people to know what they're buying and helping people not to suffer from a behavioral bias, such as, let's say, incomplete attention or unrealistic optimism, when they're buying things. So these are standard consumer protection things, which many of our agencies in the US, homegrown America, have done, and that's good, and we need more of that. So that's the first line of defense. The second line of defense isn't to say, you know, privacy, privacy, privacy, though maybe that's a good song to sing. It's to say, a right to algorithmic transparency. So this is something which neither the US, nor Europe, nor Asia, nor South America, nor Africa has been very advanced on. So this is a coming thing, where we need to know what the algorithms are doing. So it's public: what's Amazon's algorithm doing? That would be good to know, and it shouldn't be the case that some efforts to ensure transparency invade Amazon's legitimate rights.

Speaker 2: Really, really fascinating. Thanks, Cass. Anybody who is participating in the American economy and society, consumers, investors, even just regular readers of news, needs to be aware of how algorithms are affecting what they see, the prices they pay, and the sort of information they're getting. So with a little bit of forethought and the book Algorithmic Harm, you can protect yourself from the worst aspects of algorithms and AI. I'm Barry Ritholtz. You're listening to Bloomberg's At the Money.