Speaker 1: Welcome to Stuff to Blow Your Mind, a production of iHeartRadio.

Speaker 2: Hey you, welcome to Stuff to Blow Your Mind. My name is Robert Lamb.

Speaker 3: And I am Joe McCormick, and we're back with part two in our series called The Devil You Know, which is about ambiguity aversion. That's the observation that we prefer risks with known probabilities over risks with unknown probabilities, even if you have no reason for thinking that the unknown risks will be worse. This preference is captured by the folk saying, better the devil you know than the devil you don't. For the most part, people don't like taking bets when they don't know what the odds are, and will pay up just to avoid having to deal with ambiguity. And this is funny in a way, because almost all of the decisions that we make in our actual lives are made with incomplete knowledge and some significant amount of uncertainty about our likelihood of success. Very few things in reality have clear, objective odds, like a fair coin flip or a dice roll. However, it's pretty well established that most people, most of the time, do not like making decisions under ambiguous conditions, and we will go to great lengths to avoid taking risks with unknown probabilities. So in the last episode, after we introduced the concept, we talked a good bit about the original piece of writing that made ambiguity aversion famous. This was a nineteen sixty one paper by the American economist, antiwar activist, and whistleblower Daniel Ellsberg. The paper was called "Risk, Ambiguity, and the Savage Axioms," published in the Quarterly Journal of Economics. And to briefly recap, this paper proposed the concept of ambiguity aversion using a number of thought experiments where you could take different bets based on, like, which color ball you would draw by chance out of an urn.
Speaker 3: Ellsberg's point was that people would tend to prefer bets with clear odds of winning, say a one in three chance, over bets with unknown odds, where you could have anywhere between a zero percent chance and a two out of three chance. Ellsberg's intuition here has been broadly supported by real world experiments. There are a few exceptions, but most of the time, most people would rather take bets with clear odds, even in ways that end up implying self-contradictory assumptions about the unknown odds. This was an important discovery in the economic field known as decision theory, because it violated a framework known as Savage's axioms, which were widely used to model how people make decisions in situations of uncertainty or ambiguity, and this self-contradictory behavior is now known as the Ellsberg paradox. And then, finally, in the last episode, after talking about the theoretical origins of ambiguity aversion, we also talked about this phenomenon in practice in the real world, and our main example here was the negotiation strategy famously associated with the Nixon White House known as madman theory, a strategy where you intentionally cultivate a reputation for volatility and unpredictability. In essence, you make your peers and your counterparties worry that you are a madman who is capable of anything. We ended up talking about a lot of reasons for thinking that this is actually not a good strategy, but the idea here is that you will exploit their natural ambiguity aversion to get them to make concessions that they would not make otherwise. And then we did get into some international relations research on madman theory, like reasons why it might not actually be a good strategy, especially for achieving long term goals. In fact, I think in the long term, most experts seem to agree that it is harmful to everyone involved, including the practitioner.
Speaker 3: But again, the reason it may sometimes work for achieving short term wins is by taking advantage of this fear people have of ambiguity, the fact that a lot of people would rather just take a bad deal than keep negotiating with somebody who is highly unpredictable.

Speaker 2: You know, I was thinking after our recording that another often cited bit of supposed wisdom along these lines is this idea, and generally you'll see it referenced out of context, so someone will not be talking about actually going to prison. They'll be talking about your first day at a new school or something, and they'll jokingly say, well, you know what you're supposed to do. You're supposed to go up to the biggest person around and hit them with a steel chair, and then everyone will be like, whoa, that guy's crazy, we better not mess with him. It seems to follow similar logic, and also seems equally, if not more, flawed in its approach. It does not sound like a tactic that would actually generate, like, you know, long term benefits.

Speaker 3: It's one of those pieces of advice that is probably more often repeated because it's, like, memorable and funny and makes for a good story than because many people would think it's actually good advice, especially if you have relevant experience.

Speaker 2: I don't know.

Speaker 3: I mean, I don't know a lot about, like, prison strategy, but that does seem like it could have some real downsides.

Speaker 2: You know, it makes me think of another bit of wisdom, another nugget that's often dished out, and that is getting into the philosophy of Nietzsche, you know, that which doesn't kill us makes us stronger. But I forget who it was who did a spin on this and said, well, that which doesn't kill us nearly kills us, and that should be a reason to give us pause.

Speaker 3: That's funny, because that Nietzsche saying, and I know this has come up on the show before, I know that, like, in a huge number of cases, that's not true.
Speaker 3: Yeah, like, sometimes it makes us stronger. Like, we do sometimes learn and grow and gain strength from adversity, but that's, like, some subset of adversities. A lot of adversities leave us much weaker and deeply shaken.

Speaker 2: Yeah, shaken, or less trusting, or just completely weakened. And, you know, something may weaken us, we survive it, but it's weakened our system, and then we're more susceptible to the next thing, and so forth. So yeah, I mean, it's great to take solace in some of these sayings, but you can only take them so far, really.

Speaker 3: So anyway, we're back today to talk about some more angles on ambiguity aversion, and one of the first things I want to get into is that in the last episode, we did talk a bit about the distinction that Daniel Ellsberg tried to make between ambiguity and risk. But I think one thing that's really important to point out, that we didn't get into last time, is the distinction between ambiguity aversion and risk aversion. I think risk aversion is a concept more people are probably familiar with, you know, that comes up in conversation sometimes. These are not the same thing. One who is risk averse doesn't like to make bets. They would prefer a sure thing, even if the sure thing is statistically less valuable than the risk. So, for example, imagine a fair coin flip game. We stipulate that this is a fair coin, there's no cheating involved, and on each round when I flip the coin, I can either just give you forty dollars no matter how the coin lands, or I can give you one hundred dollars every time it lands heads and nothing every time it lands tails. So with the gambling condition here, you will on average win fifty dollars per round. It's a fifty-fifty chance of winning, and the payout is one hundred dollars. So the odds here are clear.
Speaker 3: If you play this game a thousand times in a row, you will make more money by gambling than by taking the guaranteed forty dollars every time. It's statistically all but a sure thing. But still, some people would rather just take the actual sure thing. They would rather just take the forty dollars, and that is risk aversion. It is a psychological tendency to dislike and avoid taking a stake in uncertain outcomes, even if you know the exact odds of those outcomes, and potentially even if those odds will statistically bring you more benefit than the sure thing. And another way of thinking about risk aversion is that sometimes, to some people, the feeling of security that you get from taking the sure thing is actually more valuable than the money. It's more valuable than the extra money that you would take home on average by placing a bet with a higher expected value. And this emphasizes something that is true in decision theory and economics: value isn't just about getting or keeping money. We use money because it's an easy way to quantify value in experiments. But sometimes we would pay money just to feel certainty and comfort. That feeling is of higher value to us than the money is. But then, of course, again, other people at other times would not pay that. Some people love the feeling of risk. I mean, people gamble for pleasure.

Speaker 2: Yeah, absolutely. But other times it's like, I just don't want to take a chance on this. I'd rather just go for what's going to be straightforward and guaranteed.

Speaker 3: Right. So that's risk and risk aversion. Ambiguity aversion is different from risk aversion because it is not about a dislike of risk, but specifically a desire to avoid ambiguous risks with unclear odds. You might love gambling on a fifty-fifty coin toss, but you might not want to play the game where you have to guess how many black balls or yellow balls there are and there's no way to know.
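To put rough numbers on the coin flip example above, here is a minimal sketch in Python. The square-root utility function is just an illustrative stand-in for diminishing marginal value, not anything specified in the episode:

```python
import math

# The two options on each round: a guaranteed $40, or a fair coin
# flip that pays $100 on heads and nothing on tails.
SURE_THING = 40
GAMBLE = [(0.5, 100), (0.5, 0)]  # (probability, payout) pairs

def expected_value(lottery):
    """Probability-weighted average payout of a lottery."""
    return sum(p * payout for p, payout in lottery)

def expected_utility(lottery, utility):
    """Probability-weighted average utility of the payouts."""
    return sum(p * utility(payout) for p, payout in lottery)

# In raw dollars, the gamble is worth more on average: $50 > $40.
print(expected_value(GAMBLE))  # 50.0

# But for a risk-averse agent, utility grows more slowly than money.
# A square-root utility is a classic textbook stand-in for that.
print(expected_utility(GAMBLE, math.sqrt))  # 0.5 * sqrt(100) = 5.0
print(math.sqrt(SURE_THING))                # sqrt(40) is about 6.32

# The sure $40 carries higher expected utility than the $50-on-average
# gamble, so preferring it is coherent risk aversion, not a math error.
```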
Speaker 3: So a person could technically be not very risk averse at all, but highly averse to ambiguity. And the more I thought about it, the more I thought that the interaction between these two tendencies in real life is interesting, because I think, to many of us, it's not always clear if a fear or a hesitancy we're experiencing about making a decision is rooted in risk aversion or ambiguity aversion. For instance, this came up because I was trying to think of examples of risk aversion, and I thought of the idea of what to do with savings. Imagine you save up a little bit of extra money at work from your paycheck, and you want to put that toward retirement, and you have multiple options of what to do with that money. Imagine a person in that scenario, and they discover that they would really feel better just keeping the money in a bank account, earning a relatively low rate of interest, rather than investing it in, like, a stock market index fund, where with the index fund, it is commonly assumed, it is standard financial wisdom, that the balance will grow at a faster rate than it will just sitting in a savings account, and you will end up with more money years down the road if you put it in the index fund. This pattern of behavior could be taken as an example of risk aversion, because it's like, I would choose to probably end up with less money just so I can avoid taking risks with what I have. But then I thought, well, no, not necessarily. What if that same impulse is actually a better example of ambiguity aversion than risk aversion? Because how do we actually know that a stock market index fund will grow more over time than the balance of a savings account? The answer is historical patterns. It's, like, statistically true over the last one hundred years or so, you know, within certain economies, in living memory.
Speaker 3: But it's not actually like a fixed odds casino game or a lab experiment where the odds of winning are fair axiomatically. In the real world, as the fine print always says, investing involves risk, including loss of principal, and past performance does not guarantee future returns. So maybe actually preferring the savings account is a form of ambiguity aversion rather than risk aversion, responding to the idea that, well, what are the odds that suddenly, somehow, investing in the stock market becomes a bad idea in a way that it has not within living memory? It's hard to understand what the odds of something like that are. It seems pretty unlikely, but it's fundamentally ambiguous. You just can't say what the objective odds are. There's really no way to calculate them. So sometimes there's actually ambiguity about whether there is ambiguity, you know? Because, like, I think an investment advisor might tell you, on one hand, like, well, you know, the historical performance of the stock market, you can pretty much depend on that. That's a known risk, when you have statistics that tell you that you'll most likely get this rate of return on average. And you just don't know how seriously to take the possibility that the future will not be like the past.

Speaker 2: Yeah, absolutely. And another angle that I was looking at here on all this is that it's not just the ambiguity and risk, but it's perceptions of ambiguity and risk. So, you know, it ultimately isn't necessarily coming down as much to, like, what are actually the hard probabilities here. It can skew greatly depending on what a particular individual's worldview is. So you could have somebody, like, you know, a Washington Bartholomew Hogwallop figure, and he's decided not only is he not going to invest in the stock market, he's not even going to put his money in the bank. It's better off in a tin can buried in his backyard.

Speaker 3: Yeah, there you go.
Speaker 2: Yeah, even though there may be very good statistics that show that is less safe and that is a greater risk, your worldview and perception could be, oh no, absolutely, you can't trust the banks at all. I can only trust myself, and therefore it's safer in my yard, with me standing over it with a shotgun.

Speaker 3: But anyway, so this all brings us back to this distinction between risk aversion and ambiguity aversion, and how difficult it can be to tell the difference between the two. Like, when we feel aversion to making a bet of some kind, which, by way of analogy, doesn't have to mean literally a bet with money, but just any important decision in our lives, when we feel that, like, fear, hesitancy, or resistance, we often think of this as risk aversion. But could it actually, more often, be ambiguity aversion? That we're responding subconsciously to the fact that we don't know how to calculate the odds for the decision we're about to make, and that makes us uncomfortable. Now, one of the big things we left open after we introduced the concept of ambiguity aversion in the last episode is, how is decision theory supposed to make sense of this finding, of, like, the Ellsberg paradox? And there has been a great amount of ink spilled in answering this question in the fifty-something years since Ellsberg. I don't think I can even attempt to cover the whole landscape of responses here, so I want to be clear, this will not be a comprehensive technical survey of the ambiguity aversion literature. I'm just going to mention a couple of broad trends and then a few ideas within each. So within the responses, there are two main categories of thought. One assumes that ambiguity aversion is actually a component of rational decision making, that it is, in a subjective sense, wise to avoid ambiguity, even when you can't really prove that the ambiguous bets will generally turn out bad.
Speaker 3: The other main way of thinking is that ambiguity aversion is some type of error or cognitive bias. And here I'm going to mention a couple of sources. The first one I cited in our last episode. That is a book chapter by Mark J. Machina and Marciano Siniscalchi called "Ambiguity and Ambiguity Aversion," in the Handbook of the Economics of Risk and Uncertainty, published in twenty fourteen. And then the second source I want to mention here is by Nabil I. Al-Najjar and Jonathan Weinstein, published in the journal Economics and Philosophy, called "The Ambiguity Aversion Literature: A Critical Assessment." This one is more in that second category I mentioned. It's an approach arguing that ambiguity aversion is actually just a type of error, basically. So first I want to look at what are some of the ways that ambiguity aversion, and, like, the Ellsberg paradox, might actually be a form of rational, self-consistent decision making. One theory here can be called the multiple priors model, also known as maximin expected utility. Maximin, that's m-a-x-i-m-i-n: maximin expected utility, or MEU. Let's think about it like this. Imagine you are asked to make a bet involving ambiguous odds. Like, we'll go back to the balls in the urn example from the first episode. So, am I more likely to pull a red ball or a black ball out of this urn? And to remember the details there, the three color urn experiment was that inside this urn, you can't see inside, you don't know what color you're reaching in to grab, but there are exactly thirty red balls, and then sixty balls that could be any mix of yellow and black. So there could be zero yellow and sixty black, sixty yellow and zero black, thirty and thirty, or anything in between. You have no way of knowing the mix in advance.
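For readers who want the recap in concrete form, here is a minimal Python sketch of why this urn produces a paradox. The two pairs of bets are the standard ones from Ellsberg's 1961 paper, and the preference pattern in the comments is the commonly reported one:

```python
# Ellsberg's three-color urn: 30 red balls plus 60 balls that are some
# unknown mix of yellow and black. A bet pays $100 if the drawn ball's
# color is in the chosen set.

def expected_payout(colors, num_black):
    """Expected payout of betting on `colors`, given a guess at how many
    of the 60 ambiguous balls are black."""
    counts = {"red": 30, "black": num_black, "yellow": 60 - num_black}
    return 100 * sum(counts[c] for c in colors) / 90

# The commonly observed preferences:
#   1) betting on red beats betting on black (known 1/3 over unknown), and
#   2) betting on black-or-yellow beats red-or-yellow (known 2/3 over unknown).
# Is there any single belief about the number of black balls under which
# both choices maximize expected payout?
consistent_beliefs = [
    b for b in range(61)
    if expected_payout(["red"], b) > expected_payout(["black"], b)
    and expected_payout(["black", "yellow"], b) > expected_payout(["red", "yellow"], b)
]
print(consistent_beliefs)  # [] -- preference 1 implies black < 30,
                           # preference 2 implies black > 30: the paradox
```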
Speaker 3: In these situations, maximin expected utility theory would say that instead of forming one consistent belief about the number of black balls inside and then placing bets according to that prediction, we allow ourselves to consider multiple different probabilities, and we apply those probabilities to different bets pessimistically. So essentially, for each new bet that requires us to guess the number of black balls, we ask ourselves, what would be the worst case scenario for me here? And then we assume those worst case odds when considering the bet. So if you're asked to choose between black and red, and there could be anywhere between zero and sixty black balls, you just assume there are zero, and you bet on red. I tried to come up with a real life decision sequence that might help the contradictory behavior under MEU make sense. So here's what I've got. Imagine this: you're making plans for the afternoon, and you don't know what the weather is going to be. It might rain, it might not. You don't know how to calculate that. You were thinking about maybe going out for a walk, but then again, considering that it might rain, you don't want to be caught out in a storm, so you decide instead to stay home and do some indoor activities. At the same time, you are also deciding what to do about the flowers on your front porch. They're drooping and they need some water. Now, it might rain, and then they would get all the water they need from that. But then again, it might not rain, so you decide to make time to fill up a watering can and water them manually. This could appear inconsistent, because you've made two different decisions, one on the bet that it will rain, and the other on the bet that it won't rain.
Speaker 3: It's possible that that's just self-contradictory, or it could be perfectly self-consistent and logical, if you are considering a range of different probabilities of rain for the afternoon and then applying the most pessimistic one to each decision, rather than just assuming one probability that applies to all decisions. Does that make sense?

Speaker 2: Yeah, yeah, I think so.

Speaker 3: And to be clear, there's no guarantee here that the pessimistic assumptions will be correct, because, again, the information environment is truly ambiguous. You just can't know. That's what ambiguity is. But considering multiple probabilities and acting according to the most pessimistic one for each decision is theoretically a logical and consistent strategy. So the authors that support this model argue that this is actually a rational way ambiguity aversion might produce the apparently self-contradictory results we observe. Like, the apparent contradiction comes from the fact that we're sort of coming up with different predictions for each bet, instead of one prediction for the actual full state of affairs.

Speaker 2: Yeah. Though, of course, making all your decisions based on the worst case scenario is not a great way to live your life.

Speaker 3: Right, and I'm going to come back to that in a minute, because that is part of the critical assessment.
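Before moving on, here is a minimal sketch of the maximin rule applied to the rain example. The candidate probabilities and the utility numbers are invented for illustration; only the decision rule itself comes from the discussion above:

```python
# Maximin expected utility (MEU): entertain a set of candidate rain
# probabilities, then score each option by its worst-case expected
# utility across that whole set.

RAIN_PROBS = [0.2, 0.5, 0.8]  # hypothetical range of beliefs about rain

def meu(u_rain, u_dry):
    """Worst-case expected utility of an option whose utility is
    u_rain if it rains and u_dry if it stays dry."""
    return min(p * u_rain + (1 - p) * u_dry for p in RAIN_PROBS)

# Decision 1: a walk is ruined by rain; staying home is always okay.
print(meu(-10, 5), meu(2, 2))   # -7.0 vs 2.0 -> stay home

# Decision 2: skipping the watering is bad if it stays dry;
# hauling out the watering can is always okay.
print(meu(5, -10), meu(3, 3))   # -7.0 vs 3.0 -> water the flowers

# Decision 1 effectively treats rain as likely (worst case p = 0.8),
# decision 2 treats it as unlikely (worst case p = 0.2). The two bets
# look contradictory, yet both follow the same self-consistent rule.
```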
Speaker 3: Another theory that would make the Ellsberg paradox behavior rational is what's known as Choquet expected utility, or CEU. And the Choquet there, that's a French name. It's C-h-o-q-u-e-t.

Speaker 2: Choquet sounds delicious.

Speaker 3: Oh yeah. So Choquet expected utility is the idea that when we evaluate a bet and the odds are ambiguous, we basically do come up with a single internal belief about probability, but it takes a weird form. Instead of a probability, we use what the literature calls a capacity. So most of the time, when we're trying to judge how likely something is to happen, together the likelihood of that event and the likelihood of its negation, of not-that-event, have to sum up to one hundred percent, right? So, like, under normal probability, if there is a one third chance a ball will be red, there is a two thirds chance the ball will not be red. These complementary probabilities should always sum to one, or, if expressed as percentages, add up to one hundred percent. CEU theory says that our internal representation of likelihoods for ambiguous events is not like this. Instead, when the odds are ambiguous, we operate on the basis of a capacity, which mathematically incorporates our lack of confidence in the situation by representing the chance of an outcome and its negation together in a way that does not have to add up to one hundred percent. And I apologize to the more informed people, I'm skipping over some mathematical complexity here, but this is the simplified version. If you take the example of the three color urn, a person might begin with the objective information that thirty, so one third, of the balls are red, so you know you actually have a one third chance that the ball you draw is red. And then for the yellow and black balls, instead of coming up with guesses that sum up to two thirds, which would be the actual sort of objective probability whatever the mix is, you pick some smaller probabilities. Like, you add in maybe a one fifth chance that the ball is yellow and a one fifth chance that the ball is black. Obviously, these cannot be the real objective odds, again with the caveat about what actually counts as objective odds here, but they can't represent reality, because they don't add up to one and there are no other colors in there.
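A minimal sketch of how a capacity like that scores the urn bets. The one-fifth values come from the example above; giving the yellow-or-black pair its objectively known two-thirds weight, and treating a binary bet's Choquet value as prize times capacity, are simplifications that hold for all-or-nothing bets of this form:

```python
# A capacity weights events (sets of colors) like a probability would,
# except an event and its complement need not sum to 1. The missing
# mass encodes how ambiguous the event feels.
capacity = {
    frozenset({"red"}): 1 / 3,                    # objectively known
    frozenset({"yellow"}): 1 / 5,                 # ambiguous, so discounted
    frozenset({"black"}): 1 / 5,                  # ambiguous, so discounted
    frozenset({"yellow", "black"}): 2 / 3,        # the pair is objectively known
    frozenset({"red", "yellow"}): 1 / 3 + 1 / 5,  # ambiguous union, discounted
    frozenset({"red", "black"}): 1 / 3 + 1 / 5,
}

def ceu(colors, prize=100):
    """Choquet expected utility of 'win `prize` if the ball's color is in
    `colors`'; for an all-or-nothing bet this is just prize * capacity."""
    return prize * capacity[frozenset(colors)]

# Both halves of the Ellsberg pattern now come out consistent:
print(ceu(["red"]) > ceu(["black"]))                      # True: 33.3 > 20.0
print(ceu(["yellow", "black"]) > ceu(["red", "yellow"]))  # True: 66.7 > 53.3
```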
Speaker 3: But again, the values we come up with in these kinds of decision making processes don't have to be perfect at predicting real outcomes, and in fact they can't be, because we are missing information. That is the ambiguity. The point is that this is another self-consistent way our minds could work while producing the behaviors observed in these experiments. Like, the percentage of likely outcomes that's missing from the calculation is missing to reflect the level of ambiguity we feel about the bet. So that's two main models, and there are other models for making sense of the Ellsberg paradox behavior in these sorts of ways, but I think these give you a general idea of how this strain of thinking works. The other main way of thinking about ambiguity aversion is that it is not actually rational, like, you don't need to find a way of making rational sense of it. Instead, it is a form of error or cognitive bias. And this is the thrust of the paper that I mentioned earlier, from two thousand and nine, by Al-Najjar and Weinstein. The authors here argue, in short, that ambiguity aversion is actually a misapplied heuristic, meaning it's a mental shortcut that might be truly useful in some scenarios, for example, in situations where we need to be cautious to avoid being scammed or manipulated, but this shortcut gets mistakenly applied to scenarios where it is not helpful. And misapplying the ambiguity aversion heuristic like this, they say, could cause you to, like, make different bets about the same thing when conditions are the same and no new information has been learned, so, you know, producing this contradictory or apparently irrational behavior. Or it could cause you to treat non-informative information as informative. They also argue against relying on heuristics like pessimism, coming back to your comment earlier, Rob. Again, that's always assuming the worst case scenario for ambiguous bets.
Speaker 3: If you actually do this, if you rely on a pessimistic heuristic in real life, this will systematically produce incorrect predictions and bad outcomes. It's actually not a good strategy to follow. Failure to make bets or decisions in the presence of ambiguity also causes us to never gain useful information, because a lot of times we can only learn more about what the real odds are by making decisions in ambiguous conditions. You know, like, past bets on ambiguity lead to more informed bets with higher certainty in the future, because the previous bets and their outcomes are information gathering tools.

Speaker 2: Yeah, and of course we see this played out in the simplified world of games. If you want to get good at, you know, a particular card game or chess or anything like that, you need to be prepared to be beaten a lot, you know. I mean, that's part of learning what your odds are in a given game.

Speaker 3: That's right. So this strain of thinking says, no, ambiguity aversion is not a rational strategy for making decisions. Instead, it's more like an emotional bias that causes us to act on, yes, self-contradictory beliefs, and to make bad decisions, because we do not like the feeling of ambiguity. And honestly, I'm not sure which strain of thinking is more on track. I've been reading about these, and I don't know who's more convincing, that it's part of rationality overall, or that it's not. The whole thing about ambiguity, of course, is that it's ambiguous whether or not you should bet on it, so there really is no objective answer. But I think, I mean, what we're instead looking for is the most consistent way to explain the subjective behavior, rather than, like, who's actually right about, you know, the ambiguous outcomes, because there's no way to know them in advance, by definition. I do think the critical assessment makes a really good point about learning. You really do often have to be bold and make decisions in the face of ambiguity so that there will be more information and less ambiguity in future decisions. That seems like a highly salient point to me.
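To make the learning point concrete, here is a minimal sketch of how repeatedly taking an ambiguous bet turns it into an ordinary known risk, using a simple Beta-Bernoulli update. The scenario and all the numbers are invented for illustration:

```python
import random

# You face a repeatable bet whose true win rate is unknown to you.
# Refusing to play keeps the odds ambiguous forever; playing turns
# every outcome into data about them.
random.seed(0)
TRUE_WIN_RATE = 0.6   # hidden from the decision maker
wins, losses = 1, 1   # Beta(1, 1) prior pseudo-counts: a uniform prior

for _ in range(200):  # play 200 rounds and record each outcome
    if random.random() < TRUE_WIN_RATE:
        wins += 1
    else:
        losses += 1

# Posterior mean of the win rate after 200 informative bets.
print(f"estimated win rate: {wins / (wins + losses):.2f}")  # near 0.60
# The estimate tightens around the truth as rounds accumulate: the
# formerly ambiguous bet has become a bet with roughly known odds.
```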
Speaker 2: Yeah, yeah. And would you say, based on what you were looking at, that it sounds like ambiguity aversion is itself adaptive in the grand scheme of things? Maybe it's one of these situations like type one errors in cognition, false positives, versus type two errors in cognition, false negatives, where in sort of the hypothetical tigers-hiding-in-the-bushes scenario, this is absolutely adaptive, but then, when taken into other aspects of life, especially modern life and all its complexity, it's not necessarily useful.

Speaker 3: Well, I mean, I think you could think of it that way, and that might be one way of approaching the error theory, right? That it's a mental shortcut that is useful in some scenarios, again, especially when you're, like, trying to avoid being tricked, or trying to avoid being taken advantage of by somebody who has superior information to you. But then you're misapplying it to scenarios where that's not really the case. You're just, like, using a defensive mental shortcut in a case where you don't really need to, and it's causing you to act with unnecessary levels of defense, which produces irrational behavior. And we've talked about other ways that that can be true. So I think that fits more with the error way of thinking. I think the other way of thinking, it doesn't really comment on this, but it would probably just assume that it's adaptive, because it is just part of our consistent decision making toolkit. You know, just the same way that our brains decide that it's better to take a bet with a ninety percent chance of winning than a ten percent chance of winning, it also has this other function, which is this ambiguity aversion function.
Speaker 2: Yeah, and like any toolkit, you want to use multiple tools depending on the actual situation. If you have an expansive toolkit and it's the screwdriver every time, even when you need to hammer a nail, yeah, that's probably not going to work out too well. But when you've got to screw a screw in, yeah, you know you're golden, assuming it's the right head at the end and not, you know, one of the star ones.

Speaker 3: Do you go through the same one-two process every time you find a sharp screw out on the road or on the sidewalk? Where, like, I hit the same two thoughts every time. First of all, it's like, ooh, pick it up, glad somebody, a child, didn't step on this, or a car hit this with their tire. And then the second thing is, what did this come out of? Was it holding something important together?

Speaker 2: Oh, I never do the second step. I always just stick at the first step, where I'm like, oh, glad nobody stepped on that or got that in their tire. I have saved the day. And then I carry on.

Speaker 3: If it's, like, near my house, I look at my house, and I'm like, did something come off? I don't know.

Speaker 2: If it's in the house, definitely, and I guess if it's in close proximity to the house. But if it's in the road, I'm just assuming, you know, I don't know, I just never think about it falling out of a vehicle in, like, a dangerous capacity. But maybe I should. Maybe that's an additional level of worry that I should begin to employ.

Speaker 3: I didn't mean a vehicle specifically. I mean, it might be, or a house or anything. There are all kinds of things that really need to remain stuck together.

Speaker 2: Yeah, I guess I just assume it's, like, work vehicles. They have surplus screws rolling around, and they're just rolling out onto the road.

Speaker 3: Let's hope that's the case most of the time, but I'm glad I could share that anxiety.
Speaker 2: All right, a few other angles we want to touch on here. This first one, I'm not going to go in super deep on, because this one is another highly economic topic, but there's what's known as the competence effect. So we've already touched on the basics of this a little bit, but this is the term for the phenomenon by which we tend to be more ambiguity averse in fields where we feel we lack experience or expertise. In economics and finance, this manifests in investors who are more willing to invest in familiar domestic stocks rather than complex foreign markets. And if you're like me, that sentence is like, yeah, well, I'm uncomfortable with either. I don't know my way around, I don't know one from the other, so just mark me down as ambiguity averse to either.

Speaker 3: But it seems to be making the point.

Speaker 2: Yeah. But this is apparently a thing. I believe this was first looked at in a paper by Chip Heath and Amos Tversky titled "Preference and Belief: Ambiguity and Competence in Choice under Uncertainty," published in the Journal of Risk and Uncertainty in nineteen ninety one, and this found that people tended to stick to gut instincts in a familiar area, even against a rational, diversified approach that gets into areas they're not that familiar with.

Speaker 3: This rings true for me. I know I am more ambiguity averse in domains of life that I don't understand very well, which is funny, because we have to make decisions with ambiguous odds of success in both unfamiliar and familiar domains. I mean, ambiguity does not leave reality just because you know what you're doing in a certain knowledge domain or hobby area or something like that. You can have all the experience in the world, and still there are things where it's just like, you don't know what your odds are.
So I think maybe the 573 00:32:37,960 --> 00:32:42,480 Speaker 3: explanation for this difference is that in the unfamiliar domains, 574 00:32:43,800 --> 00:32:49,600 Speaker 3: I'm more likely to worry that ambiguity is artificial and 575 00:32:49,680 --> 00:32:53,880 Speaker 3: that I am being tricked or manipulated by someone who 576 00:32:54,000 --> 00:32:59,040 Speaker 3: has superior knowledge of that domain, whereas in familiar situations, 577 00:33:00,120 --> 00:33:03,600 Speaker 3: I understand what amount of information it is normal to have, 578 00:33:04,200 --> 00:33:07,720 Speaker 3: and thus how much ambiguity is just natural and unavoidable, 579 00:33:08,080 --> 00:33:11,360 Speaker 3: and thus the ambiguity that is there becomes easier to embrace. 580 00:33:12,440 --> 00:33:13,240 Speaker 3: Does that make sense? 581 00:33:13,560 --> 00:33:16,680 Speaker 2: Yeah, yeah. And of course all this ends up coming 582 00:33:16,720 --> 00:33:19,600 Speaker 2: back to the topic of not only what do 583 00:33:19,640 --> 00:33:21,320 Speaker 2: you know, but what do you not know? And what 584 00:33:21,360 --> 00:33:23,880 Speaker 2: do you think that you know that you actually don't know? 585 00:33:25,400 --> 00:33:27,400 Speaker 2: We've talked about this before on the show, and it 586 00:33:27,440 --> 00:33:29,480 Speaker 2: always reminds me of a line. I had to look 587 00:33:29,520 --> 00:33:31,320 Speaker 2: this up. I'd forgotten the source, but it was from 588 00:33:31,360 --> 00:33:33,840 Speaker 2: the movie Body Heat. There's a character played by 589 00:33:33,920 --> 00:33:36,880 Speaker 2: Mickey Rourke, and I'm 590 00:33:36,920 --> 00:33:38,720 Speaker 2: going to paraphrase the quote a little bit. He says, 591 00:33:38,720 --> 00:33:41,000 Speaker 2: anytime you try to commit a crime, there are about 592 00:33:41,040 --> 00:33:42,880 Speaker 2: fifty ways that you could mess up, and if you 593 00:33:42,880 --> 00:33:46,800 Speaker 2: can think of twenty five of them, you're a genius. Now, 594 00:33:46,800 --> 00:33:48,800 Speaker 2: I don't know if those numbers are exact, but I 595 00:33:48,880 --> 00:33:52,080 Speaker 2: like the spirit of the thing, you know. He's saying 596 00:33:52,120 --> 00:33:55,120 Speaker 2: you're going into a situation and you have this great 597 00:33:55,560 --> 00:33:58,400 Speaker 2: degree of overconfidence. You don't realize all of the ways 598 00:33:58,400 --> 00:34:02,680 Speaker 2: you could potentially fail, and nobody can have perfect knowledge. 599 00:34:02,680 --> 00:34:05,280 Speaker 2: You know, this is a quote that does recognize 600 00:34:05,280 --> 00:34:07,520 Speaker 2: that there will always be a certain amount of ambiguity. 601 00:34:08,520 --> 00:34:12,880 Speaker 2: But if you have expertise, that sort of cloud of 602 00:34:12,920 --> 00:34:14,960 Speaker 2: ambiguity is going to be smaller. It's not going to 603 00:34:14,960 --> 00:34:17,600 Speaker 2: go away completely, but it will be smaller. 604 00:34:17,840 --> 00:34:19,160 Speaker 3: Well, I think, yeah, and I think this. 605 00:34:19,520 --> 00:34:19,759 Speaker 2: Yeah.
606 00:34:19,800 --> 00:34:22,880 Speaker 3: The spirit of the quote is that people often 607 00:34:22,920 --> 00:34:25,400 Speaker 3: go into a crime thinking they're betting on red, that 608 00:34:25,440 --> 00:34:28,120 Speaker 3: they're taking a known risk, when in fact they're 609 00:34:28,320 --> 00:34:31,560 Speaker 3: going into an ambiguous risk where they don't understand 610 00:34:31,560 --> 00:34:32,719 Speaker 3: how much risk there is. 611 00:34:33,800 --> 00:34:38,520 Speaker 2: So anyway, minor cinematic diversion there. Now, another higher stakes 612 00:34:38,640 --> 00:34:43,960 Speaker 2: area for the individual concerning ambiguity and ambiguity aversion is 613 00:34:44,000 --> 00:34:48,160 Speaker 2: how ambiguity often factors into the healthcare experience. I was 614 00:34:48,200 --> 00:34:52,880 Speaker 2: reading about this. So there are two broad categories within healthcare. 615 00:34:52,880 --> 00:34:57,239 Speaker 2: There's diagnostic ambiguity, where there's 616 00:34:57,280 --> 00:35:00,560 Speaker 2: ambiguity surrounding the underlying disease and how it works. And 617 00:35:00,600 --> 00:35:03,560 Speaker 2: then there's therapeutic ambiguity, like how do we treat it 618 00:35:03,680 --> 00:35:06,080 Speaker 2: or even cure it? And I imagine a lot of 619 00:35:06,080 --> 00:35:09,919 Speaker 2: you out there have encountered examples of this. 620 00:35:09,960 --> 00:35:14,160 Speaker 2: These different ambiguities can lead to rather opposite choices. Due 621 00:35:14,160 --> 00:35:17,000 Speaker 2: to ambiguity aversion, a patient might choose a treatment with 622 00:35:17,160 --> 00:35:21,200 Speaker 2: known but terrible side effects over one with the chance of, say, 623 00:35:21,239 --> 00:35:23,600 Speaker 2: a better outcome, but also unknown risks. 624 00:35:24,440 --> 00:35:27,120 Speaker 3: Or, because of ambiguity aversion, I think very often people in 625 00:35:27,160 --> 00:35:32,799 Speaker 3: healthcare will gravitate towards someone who offers them false certainty 626 00:35:33,040 --> 00:35:38,160 Speaker 3: versus someone who is honestly communicating the unknowns about the 627 00:35:38,239 --> 00:35:38,960 Speaker 3: level of risk. 628 00:35:39,560 --> 00:35:43,120 Speaker 2: Yeah, communication is key, and this becomes important in a 629 00:35:43,200 --> 00:35:46,120 Speaker 2: number of different ways. You know, there's an impact of 630 00:35:46,160 --> 00:35:50,240 Speaker 2: course on public health, and this highlights the importance of science 631 00:35:50,239 --> 00:35:53,400 Speaker 2: communication in healthcare, as well as the dangers of medical 632 00:35:53,480 --> 00:35:58,520 Speaker 2: misinformation and conspiracy thinking. For instance, in the context of vaccination, 633 00:35:59,040 --> 00:36:01,960 Speaker 2: ambiguity aversion can definitely take the form of 634 00:36:02,120 --> 00:36:06,399 Speaker 2: vaccine hesitancy. And I want to stress again here that 635 00:36:06,440 --> 00:36:10,600 Speaker 2: the ambiguity need only be perceived ambiguity. So say one 636 00:36:10,640 --> 00:36:13,399 Speaker 2: has already bought into such statements as, well, no one 637 00:36:13,400 --> 00:36:16,080 Speaker 2: really knows what these vaccines are, or no one 638 00:36:16,160 --> 00:36:20,160 Speaker 2: really knows how they work.
Just assuming that those statements 639 00:36:20,239 --> 00:36:23,560 Speaker 2: are true, or that there's some large degree of truth to them, 640 00:36:23,920 --> 00:36:28,000 Speaker 2: then there may well be enough subjective ambiguity in 641 00:36:28,080 --> 00:36:31,960 Speaker 2: place in the individual's mind to make them rather averse 642 00:36:32,160 --> 00:36:35,720 Speaker 2: to this perceived ambiguity and more likely to choose options 643 00:36:35,760 --> 00:36:39,400 Speaker 2: that bring with them greater risk but more of a 644 00:36:39,560 --> 00:36:45,680 Speaker 2: communicated idea of certainty, like dubious alternative preventative and treatment methods, 645 00:36:45,840 --> 00:36:47,759 Speaker 2: or just the idea of doing nothing at all. 646 00:36:47,880 --> 00:36:49,960 Speaker 3: The body knows how to take care of itself, that 647 00:36:50,080 --> 00:36:52,920 Speaker 3: kind of thinking. Yeah, which sounds very certain when delivered 648 00:36:52,960 --> 00:36:54,840 Speaker 3: by a confident and charismatic speaker. 649 00:36:55,160 --> 00:36:57,880 Speaker 2: Yeah. I was looking at a paper in Behavioral Medicine 650 00:36:57,880 --> 00:37:03,320 Speaker 2: from earlier this year, Psychological Correlates of Ambiguity Aversion in 651 00:37:03,440 --> 00:37:07,680 Speaker 2: the Context of COVID nineteen Vaccination, and this was by 652 00:37:08,239 --> 00:37:13,200 Speaker 2: Simonovic et al., and they found four different major takeaways 653 00:37:13,200 --> 00:37:16,080 Speaker 2: here that I thought were interesting. They found that Americans 654 00:37:16,120 --> 00:37:20,560 Speaker 2: who perceived higher ambiguity about COVID nineteen vaccines reported lower 655 00:37:20,600 --> 00:37:24,440 Speaker 2: worry and lower perceived severity of COVID nineteen, which were 656 00:37:24,440 --> 00:37:28,480 Speaker 2: each associated with lower vaccination intentions and lower information seeking 657 00:37:29,560 --> 00:37:31,600 Speaker 2: about COVID nineteen vaccines. 658 00:37:31,719 --> 00:37:34,560 Speaker 3: Okay. So if I'm understanding that right, Americans who had 659 00:37:34,960 --> 00:37:39,560 Speaker 3: lower confidence in the efficacy and safety of COVID vaccines 660 00:37:40,000 --> 00:37:43,880 Speaker 3: were also, for whatever reason, less worried about COVID infection. 661 00:37:45,040 --> 00:37:47,799 Speaker 2: Right. But at the same time, they were also, obviously, and this is kind of a no brainer, right, 662 00:37:48,120 --> 00:37:50,160 Speaker 2: it would seem to follow 663 00:37:50,200 --> 00:37:52,600 Speaker 2: that they would be less inclined 664 00:37:52,800 --> 00:37:56,080 Speaker 2: to have any intention of being vaccinated, and then would 665 00:37:56,080 --> 00:37:59,600 Speaker 2: be less inclined to seek out any information about said vaccinations. 666 00:38:00,200 --> 00:38:02,200 Speaker 2: Their mind at this point is already kind of made up.
667 00:38:02,360 --> 00:38:04,520 Speaker 3: Well, I think that's right. But it's also interesting to 668 00:38:04,920 --> 00:38:10,240 Speaker 3: note the counterintuitive forms that having your mind 669 00:38:10,360 --> 00:38:12,600 Speaker 3: made up in this way can take, because sometimes the 670 00:38:12,640 --> 00:38:15,320 Speaker 3: way that I think that is expressed is 671 00:38:15,400 --> 00:38:17,520 Speaker 3: just a statement of, like, oh yeah, nobody knows, 672 00:38:17,560 --> 00:38:20,000 Speaker 3: nobody can know about the vaccines, and so it just 673 00:38:20,040 --> 00:38:25,279 Speaker 3: becomes this infinitely and unsolvably dangerous thing out there 674 00:38:25,400 --> 00:38:27,920 Speaker 3: that, you know, you can never really have confidence 675 00:38:27,960 --> 00:38:30,200 Speaker 3: in. Your mind is made up about it, but it's 676 00:38:30,280 --> 00:38:32,680 Speaker 3: made up in a way that always just sort of 677 00:38:32,719 --> 00:38:35,080 Speaker 3: holds it as an unresolved danger. 678 00:38:35,680 --> 00:38:41,000 Speaker 2: Yeah, yeah. And I'm sure that the individuals on 679 00:38:41,000 --> 00:38:43,479 Speaker 2: that side of the argument might well make the counter 680 00:38:43,600 --> 00:38:46,000 Speaker 2: argument that, okay, well, people who trust in the vaccine, 681 00:38:46,000 --> 00:38:47,880 Speaker 2: they just trust blindly. They've just made up their 682 00:38:47,920 --> 00:38:50,720 Speaker 2: mind and they're not going to listen to any of 683 00:38:49,520 --> 00:38:53,120 Speaker 2: the criticisms. And I disagree with that. But 684 00:38:53,160 --> 00:38:57,320 Speaker 2: on the other hand, you know, speaking personally, 685 00:38:57,400 --> 00:38:59,800 Speaker 2: I'm not going to pretend 686 00:38:59,840 --> 00:39:02,520 Speaker 2: to be doctor Robert all the time, where I'm 687 00:39:02,520 --> 00:39:04,960 Speaker 2: doing all my 688 00:39:05,000 --> 00:39:08,719 Speaker 2: own research on, you know, health and dental concerns. 689 00:39:08,760 --> 00:39:11,640 Speaker 2: Ultimately, I have to trust in a professional who 690 00:39:12,320 --> 00:39:15,120 Speaker 2: knows what they're doing and is certified in what they're 691 00:39:15,160 --> 00:39:18,239 Speaker 2: doing, and leave it to them to tell me what 692 00:39:18,360 --> 00:39:21,040 Speaker 2: I should do, because I don't want to have all 693 00:39:21,080 --> 00:39:23,279 Speaker 2: these decisions on me. I don't want to do my 694 00:39:23,719 --> 00:39:26,200 Speaker 2: own research on everything. I've got enough research to do 695 00:39:26,239 --> 00:39:27,399 Speaker 2: during the week as it is. 696 00:39:27,840 --> 00:39:30,759 Speaker 3: Well, I mean, about medical matters: when people say 697 00:39:30,800 --> 00:39:34,400 Speaker 3: do your own research, ninety nine point whatever percent of 698 00:39:34,440 --> 00:39:37,080 Speaker 3: the time, somebody who says that either is not doing 699 00:39:37,080 --> 00:39:39,680 Speaker 3: any research at all, and it just means go with 700 00:39:39,719 --> 00:39:42,360 Speaker 3: your gut feeling, or if they are doing research, it 701 00:39:42,640 --> 00:39:45,560 Speaker 3: just means doing very bad research.
It means relying on 702 00:39:46,680 --> 00:39:49,360 Speaker 3: sources that you have no objective reason for thinking are 703 00:39:49,400 --> 00:39:53,360 Speaker 3: giving you good information, and that probably are just confirming your priors. 704 00:39:53,520 --> 00:39:57,360 Speaker 2: Yeah, like, I choose to research my health topic by 705 00:39:57,480 --> 00:39:59,200 Speaker 2: picking up Chariots of the Gods. 706 00:39:59,560 --> 00:40:02,720 Speaker 3: Yeah. I mean, I would argue, of course, doctors 707 00:40:02,719 --> 00:40:05,560 Speaker 3: and healthcare professionals can be wrong. That's possible, and it 708 00:40:05,560 --> 00:40:08,000 Speaker 3: happens all the time. But I think on average, you're 709 00:40:08,000 --> 00:40:10,040 Speaker 3: going to do a lot better just listening to 710 00:40:10,120 --> 00:40:13,200 Speaker 3: doctors than, say, reading a Facebook post and then making 711 00:40:13,280 --> 00:40:14,680 Speaker 3: up your own mind about medicine. 712 00:40:15,080 --> 00:40:18,239 Speaker 2: Yeah. So for this particular paper, I'm going to breeze through 713 00:40:18,280 --> 00:40:20,600 Speaker 2: the other main bullet points. Basically, they came down to 714 00:40:20,719 --> 00:40:24,239 Speaker 2: people who perceived higher ambiguity about COVID nineteen vaccines. They 715 00:40:24,600 --> 00:40:29,520 Speaker 2: reported higher anger about COVID nineteen vaccines, which was associated 716 00:40:29,520 --> 00:40:33,120 Speaker 2: with lower perceived severity of COVID nineteen. They reported lower 717 00:40:33,160 --> 00:40:36,040 Speaker 2: happiness about the vaccines, which was associated with both lower 718 00:40:36,080 --> 00:40:39,799 Speaker 2: worry and lower perceived severity of COVID nineteen. And 719 00:40:40,160 --> 00:40:42,560 Speaker 2: they also found that both the Americans and the Israelis who were 720 00:40:42,600 --> 00:40:45,080 Speaker 2: looked at in the study who perceived higher ambiguity about 721 00:40:45,080 --> 00:40:48,560 Speaker 2: COVID nineteen vaccines reported lower feelings of relaxation about the 722 00:40:48,560 --> 00:40:52,239 Speaker 2: COVID nineteen vaccine, which was associated with lower perceived severity 723 00:40:52,400 --> 00:40:53,680 Speaker 2: of the illness itself. 724 00:40:53,960 --> 00:40:57,280 Speaker 3: So you have these constellations of different tendencies acting together. 725 00:40:57,480 --> 00:41:01,480 Speaker 3: You've got a lower confidence or lower trust in 726 00:41:02,600 --> 00:41:06,360 Speaker 3: a healthcare treatment that also seems to go along with 727 00:41:06,560 --> 00:41:12,320 Speaker 3: having a lower concern or lower level of seriousness about 728 00:41:12,360 --> 00:41:15,719 Speaker 3: the condition that that treatment is supposed to treat, and less 729 00:41:15,719 --> 00:41:19,239 Speaker 3: emotional positivity about the idea of taking the treatment, 730 00:41:20,000 --> 00:41:23,399 Speaker 3: all of this together.
So, I mean, one 731 00:41:23,400 --> 00:41:26,520 Speaker 3: thing this sort of suggests to me is that, of 732 00:41:26,560 --> 00:41:29,640 Speaker 3: course this is obvious, but it just reminds us that 733 00:41:30,080 --> 00:41:34,120 Speaker 3: when we are making decisions, we do, to some extent, 734 00:41:34,200 --> 00:41:37,120 Speaker 3: try to make rational decisions based on our beliefs, but 735 00:41:37,160 --> 00:41:40,120 Speaker 3: we're also just highly emotional creatures, and our decision 736 00:41:40,160 --> 00:41:45,680 Speaker 3: making is deeply entangled with feelings we have about the 737 00:41:45,719 --> 00:41:48,319 Speaker 3: things we're making decisions about. You know, it's not really 738 00:41:48,360 --> 00:41:51,839 Speaker 3: possible for us to approach every decision in life as 739 00:41:51,920 --> 00:41:55,960 Speaker 3: purely odds-and-benefit-maximizing machines. We 740 00:41:56,000 --> 00:41:58,719 Speaker 3: have feelings about things, and those feelings, whether we 741 00:41:58,800 --> 00:42:00,920 Speaker 3: want them to or not, do influence the way we 742 00:42:00,960 --> 00:42:01,680 Speaker 3: make decisions. 743 00:42:01,800 --> 00:42:04,480 Speaker 2: Yeah, and they will override reason, and 744 00:42:04,560 --> 00:42:06,400 Speaker 2: all of us are susceptible to this 745 00:42:06,440 --> 00:42:10,080 Speaker 2: sort of thing. But yeah, another way that I 746 00:42:10,080 --> 00:42:12,279 Speaker 2: think it's interesting and dreadful to think about all of 747 00:42:12,280 --> 00:42:16,400 Speaker 2: this is that, again, there's generally some degree 748 00:42:16,400 --> 00:42:22,000 Speaker 2: of ambiguity with both medical diagnosis and medical treatment. And 749 00:42:22,200 --> 00:42:23,919 Speaker 2: I can speak from experience, and again 750 00:42:23,960 --> 00:42:26,560 Speaker 2: most of you can relate to this as well. 751 00:42:26,800 --> 00:42:28,760 Speaker 2: It can be frustrating when you go to a doctor 752 00:42:28,920 --> 00:42:31,000 Speaker 2: and instead of a clear course of action that you 753 00:42:31,080 --> 00:42:34,600 Speaker 2: might want, or a definite solution that you are seeking, 754 00:42:34,640 --> 00:42:37,360 Speaker 2: you instead get a certain amount of ambiguity about what 755 00:42:37,480 --> 00:42:40,439 Speaker 2: might be wrong and how it might be addressed. Because 756 00:42:40,480 --> 00:42:41,839 Speaker 2: at the end of the day, I think we all 757 00:42:41,920 --> 00:42:46,200 Speaker 2: want the Star Trek medical tricorder experience, where someone's able 758 00:42:46,200 --> 00:42:49,399 Speaker 2: to just scan us, perfect scan, see what ails us, 759 00:42:49,760 --> 00:42:51,880 Speaker 2: and then tell us exactly what they need to 760 00:42:51,880 --> 00:42:53,960 Speaker 2: do to fix it, probably with another device that they 761 00:42:54,040 --> 00:42:56,000 Speaker 2: just kind of run up and down our body. But 762 00:42:56,200 --> 00:42:59,279 Speaker 2: even with today's technology, it's more complex than that 763 00:42:59,360 --> 00:43:03,520 Speaker 2: in most cases. Meanwhile, where do we actually encounter a 764 00:43:03,680 --> 00:43:07,240 Speaker 2: lack of ambiguity in the messaging?
Again, it comes generally 765 00:43:07,320 --> 00:43:10,440 Speaker 2: in different forms of alternative medicine, or in snake oils, 766 00:43:10,840 --> 00:43:15,280 Speaker 2: or conspiracy thinking, because none of these voices ever says, look, 767 00:43:15,480 --> 00:43:18,080 Speaker 2: there's absolutely no evidence this works, but give it a shot, 768 00:43:18,200 --> 00:43:20,719 Speaker 2: or, we can't know for sure if lizard people run 769 00:43:20,800 --> 00:43:26,200 Speaker 2: a secret government in Antarctica. There's always plenty of 770 00:43:26,239 --> 00:43:29,319 Speaker 2: ambiguity in their evidence, of course, if it's supplied at all, 771 00:43:29,640 --> 00:43:33,200 Speaker 2: but you'll often find them maneuvering around that in their messaging, 772 00:43:33,239 --> 00:43:37,520 Speaker 2: which again is low on ambiguity and high on certainty. And 773 00:43:37,719 --> 00:43:42,080 Speaker 2: like you mentioned earlier, when it comes to legitimate medical messaging, 774 00:43:42,160 --> 00:43:45,640 Speaker 2: it tends to communicate both probability and risk. You'll get 775 00:43:45,960 --> 00:43:48,160 Speaker 2: that list of fast-talked side effects at the end 776 00:43:48,160 --> 00:43:53,879 Speaker 2: of your advertisement for an actual certified medication. And again, 777 00:43:53,920 --> 00:43:58,400 Speaker 2: this is just the inherent uncertainty in even very modern medicine. 778 00:44:07,600 --> 00:44:10,880 Speaker 2: Another paper I looked at was twenty twenty two's Vaccine 779 00:44:10,920 --> 00:44:15,040 Speaker 2: Hesitancy and Cognitive Biases by Casigliani et al., and this 780 00:44:15,080 --> 00:44:19,080 Speaker 2: points out that vaccination education strategies long followed a fact 781 00:44:19,080 --> 00:44:23,440 Speaker 2: based approach, but given our aversion to ambiguity, along with 782 00:44:23,560 --> 00:44:28,480 Speaker 2: other biases and heuristics, these various cognitive tendencies in messaging 783 00:44:28,840 --> 00:44:31,200 Speaker 2: have to be factored into the outreach. And of course 784 00:44:31,360 --> 00:44:34,359 Speaker 2: this gets into the complex way in which public health 785 00:44:34,360 --> 00:44:37,720 Speaker 2: and vaccine messaging have to attempt to counteract all the noise, 786 00:44:38,880 --> 00:44:42,640 Speaker 2: sometimes by leaning more on empathetic and audience based approaches 787 00:44:42,760 --> 00:44:45,960 Speaker 2: rather than fact based debate, because again, in many cases, 788 00:44:46,000 --> 00:44:50,400 Speaker 2: the actual medical facts have baked-in ambiguity, while the 789 00:44:50,480 --> 00:44:54,479 Speaker 2: non-scientific counterclaims do not. Again, 790 00:44:54,520 --> 00:44:58,960 Speaker 2: this is an absolutely frustrating and increasingly dangerous 791 00:44:58,960 --> 00:45:02,600 Speaker 2: part of our reality. But yeah, the voice saying that 792 00:45:03,120 --> 00:45:05,840 Speaker 2: the vaccine has microchips in it that will turn you 793 00:45:05,880 --> 00:45:08,960 Speaker 2: into robots, you know, there's no ambiguity to that statement. 794 00:45:09,000 --> 00:45:12,759 Speaker 2: They're generally very clear, 795 00:45:12,840 --> 00:45:18,279 Speaker 2: very certain in it.
Whereas again, actual medical facts and 796 00:45:18,440 --> 00:45:22,400 Speaker 2: actual medicine based arguments are going to have some level 797 00:45:22,440 --> 00:45:24,440 Speaker 2: of ambiguity in their messaging. 798 00:45:24,840 --> 00:45:27,759 Speaker 3: This is one of the things that makes communication about 799 00:45:27,840 --> 00:45:31,160 Speaker 3: science generally, but especially about something very high stakes like 800 00:45:31,160 --> 00:45:34,839 Speaker 3: medical science and medical treatments, so 801 00:45:34,960 --> 00:45:38,640 Speaker 3: difficult, just because you have to 802 00:45:38,680 --> 00:45:41,600 Speaker 3: find this very difficult balance: you are 803 00:45:41,640 --> 00:45:45,640 Speaker 3: compelled ethically to be forthright and honest about what you know, 804 00:45:46,200 --> 00:45:48,120 Speaker 3: but you also have to find a way 805 00:45:48,160 --> 00:45:51,319 Speaker 3: to present that truth in a way that will be 806 00:45:51,560 --> 00:45:56,640 Speaker 3: rhetorically effective and hit correctly. So 807 00:45:56,760 --> 00:45:59,640 Speaker 3: you are balancing things that somebody who is not 808 00:46:00,160 --> 00:46:03,360 Speaker 3: bound by an obligation to be ethical and honest 809 00:46:03,400 --> 00:46:06,120 Speaker 3: doesn't have to balance. They can focus only on the rhetorical 810 00:46:06,160 --> 00:46:08,439 Speaker 3: part of the appeal. All they have to worry 811 00:46:08,480 --> 00:46:12,160 Speaker 3: about is, is what I'm saying exciting and charismatic and convincing? 812 00:46:13,600 --> 00:46:15,760 Speaker 3: They don't have the other side of the scale to balance. 813 00:46:16,200 --> 00:46:20,000 Speaker 2: That's right, that's right. And obviously this easily spills over 814 00:46:20,200 --> 00:46:26,000 Speaker 2: into other areas, you know: pseudo-archaeology, pseudo-paleontology, other pseudosciences, 815 00:46:26,120 --> 00:46:31,759 Speaker 2: alternative history, ufology, and more. Because, as I think we 816 00:46:31,920 --> 00:46:35,720 Speaker 2: really strive to drive home in related episodes of this show, 817 00:46:36,360 --> 00:46:42,160 Speaker 2: actual history and archaeology inherently contain ambiguity. All histories are, 818 00:46:42,640 --> 00:46:44,960 Speaker 2: to some degree or another, imperfect. Some are just more 819 00:46:45,000 --> 00:46:48,400 Speaker 2: imperfect than others. You know, as much as 820 00:46:48,440 --> 00:46:50,359 Speaker 2: we know about certain periods of the past and certain 821 00:46:50,400 --> 00:46:53,240 Speaker 2: individuals and events, we still have gaps, we have disagreements 822 00:46:53,280 --> 00:46:58,800 Speaker 2: about interpretations, we have uncertainties. And I'm reminded 823 00:46:58,800 --> 00:47:04,600 Speaker 2: too of Alan Moore's use of the Koch snowflake, 824 00:47:04,680 --> 00:47:08,279 Speaker 2: that's K-O-C-H, in nineteen eighty nine's From Hell, his 825 00:47:08,400 --> 00:47:13,480 Speaker 2: graphic novel about Jack the Ripper. So, the Koch snowflake 826 00:47:13,560 --> 00:47:16,840 Speaker 2: is a fractal curve that was first described by Niels 827 00:47:16,880 --> 00:47:20,680 Speaker 2: Fabian Helge von Koch in nineteen oh four.
The idea 828 00:47:20,680 --> 00:47:23,000 Speaker 2: here is that it has a finite area but an 829 00:47:23,040 --> 00:47:30,040 Speaker 2: infinitely long perimeter, so lots of crenulations. And Moore's treatment 830 00:47:30,080 --> 00:47:33,800 Speaker 2: of this is that history has a shape with infinite 831 00:47:33,960 --> 00:47:38,960 Speaker 2: perimeter and finite area, a labyrinth of connections and repeated patterns 832 00:47:39,000 --> 00:47:42,360 Speaker 2: that we never fully comprehend. Ah. 833 00:47:42,360 --> 00:47:45,359 Speaker 3: That's interesting. So if I understand that right, it sort 834 00:47:45,360 --> 00:47:49,080 Speaker 3: of explains why, as you read more about history and 835 00:47:49,160 --> 00:47:53,040 Speaker 3: learn more about history, you can, by incorporating more and 836 00:47:53,080 --> 00:47:55,560 Speaker 3: more knowledge and holding it, you know, parallel in your mind, 837 00:47:56,000 --> 00:47:59,520 Speaker 3: have in some ways a stronger and realer 838 00:47:59,560 --> 00:48:04,359 Speaker 3: picture of the past. But also the increasing informational 839 00:48:04,440 --> 00:48:10,040 Speaker 3: complexity makes everything increasingly blurry as well. You're 840 00:48:10,080 --> 00:48:14,759 Speaker 3: always finding it becomes harder 841 00:48:14,800 --> 00:48:18,360 Speaker 3: to have causal through lines and clear narratives about 842 00:48:18,360 --> 00:48:20,840 Speaker 3: why history happens, because the more you know, the more 843 00:48:21,280 --> 00:48:25,120 Speaker 3: other causal factors you can consider when thinking about 844 00:48:25,160 --> 00:48:28,319 Speaker 3: why something happened in history. And ultimately it gets to 845 00:48:28,320 --> 00:48:31,200 Speaker 3: the point where it's so complex that it's like, who 846 00:48:31,200 --> 00:48:32,440 Speaker 3: knows why anything happened? 847 00:48:32,760 --> 00:48:35,160 Speaker 2: Right, right. And that was his point about who 848 00:48:35,280 --> 00:48:37,839 Speaker 2: was Jack the Ripper. You know, we end 849 00:48:37,920 --> 00:48:40,160 Speaker 2: up with all of these, just so many books, so 850 00:48:40,239 --> 00:48:44,520 Speaker 2: many theories, so much literature and writing and conspiracy theories 851 00:48:44,520 --> 00:48:46,920 Speaker 2: and so forth, and for the most part, it's all 852 00:48:46,960 --> 00:48:50,080 Speaker 2: within the confines of that snowflake. The actual answer is 853 00:48:50,120 --> 00:48:52,520 Speaker 2: outside the snowflake. We can 854 00:48:52,520 --> 00:48:56,920 Speaker 2: never actually determine it with any degree of accuracy; it would just 855 00:48:56,960 --> 00:49:03,560 Speaker 2: be conjecture. At the same time, conspiracy 856 00:49:03,560 --> 00:49:06,560 Speaker 2: thinking itself engages in something akin to the Koch snowflake, 857 00:49:06,600 --> 00:49:09,360 Speaker 2: you know, because it's often an infinite search 858 00:49:09,400 --> 00:49:11,799 Speaker 2: for hidden details, whether those details are there or not. 859 00:49:12,239 --> 00:49:15,239 Speaker 2: But still the basic premise holds true. Actual history is complex.
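For reference, here is the standard arithmetic behind the snowflake property described above, sketched from the usual construction (a textbook result, included as an illustration). Each iteration replaces every edge with four edges, each one third as long, so the perimeter is multiplied by \( \tfrac{4}{3} \) at every step,

\[ P_n = \left( \tfrac{4}{3} \right)^n P_0 \longrightarrow \infty \quad \text{as } n \to \infty, \]

while the area added at each step shrinks geometrically and the total converges to

\[ A_\infty = \tfrac{8}{5} A_0, \]

where \( P_0 \) and \( A_0 \) are the perimeter and area of the starting triangle. A boundary that grows without bound around a region that stays finite is exactly the shape Moore borrows for history.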
860 00:49:15,560 --> 00:49:18,960 Speaker 2: Meanwhile, the narrative provided by conspiracy thinking is generally simple 861 00:49:19,280 --> 00:49:22,160 Speaker 2: and single track, leading to a singular force at the 862 00:49:22,160 --> 00:49:26,840 Speaker 2: center of the entire conspiracy. It 863 00:49:26,880 --> 00:49:28,919 Speaker 2: will be more akin to, why did this happen? Well, 864 00:49:28,920 --> 00:49:32,200 Speaker 2: it's this evil cabal or this, you 865 00:49:32,239 --> 00:49:36,200 Speaker 2: know, evil corporation leader and so forth, as opposed to, well, 866 00:49:36,239 --> 00:49:40,640 Speaker 2: there are multiple societal factors and historical anomalies that play 867 00:49:40,680 --> 00:49:46,280 Speaker 2: into this particular reality. And again, more ambiguity in real life, 868 00:49:46,640 --> 00:49:51,840 Speaker 2: less ambiguity or no ambiguity at all in the conspiracy argument. Yeah, 869 00:49:52,000 --> 00:49:54,080 Speaker 2: it comes back to, it's like stocks. You know, I don't 870 00:49:54,120 --> 00:49:55,640 Speaker 2: know anything about stocks, but I know that when I 871 00:49:55,719 --> 00:49:58,880 Speaker 2: see that little advertisement at the bottom of a blog, 872 00:49:58,960 --> 00:50:03,360 Speaker 2: where there's a picture of, like, an old person smiling, 873 00:50:03,400 --> 00:50:06,319 Speaker 2: and it's like, this is a surefire bet stock that 874 00:50:06,440 --> 00:50:08,760 Speaker 2: so and so has just identified, I'm instantly 875 00:50:08,840 --> 00:50:12,600 Speaker 2: suspicious, because it seems a little bit too good to 876 00:50:12,600 --> 00:50:15,680 Speaker 2: be true. There's not much ambiguity to 877 00:50:15,719 --> 00:50:20,160 Speaker 2: this particular suggestion, and it would be a risky click. All right, 878 00:50:20,200 --> 00:50:22,319 Speaker 2: one more area I want to touch on here, because I 879 00:50:22,360 --> 00:50:26,680 Speaker 2: was looking at a really recent paper on ambiguity aversion. Okay, 880 00:50:26,680 --> 00:50:31,120 Speaker 2: here's a hypothetical scenario. You're considering a couple of vacation options, 881 00:50:31,280 --> 00:50:33,759 Speaker 2: and we'll assume for the purposes of this experiment that 882 00:50:34,200 --> 00:50:37,480 Speaker 2: the cost and other matters are equal here. One is 883 00:50:37,520 --> 00:50:40,280 Speaker 2: a trip to a family vacation destination that you've visited 884 00:50:40,360 --> 00:50:44,120 Speaker 2: numerous times before, and the other is an entirely new experience. 885 00:50:44,200 --> 00:50:46,600 Speaker 2: Maybe it's visiting a country you've never traveled to, or 886 00:50:46,640 --> 00:50:50,040 Speaker 2: something of that nature. And then, as you're making up 887 00:50:50,080 --> 00:50:52,719 Speaker 2: your mind, what do you do? You flip on the 888 00:50:52,719 --> 00:50:57,040 Speaker 2: news and, surprise, surprise, it's all negative, and it's all about traveling. 889 00:50:57,920 --> 00:51:01,640 Speaker 2: Would this exposure to negative news influence your decision making 890 00:51:01,680 --> 00:51:05,319 Speaker 2: and push you toward the less ambiguous choice, or would 891 00:51:05,360 --> 00:51:09,680 Speaker 2: you still basically make your decision as you would 892 00:51:09,840 --> 00:51:11,600 Speaker 2: have if you hadn't watched the news at all? 893 00:51:12,239 --> 00:51:14,600 Speaker 3: Well, I guess it could go multiple ways.
But yeah, 894 00:51:14,640 --> 00:51:19,080 Speaker 3: I would tend to assume that increasing my anxiety would 895 00:51:19,160 --> 00:51:22,440 Speaker 3: make me more ambiguity averse, and that I would be 896 00:51:22,719 --> 00:51:28,600 Speaker 3: seeking more known odds and familiar experiences if my anxiety 897 00:51:28,680 --> 00:51:29,520 Speaker 3: levels were higher. 898 00:51:30,719 --> 00:51:33,440 Speaker 2: Yeah, yeah. You know, prior to reading this paper, I 899 00:51:33,520 --> 00:51:36,560 Speaker 2: might have jumped at this thinking as well. I have 900 00:51:36,760 --> 00:51:39,200 Speaker 2: very rarely altered travel plans due to something I 901 00:51:39,280 --> 00:51:40,960 Speaker 2: saw in the news. But I'll be the first to 902 00:51:41,000 --> 00:51:43,560 Speaker 2: admit that news items can rattle my resolve a bit, 903 00:51:43,600 --> 00:51:46,080 Speaker 2: you know, they can make me more anxious. So I 904 00:51:46,120 --> 00:51:48,759 Speaker 2: would have guessed, yeah, negative news can surely push one 905 00:51:48,760 --> 00:51:51,920 Speaker 2: away from risk, away from ambiguity, and back into the familiar, 906 00:51:52,040 --> 00:51:54,960 Speaker 2: and more to the point, might be likely to do 907 00:51:55,040 --> 00:51:55,920 Speaker 2: so on the whole. 908 00:51:56,520 --> 00:51:58,200 Speaker 3: Now, one of the only things I would think 909 00:51:58,400 --> 00:52:00,239 Speaker 3: might push me back in the other direction is 910 00:52:00,280 --> 00:52:04,280 Speaker 3: a finding we talked about in the previous episode, where essentially, 911 00:52:04,440 --> 00:52:11,759 Speaker 3: if the known odds are bad, people's ambiguity aversion 912 00:52:11,960 --> 00:52:15,480 Speaker 3: lessens and they'll take the other bet, right. So, in the 913 00:52:15,800 --> 00:52:18,920 Speaker 3: three color urn experiment, if instead of thirty red balls 914 00:52:18,960 --> 00:52:22,320 Speaker 3: there are, like, five red balls, people at that point 915 00:52:22,360 --> 00:52:25,640 Speaker 3: become more likely to bet on black, because it's like, well, 916 00:52:25,760 --> 00:52:28,560 Speaker 3: the odds of the thing you know about are terrible, 917 00:52:29,000 --> 00:52:32,680 Speaker 3: so why not take a gamble on something else? So 918 00:52:33,239 --> 00:52:36,120 Speaker 3: I wonder maybe if that could push you back the 919 00:52:36,160 --> 00:52:40,160 Speaker 3: other way. Like, my intuition is that anxiety makes you 920 00:52:40,239 --> 00:52:43,200 Speaker 3: more ambiguity averse and just want to go with something 921 00:52:43,280 --> 00:52:46,400 Speaker 3: where the odds are clear. But maybe anxiety makes you 922 00:52:46,480 --> 00:52:50,880 Speaker 3: believe that, essentially, the odds of stasis are really bad, 923 00:52:51,280 --> 00:52:53,480 Speaker 3: and thus you might as well take a gamble on 924 00:52:53,520 --> 00:52:54,280 Speaker 3: something different. 925 00:52:55,200 --> 00:52:58,880 Speaker 2: Yeah, yeah. You know, we also have 926 00:52:59,120 --> 00:53:03,520 Speaker 2: to acknowledge that news reporting can certainly change 927 00:53:03,880 --> 00:53:07,440 Speaker 2: the perceived odds of something happening. Like, if you were 928 00:53:07,440 --> 00:53:10,840 Speaker 2: to watch a hypothetical news network that 929 00:53:10,960 --> 00:53:13,879 Speaker 2: was just twenty four hour coverage of snake bites. That's 930 00:53:13,920 --> 00:53:17,480 Speaker 2: all they covered.
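For reference, the arithmetic behind the urn variation Joe describes above works out as follows (a sketch using the classic Ellsberg three color setup of thirty red balls plus sixty black or yellow balls in unknown proportion; the five ball variant is an illustrative tweak). In the standard case, a bet on red wins with probability

\[ P(\text{red}) = \frac{30}{90} = \frac{1}{3}, \]

while a bet on black wins with an unknown probability anywhere from \( 0 \) to \( \tfrac{60}{90} \); if every mixture of black and yellow is treated as equally likely, the expected chance for black is also \( \tfrac{1}{3} \), yet most people still prefer red. Cut the red count to five, keeping the sixty unknown balls, and

\[ P(\text{red}) = \frac{5}{65} \approx 7.7\%, \]

while the same even handed guess for black gives \( \tfrac{30}{65} \approx 46\% \), which is when the ambiguous bet starts to look worth taking.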
Anytime, anywhere in the world, someone's bitten 931 00:53:17,520 --> 00:53:19,800 Speaker 2: by a snake, they are there to cover it in full. 932 00:53:20,320 --> 00:53:23,520 Speaker 2: This would certainly give you a heightened awareness 933 00:53:23,760 --> 00:53:26,399 Speaker 2: of the legitimate risks of snake bite, but also 934 00:53:26,520 --> 00:53:30,680 Speaker 2: maybe an inflated feeling about your personal risk of snake bite. 935 00:53:31,000 --> 00:53:33,560 Speaker 3: Yeah, okay, so that's different in that what I was 936 00:53:33,600 --> 00:53:36,160 Speaker 3: just talking about was just how general emotional affect 937 00:53:36,239 --> 00:53:38,560 Speaker 3: could affect the way you reason, but it could also 938 00:53:38,600 --> 00:53:39,640 Speaker 3: be content specific. 939 00:53:39,920 --> 00:53:43,359 Speaker 2: You're right, yeah. But anyway, mostly what we're dealing 940 00:53:43,400 --> 00:53:48,040 Speaker 2: with here is the general affect. So a recent study from 941 00:53:48,280 --> 00:53:51,200 Speaker 2: Adelphi University looked into this. It was published in Frontiers 942 00:53:51,200 --> 00:53:54,280 Speaker 2: in Psychology and titled Negative News Exposure Does Not Affect 943 00:53:54,360 --> 00:53:57,840 Speaker 2: Risk or Ambiguity Aversion. So their answer is pretty 944 00:53:57,880 --> 00:54:00,319 Speaker 2: much in the headline, as is often the case. Now, 945 00:54:00,600 --> 00:54:02,680 Speaker 2: this is another one of those small studies. I believe 946 00:54:02,680 --> 00:54:05,160 Speaker 2: they worked with groups here of like eighty 947 00:54:05,200 --> 00:54:07,520 Speaker 2: four and two hundred and twenty nine, so you know, 948 00:54:07,960 --> 00:54:11,800 Speaker 2: as always, more research is required. This is not something 949 00:54:11,840 --> 00:54:13,440 Speaker 2: you can just take to the bank, but it's an 950 00:54:13,440 --> 00:54:17,319 Speaker 2: interesting possibility. The authors point out that while studies have 951 00:54:17,440 --> 00:54:21,520 Speaker 2: found that negative affect can increase risk aversion and ambiguity 952 00:54:21,560 --> 00:54:25,440 Speaker 2: aversion, we didn't know, and I quote, whether these effects generalized 953 00:54:25,440 --> 00:54:29,680 Speaker 2: to more realistic negative stimuli such as watching the news. 954 00:54:30,440 --> 00:54:34,120 Speaker 3: Oh, okay. So maybe in previous studies it's been found 955 00:54:34,200 --> 00:54:39,960 Speaker 3: that negative emotions can increase ambiguity aversion and risk aversion, 956 00:54:40,400 --> 00:54:43,280 Speaker 3: and so you would just assume that stimuli like watching 957 00:54:43,320 --> 00:54:46,959 Speaker 3: the news would also increase those aversions. But maybe 958 00:54:47,000 --> 00:54:47,480 Speaker 3: it doesn't. 959 00:54:48,040 --> 00:54:49,840 Speaker 2: Yeah, and you can see why this would be something 960 00:54:49,920 --> 00:54:52,400 Speaker 2: worth knowing. I mean, not only from a general standpoint 961 00:54:52,440 --> 00:54:56,840 Speaker 2: of understanding, you know, how people make choices based on 962 00:54:57,080 --> 00:54:59,719 Speaker 2: their mood and what influences their mood.
That's certainly valid, 963 00:55:00,120 --> 00:55:02,560 Speaker 2: and that's the main point of the research here, but 964 00:55:02,680 --> 00:55:06,200 Speaker 2: also you can imagine it being a very interesting 965 00:55:06,200 --> 00:55:10,480 Speaker 2: topic for individuals buying advertisement time on, say, a twenty four 966 00:55:10,480 --> 00:55:15,399 Speaker 2: hour news network, especially if, say, they want 967 00:55:15,440 --> 00:55:17,760 Speaker 2: to have an advertisement for some sort of adventure travel 968 00:55:17,880 --> 00:55:21,600 Speaker 2: or something of that nature. You know, bungee cords, I 969 00:55:21,600 --> 00:55:24,319 Speaker 2: don't know, whatever you happen to be selling. So, in 970 00:55:24,360 --> 00:55:27,760 Speaker 2: this particular study, one group was exposed to negative news, 971 00:55:27,960 --> 00:55:29,840 Speaker 2: and in this case it was, like, news about a 972 00:55:29,840 --> 00:55:32,879 Speaker 2: car crash, and the other group was exposed to neutral news, 973 00:55:32,920 --> 00:55:35,080 Speaker 2: so not great news, not good news, not like a 974 00:55:35,120 --> 00:55:39,240 Speaker 2: real fluff piece, but just something about train schedules, something 975 00:55:39,360 --> 00:55:41,920 Speaker 2: nice and boring, and, man, maybe even a little interesting. 976 00:55:42,560 --> 00:55:45,600 Speaker 2: And they found that although participants who watched negative news 977 00:55:45,640 --> 00:55:49,440 Speaker 2: reported a significant increase in negative affect, they did not 978 00:55:49,600 --> 00:55:52,000 Speaker 2: differ from the neutral news group in their risk or 979 00:55:52,080 --> 00:55:53,880 Speaker 2: ambiguity preferences. 980 00:55:53,880 --> 00:55:57,680 Speaker 3: Interesting. Okay, so they said past studies had found negative 981 00:55:57,680 --> 00:56:03,040 Speaker 3: affect increases ambiguity aversion. They showed people bad news 982 00:56:03,120 --> 00:56:06,800 Speaker 3: and it did increase negative affect, but did not increase 983 00:56:06,840 --> 00:56:07,840 Speaker 3: ambiguity aversion. 984 00:56:08,560 --> 00:56:11,959 Speaker 2: Right, right. So I don't think it means, go ahead 985 00:56:11,960 --> 00:56:13,759 Speaker 2: and consume all the negative news you want, it's not 986 00:56:13,800 --> 00:56:16,960 Speaker 2: going to impact you. No, it can and will 987 00:56:17,000 --> 00:56:19,640 Speaker 2: impact you. But what the study seems to be suggesting, 988 00:56:19,760 --> 00:56:26,120 Speaker 2: rather, is that sort of incidental or realistic consumption 989 00:56:26,280 --> 00:56:29,879 Speaker 2: of negative news is not going 990 00:56:29,920 --> 00:56:32,200 Speaker 2: to be enough of a force in and of itself 991 00:56:32,280 --> 00:56:36,840 Speaker 2: to move the scale and actually impact your decision making process. Again, 992 00:56:36,920 --> 00:56:39,520 Speaker 2: on the whole, this would be a generalization, 993 00:56:39,640 --> 00:56:41,880 Speaker 2: and again it's based on, you know, a smaller study, and 994 00:56:41,880 --> 00:56:43,880 Speaker 2: there are so many factors on top of this that you 995 00:56:43,920 --> 00:56:47,880 Speaker 2: could throw into the scenario. But both groups were equally 996 00:56:48,040 --> 00:56:51,320 Speaker 2: likely to choose a certain gamble over an ambiguous one 997 00:56:51,520 --> 00:56:53,880 Speaker 2: following the consumption of their media.
998 00:56:53,920 --> 00:56:55,560 Speaker 3: Oh, well, something just occurred to me. I have no 999 00:56:55,640 --> 00:56:57,759 Speaker 3: reason to think this is really true, because I'm sure 1000 00:56:57,800 --> 00:57:00,279 Speaker 3: the authors would have done a pretty good job of 1001 00:57:01,320 --> 00:57:03,600 Speaker 3: planning this out. But, like, what if the control group's 1002 00:57:03,680 --> 00:57:08,720 Speaker 3: train schedule news was actually pretty demoralizing? It's supposed 1003 00:57:08,760 --> 00:57:11,920 Speaker 3: to be a neutral control, but actually it made people 1004 00:57:12,040 --> 00:57:13,560 Speaker 3: equally ambiguity averse. 1005 00:57:14,000 --> 00:57:16,080 Speaker 2: Ha, that would have been an interesting outcome. No, no, no, 1006 00:57:16,120 --> 00:57:21,840 Speaker 2: my understanding is that basically the normal parameters of 1007 00:57:21,960 --> 00:57:25,840 Speaker 2: ambiguity aversion were in place, so people were still averse 1008 00:57:25,880 --> 00:57:30,080 Speaker 2: to ambiguity, but in equal measure, whether they had just 1009 00:57:30,160 --> 00:57:32,280 Speaker 2: learned about a car crash or about the exciting world 1010 00:57:32,360 --> 00:57:33,360 Speaker 2: of train schedules. 1011 00:57:33,560 --> 00:57:37,720 Speaker 3: Yeah, so they basically just conformed roughly to Ellsberg's predictions. 1012 00:57:38,360 --> 00:57:42,560 Speaker 2: Yes, exactly. But again, so many studies come out about this. 1013 00:57:42,960 --> 00:57:45,080 Speaker 2: There are new ones all the time. It's just such a 1014 00:57:45,760 --> 00:57:50,360 Speaker 2: fascinating way to sort of get in and crunch how 1015 00:57:50,600 --> 00:57:54,840 Speaker 2: we make decisions in life based on how we deal 1016 00:57:54,840 --> 00:57:58,280 Speaker 2: with ambiguity and the inherent ambiguity of life. 1017 00:57:58,680 --> 00:58:02,280 Speaker 3: Yeah. And again, I can't emphasize this enough: because 1018 00:58:02,360 --> 00:58:05,680 Speaker 3: these studies want to quantify the effect, they need to 1019 00:58:05,760 --> 00:58:09,680 Speaker 3: use these fixed games that include, you know, 1020 00:58:10,160 --> 00:58:14,360 Speaker 3: certain odds and risk conditions where the odds are 1021 00:58:14,480 --> 00:58:17,800 Speaker 3: very clear. But in real life you're dealing with basically 1022 00:58:17,840 --> 00:58:21,080 Speaker 3: top to bottom ambiguity. It's real ambiguity all the time. Really 1023 00:58:21,200 --> 00:58:25,840 Speaker 3: rarely do we encounter decisions where our probability of success 1024 00:58:26,040 --> 00:58:30,720 Speaker 3: is absolutely clear. Some decisions have 1025 00:58:30,760 --> 00:58:34,160 Speaker 3: clearer odds than others, but you never really know what 1026 00:58:34,200 --> 00:58:34,880 Speaker 3: your odds are. 1027 00:58:35,600 --> 00:58:37,320 Speaker 2: Yeah, I mean, that's one of the reasons, we've talked 1028 00:58:37,320 --> 00:58:38,480 Speaker 2: about this before on the show, it's one of the 1029 00:58:38,520 --> 00:58:42,360 Speaker 2: reasons we like things like zombie apocalypse scenarios: not because 1030 00:58:42,360 --> 00:58:45,400 Speaker 2: we necessarily like the idea of the world ending, or 1031 00:58:45,440 --> 00:58:47,240 Speaker 2: of the dead coming back to life and trying to 1032 00:58:47,240 --> 00:58:51,240 Speaker 2: eat our brains.
But baked into those scenarios, there's often 1033 00:58:51,320 --> 00:58:55,760 Speaker 2: like a simplification of how the world works, a removal 1034 00:58:55,840 --> 00:58:58,800 Speaker 2: of some degree of the ambiguity. And in any good one, 1035 00:58:59,000 --> 00:59:01,200 Speaker 2: I think you're still going to have 1036 00:59:01,240 --> 00:59:05,040 Speaker 2: a contemplation of ambiguity as well. But in broad strokes, 1037 00:59:05,080 --> 00:59:07,960 Speaker 2: you might have a scenario boiled down to, okay, well, 1038 00:59:07,960 --> 00:59:10,280 Speaker 2: now it's the living versus the dead. Now it's the 1039 00:59:10,320 --> 00:59:14,200 Speaker 2: clear good guys versus the bad guys. And what is 1040 00:59:14,240 --> 00:59:16,920 Speaker 2: the answer? Well, it's always blasting it with a shotgun. 1041 00:59:16,560 --> 00:59:20,480 Speaker 3: Right. Isn't it interesting how in most stories, any major 1042 00:59:20,760 --> 00:59:25,080 Speaker 3: change that happens to the characters is because of whatever 1043 00:59:25,160 --> 00:59:27,720 Speaker 3: the struggle in the story is about? It's not like 1044 00:59:28,040 --> 00:59:31,280 Speaker 3: totally random, out of nowhere exogenous events come in and 1045 00:59:31,320 --> 00:59:35,400 Speaker 3: completely change the story. Occasionally that happens, and people, I think, 1046 00:59:35,840 --> 00:59:38,760 Speaker 3: often find that quite interesting in storytelling, because 1047 00:59:38,760 --> 00:59:39,400 Speaker 3: it's pretty. 1048 00:59:39,240 --> 00:59:43,760 Speaker 2: Rare. You mean where the cause of the scenario is ambiguous? 1049 00:59:43,800 --> 00:59:46,240 Speaker 3: I mean where, like, a major change maybe comes in 1050 00:59:46,280 --> 00:59:48,200 Speaker 3: the middle of the story and it has nothing to 1051 00:59:48,280 --> 00:59:50,320 Speaker 3: do with what the main struggle or plot of the 1052 00:59:50,360 --> 00:59:53,479 Speaker 3: story is, it just comes out of nowhere. Yeah, I can't 1053 00:59:53,480 --> 00:59:55,080 Speaker 3: think of an example off the top of my head, 1054 00:59:55,120 --> 00:59:57,280 Speaker 3: but I know there are some like this, right in 1055 00:59:57,320 --> 00:59:58,560 Speaker 3: line with your example. 1056 00:59:58,520 --> 01:00:00,160 Speaker 2: All right, we're gonna go ahead and close up this episode, 1057 01:00:00,200 --> 01:00:01,720 Speaker 2: but we'd love to hear from everyone out there. We 1058 01:00:01,760 --> 01:00:05,000 Speaker 2: know that everyone has experience with ambiguity in life: judging it, 1059 01:00:05,200 --> 01:00:08,760 Speaker 2: being averse to it, rolling the dice anyway, and so forth. 1060 01:00:09,080 --> 01:00:11,280 Speaker 2: So write in, we would love to hear from you. 1061 01:00:11,720 --> 01:00:13,320 Speaker 2: Just a reminder that Stuff to Blow Your Mind is 1062 01:00:13,360 --> 01:00:15,720 Speaker 2: primarily a science and culture podcast, with core episodes on 1063 01:00:15,760 --> 01:00:18,640 Speaker 2: Tuesdays and Thursdays, short form episodes on Wednesdays, and on Fridays 1064 01:00:18,680 --> 01:00:21,160 Speaker 2: we set aside most serious concerns to just talk about 1065 01:00:21,160 --> 01:00:23,240 Speaker 2: a weird film on Weird House Cinema. 1066 01:00:23,400 --> 01:00:27,160 Speaker 3: Huge thanks as always to our excellent audio producer JJ Posway.
1067 01:00:27,240 --> 01:00:28,800 Speaker 3: If you would like to get in touch with us 1068 01:00:28,800 --> 01:00:31,160 Speaker 3: with feedback on this episode or any other, to suggest 1069 01:00:31,240 --> 01:00:33,160 Speaker 3: a topic for the future, or just to say hello, 1070 01:00:33,600 --> 01:00:36,240 Speaker 3: you can email us at contact at Stuff to Blow 1071 01:00:36,240 --> 01:00:44,200 Speaker 3: Your Mind dot com. 1072 01:00:44,680 --> 01:00:47,600 Speaker 1: Stuff to Blow Your Mind is a production of iHeartRadio. For 1073 01:00:47,680 --> 01:00:50,480 Speaker 1: more podcasts from iHeartRadio, visit the iHeartRadio app, 1074 01:00:50,640 --> 01:01:06,040 Speaker 1: Apple Podcasts, or wherever you listen to your favorite shows.