Speaker 1: How do brains time travel? We all know what a prediction is, but why is the important thing to the brain what's called a prediction error? And what does any of this have to do with the two thousand and eight crash of the economy, or how we keep internal price tags on everything, or what we should do with the war on drugs? Welcome to Inner Cosmos with me, David Eagleman. I'm a neuroscientist and an author at Stanford, and in these episodes we dive deeply into our three-pound universe to uncover some of the most surprising aspects of our lives. Today's episode is part two about decision making, so let's quickly summarize last week's episode to get ourselves back up to speed. In that episode, we saw how the brain is a sophisticated decision-making machine. It's constantly engaged in choosing among options. Now, this can be something trivial, like deciding between a taco and a burrito, all the way to life-altering decisions like whether to take that job or whether to propose marriage. Now, early economists assumed that humans make decisions rationally, weighing pros and cons to arrive at the optimal choice.
Modern-day psychology and neuroscience, though, reveal a pretty different story. What we saw last week, and something I've talked about before here a lot, is that the brain is composed of multiple networks, each with its own goals and desires, and these often compete with one another to influence the decisions that we make. This internal conflict is at the heart of how we navigate the world. And by the way, these massive battles under the hood largely occur below our conscious awareness. So, for example, when you look at a picture that can be perceived as either a duck or a rabbit, your brain chooses one interpretation over the other, and we can measure what's happening in the activity of neurons to see how one network wins over the other one. In the case of a picture like that, your interpretation goes back and forth and back and forth.
Now, that might not feel like a decision to you, but it's the same neural process of competing networks fighting for dominance, and the same neural battle is mirrored in everyday choices, like when you choose between different flavors of a sports drink, or whether to take the right trail or the left trail. All your options are represented by different coalitions of neurons in the brain, and these coalitions compete for dominance, and the winner determines the action we take. And we saw simple tasks like the Stroop task, where you have conflicting information, like the word "red" printed in blue ink, and you're supposed to name the color of the ink. This highlights the tension between different networks in the brain. Similarly, we are always facing moral dilemmas, like the trolley problem I mentioned last week, and this demonstrates how emotional and rational networks can come into conflict, leading to different outcomes depending on which network prevails. And one of the things we saw from that was that emotions play a really crucial role in decision making.
The importance of emotional input into your decision making is seen really clearly when people get a brain injury, like damage to the orbitofrontal cortex, because then they can't integrate the emotional signals from the body into a decision, and that leaves a person paralyzed with indecision even in very simple situations. So without the ability to read these bodily signals, which were providing summaries of the different options, people struggle to make choices. So the brain's decision-making process is not a straightforward calculation of pros and cons but a dynamic interplay of competing networks and emotional signals. Every decision we make, from what to wear in the morning, or which coffee shop to choose, or how we respond in a crisis, the choices are all the results of this ongoing conflict within our brains. Now, we left off last week with seeing how each decision involves our past experiences, stored in the states of our body, and the present situation, like: do I have enough money to buy this instead of that? Or is this other option available?
But there's one more part to the story of decisions, and that is predictions about the future. So that's what we're going to talk about today. Now, across the animal kingdom, every creature is wired to seek reward. Now, surely this is something you know, but have you ever thought about the question: what is a reward? At its essence, a reward is something that will move the body closer to its ideal set points. So water is a reward when your body is getting dehydrated. Food is a reward when your energy stores are running down. Now, water and food are called primary rewards because they directly address biological needs. But what's super interesting is that human behavior is steered by secondary rewards, which are things that predict primary rewards. For example, the sight of a metal cylinder wouldn't by itself do much for your brain, but because you've learned to recognize it as a can of sparkling water, the sight of it comes to be rewarding when you are thirsty. And in the case of humans, this goes much further than anywhere else in the animal kingdom.
We can find even very abstract concepts rewarding, such as a political ideology or the feeling that we are valued by our local community, and unlike other animals, we can often put these rewards ahead of biological needs. As my colleague Read Montague points out, sharks don't go on hunger strikes. In other words, the rest of the animal kingdom only chases its basic needs, while only humans regularly override our basic needs in deference to abstract ideals, concepts that we have decided are rewarding to us. So whenever we're faced with different possibilities, we integrate internal and external data to try to maximize reward, however reward is defined for you as an individual. Now here's the point I want to make. The challenge with any reward, whether basic or abstract, is that choices typically don't yield their fruits right away. We almost always have to make decisions in which a chosen course of action will yield reward at a later time. People go to school for years because they value the future concept of having a degree, or they slave through decades of work that they don't like with the future hope of a promotion.
Or you push yourself through painful exercise with the goal of being fit in the future. Now, to understand this, let's think about the concept of anticipated reward. To compare different options means assigning a value to each one in some common currency, that of anticipated reward, and then choosing the one with the highest value. So let's make this concrete. Let's say you have a small window of free time and you are trying to decide what to do. You know that you need groceries, but you also know that you really need to sit down and get that big email sent off, and you're also way behind on arranging to meet with your friend at the coffee shop. Well, these are all very different things: groceries, email, friend. So how does your brain arbitrate this menu of options? It would be easy if you could directly compare these experiences by living out each one, then rewinding time, and finally choosing your path based on which outcome was best. But the bummer is that you can't travel in time. Or can't you?
In fact, humans time travel daily. Time travel is what the human brain does relentlessly. When faced with a decision, your brain simulates different outcomes to generate a mockup of what your future might be. We call this time traveling: mentally, you can disconnect from the present moment and voyage to a world that doesn't yet exist. Now, simulating a scenario in your mind, that's just the first step. To decide between the imagined scenarios, you try to estimate what the reward will be in each of those potential futures. So when you simulate filling your pantry with the groceries, you feel a sense of relief at being organized and avoiding uncertainty. Finishing that email carries different sorts of rewards, not only the primary reward of maybe securing some money as a result, but more generally the kudos from your boss and a rewarding sense of accomplishment in your career. And you can also imagine yourself at the coffee shop with your friend, and that inspires joy and a sense of reward in terms of relationships and closeness with other people.
So your final decision between these three options will be navigated by how each future stacks up against the others in the common currency of your reward system. Now, this choice isn't easy, because all these valuations are nuanced. The simulation of the grocery shopping is accompanied by feelings of tedium, your simulation of writing the long email is attended by a sense of frustration, and your simulation of your friend at the coffee shop is accompanied by guilt about not getting work done. But in all these cases, under the radar of awareness, your brain simulates all these options one at a time and does a gut check on each, and that's how you decide. But here's a question: how do you simulate these futures accurately? How can you possibly predict what it will really be like to go down these paths? The answer is that you can't. There's no way to know that your predictions will be perfectly accurate. Your simulations are only based on your past experiences and your current model of how the world works.
The key business of brains is to predict, and to do this well, we need to continually learn about the world from our every experience so that we can make the best guesses that we can. So in this case, you place a value on each of these options, groceries or email or coffee shop, based on your past experiences, and using the Hollywood studios in your mind, you travel in time to your imagined futures to see how much value they'll have, and that's how you make your choices. You compare possible futures against one another. That's how you convert competing options into a common currency of future reward. So think about your predicted reward value for each of these options like an internal price tag that stores how good something will be. Because grocery shopping is going to supply you with food, say it's worth ten reward units; writing that stupid email is difficult but necessary to your career, so it weighs in at twenty-five reward units; and you love spending time with your friend, so going to the coffee shop is worth fifty reward units. But there's an interesting twist here.
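The price-tag comparison described here can be sketched in a few lines of code. This is only an illustrative toy, not a claim about how neurons compute: the option names and reward-unit numbers come from the episode, while the dictionary and the function are my own assumptions.

```python
# Toy model of the "common currency" comparison: each option carries an
# internal price tag measured in arbitrary reward units.
price_tags = {
    "groceries": 10,    # useful, but tedious
    "email": 25,        # difficult, but good for the career
    "coffee shop": 50,  # time with a friend
}

def choose(options):
    """Pick the option whose predicted reward value is highest."""
    return max(options, key=options.get)

print(choose(price_tags))  # -> coffee shop
```

The point of the common currency is exactly this: once everything is expressed in the same units, picking among wildly different options reduces to taking a maximum.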
The world is complicated, and so your internal price tags are never written in permanent ink. Your valuation of everything around you is changeable, because quite often our predictions don't match what actually happens. The key to effective learning lies in tracking this prediction error. The prediction error is the difference between what you thought was going to happen and what actually happened. In the case of this decision about groceries or email or coffee shop, your brain has a prediction about how rewarding the coffee shop is going to be. Now, let's say it turns out that they have some new snack there, and also you run into other friends there, and the whole experience is even better than you thought. That raises the price tag the next time you're thinking about the coffee shop. On the other hand, if the coffee is cold and your friend is late and distracted, that might lower your price tag for the next time around. Now, how does this work in the brain? How do you have these dynamic internal price tags?
There's a tiny, ancient system in the brain whose mission is to keep updating your assessments of the world. The system is made of tiny groups of cells in your midbrain that speak in the language of a neurotransmitter called dopamine. When there's a mismatch between your expectation and your reality, this midbrain dopamine system broadcasts a signal that reevaluates the price point. The signal tells the rest of the system whether things turned out to be better than expected, in which case you get an increased burst of dopamine, or worse than expected, in which case you get a decrease in dopamine. And that prediction error signal, meaning there was an error in the prediction that I had made, allows the rest of the brain to adjust its expectations to try to be closer to reality next time. So dopamine acts as an error corrector. It's a chemical appraiser that always works to keep your price tags as updated as they can be. That way, you can rank your decisions based on your optimized guesses about the future.
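The error-correcting update described here is essentially the delta rule used in reinforcement-learning models of dopamine signaling: nudge the stored prediction toward reality by a fraction of the prediction error. A minimal sketch, where the learning rate and the specific reward numbers are my own illustrative assumptions, not figures from the episode:

```python
def update_price_tag(predicted, actual, learning_rate=0.2):
    """Move a stored prediction toward what actually happened,
    by a fraction (learning_rate) of the prediction error."""
    prediction_error = actual - predicted  # positive = better than expected
    return predicted + learning_rate * prediction_error

# The coffee shop was tagged at 50 units, but the visit (new snack,
# extra friends) turned out to be worth 70: the tag creeps upward.
tag = update_price_tag(50.0, 70.0)
print(tag)  # -> 54.0

# A disappointing visit (cold coffee, distracted friend) lowers it.
print(update_price_tag(50.0, 30.0))  # -> 46.0
```

Notice that when the prediction matches reality, the error is zero and the price tag doesn't move, which matches the idea that the brain mainly attends to unexpected outcomes.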
Fundamentally, what the brain pays attention to are the unexpected outcomes, something being better than expected or worse than expected, and this sensitivity is at the heart of animals' abilities to adapt and learn. It's no surprise, then, that the brain architecture involved in learning from experience is consistent across species. You see this from honeybees to humans. In other words, brains discovered the basic principles of getting feedback from reward a long time ago. So we've seen how values get attached to different options. But there's a twist that often gets in the way of good decision making, which is that options that are right in front of you tend to be valued higher than those that you merely simulate. The thing that trips up good decision making about the future is the present. I'll give you a really interesting example of this. In two thousand and eight, the US economy took a sharp downturn. Many of you may remember that. At the heart of all the trouble was a simple fact, which is that many homeowners had overborrowed.
They had taken out these mortgage loans that had really low interest rates for a period of a few years, and the problem occurred at the end of that period, when the interest rate suddenly shot up to something quite high. A lot of homeowners suddenly found themselves facing down the barrel of these higher interest rates, and they realized they weren't going to be able to make the payments. So close to a million American homes went into foreclosure, and that sent shockwaves through the economy of the planet. Now, what in the world does this have to do with competing networks in the brain? Here's what: these loans, called subprime loans, allowed people to obtain a nice house right now, with the high interest rates deferred until later. As such, the offer perfectly appealed to the neural networks that care about instant gratification, in other words, those networks that want things right now. The idea was: here, take these keys, this house is yours. The mortgage payments are pretty easy to make. Oh yeah, the rates will go up, but that's a long way away.
That's obscured in the mists of the future. You don't have to worry about that right now. That's a long way off. And because of the seduction of the immediate satisfaction, as in, wow, I can move into this house right now, I can impress my parents and my friends and whatever, because the now pulls so strongly on our decision making, the two thousand and eight housing bubble can be understood not simply as an economic phenomenon but as a neural one. And by the way, the pull of the now wasn't just about the people borrowing, of course, but also about the lenders, who were getting rich right now by offering loans that they suspected might never get paid back. They rebundled these loans and sold them off. Practices like this are unethical and illegal, but they proved too enticing to many decision makers when they were faced with that temptation. Now, this now-versus-the-future battle doesn't just apply to housing bubbles. It cuts across every aspect of our lives.
It's why car dealers want you to get in and test drive the cars, because the seduction of the now is probably going to override your thinking about the distant future when you have to pay that monthly loan. It's why clothing stores want you to try on the clothes, because once you see yourself in front of the mirror there, you'll think, wow, I can't pass that up. It's why merchants want you to touch the merchandise. It's because your mental simulations of the future can't compete well against the experience of something right here, right now, in your hands. In other words, to the brain, the future can only ever be a pale shadow of the now. The power of now explains why people make decisions that feel good in the moment but have lousy consequences for their future. So people take a drink or a drug hit even though they know they shouldn't, or married partners give in to an available affair, or athletes take anabolic steroids even though they know it's going to shave years off their life. So can we do anything about the seduction of the now?
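Behavioral economists model this pull of the now as temporal discounting: a reward loses subjective value the further in the future it sits, and in humans the drop-off is steep (often modeled as hyperbolic). A sketch under assumed parameters; the formula is the standard hyperbolic-discounting form, but the impatience constant and the numbers are my own illustration:

```python
def discounted_value(reward, delay, k=1.0):
    """Hyperbolic discounting: value = reward / (1 + k * delay).
    k is an assumed impatience parameter; delay is in arbitrary time units."""
    return reward / (1 + k * delay)

# A large reward loses most of its pull once it sits in the future:
print(discounted_value(100, delay=0))  # the reward, if it arrived right now
print(discounted_value(100, delay=5))  # the same reward five time units away
```

With these assumed numbers, a reward of 100 units five time units away is worth under 17 units today, so a small treat of 20 units available right now wins the comparison. That is the car-dealer and test-drive logic in miniature.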
Well, with a little bit of knowledge about the brain, we can. So consider this. We all know that it's really difficult to do certain things, like go regularly to the gym. Most people want to be in good shape, but when it comes down to it, there are usually things right in front of us that seem more enjoyable. The pull of what we're doing is stronger than the abstract notion of future fitness. So here's the solution to make sure you get there. You can take inspiration from a man who lived three thousand years ago. Now, I talked about this in a recent episode, so I'll give the short version here, but it's massively important. The man we're going to talk about was in a more extreme version of the gym scenario. He had something he wanted to do, but knew that he wouldn't be able to resist temptation when the time came for him. It wasn't about getting a better physique. It was about saving his life from a group of mesmerizing maidens. This was the legendary hero Ulysses, on his way back from triumph in the Trojan War.
At some point on his long journey home, he realized that his ship was going to be passing an island where the beautiful sirens lived, and the sirens were famous for singing songs so melodious that sailors were enchanted. The problem was that the sailors found this irresistible: they would crash their ships into the rocks trying to get to the sirens, and they would die. Now, Ulysses desperately wanted to hear these legendary songs, but he didn't want to kill himself and his crew, so he hatched a plan. He knew that when he heard the music, he would be unable to resist steering towards the island's rocks, and so he recognized that the problem wasn't his present rational self, but instead the future illogical Ulysses, the person that he would become when the sirens came within earshot. So Ulysses ordered his men to lash him securely to the mast of his ship, and they filled their ears with beeswax so as not to hear the sirens, and they rowed under strict orders to ignore any of his pleas or cries or writhing around. So what was going on here?
331 00:21:44,240 --> 00:21:47,320 Speaker 1: Ulysses knew that his future self would be in no 332 00:21:47,480 --> 00:21:52,000 Speaker 1: position to make good decisions, so the Ulysses of sound 333 00:21:52,160 --> 00:21:56,159 Speaker 1: mind arranged things so that the future Ulysses could do 334 00:21:56,320 --> 00:22:00,840 Speaker 1: no wrong. This sort of deal between yourself and 335 00:22:00,880 --> 00:22:04,479 Speaker 1: your future self is known as the Ulysses contract. In 336 00:22:04,520 --> 00:22:07,760 Speaker 1: the case of getting to the gym, my simple Ulysses 337 00:22:07,840 --> 00:22:11,240 Speaker 1: contract is to arrange in advance for a friend to 338 00:22:11,320 --> 00:22:13,520 Speaker 1: meet me at the gym. So then I feel the 339 00:22:13,520 --> 00:22:17,439 Speaker 1: pressure to uphold the social contract, and that lashes me 340 00:22:17,560 --> 00:22:21,080 Speaker 1: to the mast, even though when the morning rolls around, 341 00:22:21,119 --> 00:22:22,919 Speaker 1: all I really want to do is just sleep in 342 00:22:23,000 --> 00:22:26,159 Speaker 1: or do something else. I've pre-arranged a contract and 343 00:22:26,200 --> 00:22:29,359 Speaker 1: now I have to show up. When you start looking 344 00:22:29,400 --> 00:22:32,280 Speaker 1: around for these Ulysses contracts, you'll see that they're actually 345 00:22:32,320 --> 00:22:35,000 Speaker 1: all around you. Just as an example, I was recently 346 00:22:35,000 --> 00:22:37,920 Speaker 1: giving a talk on a college campus and learned that 347 00:22:38,240 --> 00:22:42,440 Speaker 1: during the week of final exams, some students will swap 348 00:22:42,520 --> 00:22:46,280 Speaker 1: their TikTok passwords with each other, and then each student 349 00:22:46,400 --> 00:22:48,679 Speaker 1: changes the other's password so that they can't log on, 350 00:22:49,240 --> 00:22:52,920 Speaker 1: and then when finals are over, they switch the passwords back.
351 00:22:53,359 --> 00:22:55,679 Speaker 1: So this way, during the week that they're supposed to 352 00:22:55,680 --> 00:22:58,119 Speaker 1: be studying, even though they know that they're going to 353 00:22:58,160 --> 00:23:01,320 Speaker 1: be tempted by the siren's song, they can't do anything 354 00:23:01,359 --> 00:23:05,880 Speaker 1: about it. Similarly, the first step for alcoholics in rehabilitation 355 00:23:06,000 --> 00:23:08,720 Speaker 1: programs is to clear all the alcohol out of their 356 00:23:08,720 --> 00:23:12,040 Speaker 1: house so that the temptation is not in front of 357 00:23:12,080 --> 00:23:16,320 Speaker 1: them when they're feeling weak. People with weight problems sometimes 358 00:23:16,359 --> 00:23:19,760 Speaker 1: get surgery to reduce their stomach volume so they physically 359 00:23:19,960 --> 00:23:23,400 Speaker 1: can't overeat. These are all things that you do right 360 00:23:23,440 --> 00:23:27,480 Speaker 1: now so that your future self won't do something that 361 00:23:27,560 --> 00:23:30,359 Speaker 1: you don't want to. It's a way to structure things 362 00:23:30,400 --> 00:23:33,399 Speaker 1: in the present so that your future self cannot misbehave. 363 00:23:33,680 --> 00:23:37,360 Speaker 1: And that's the trick for getting around the seduction of 364 00:23:37,440 --> 00:23:40,960 Speaker 1: the now: you strap yourself to the mast, and that 365 00:23:41,040 --> 00:23:44,520 Speaker 1: allows you to behave in better alignment with the kind 366 00:23:44,520 --> 00:23:47,879 Speaker 1: of person you would like to be. The key to 367 00:23:47,960 --> 00:23:50,760 Speaker 1: a Ulysses contract is just recognizing that we are different 368 00:23:50,800 --> 00:23:54,800 Speaker 1: people in different contexts. The ancient Greeks had a saying 369 00:23:54,840 --> 00:23:58,080 Speaker 1: they liked, which was know thyself.
But I've always felt 370 00:23:58,080 --> 00:24:00,560 Speaker 1: from a neuroscience point of view that the updated 371 00:24:00,640 --> 00:24:06,480 Speaker 1: version is know thyselves. Now, what's strange is that knowing 372 00:24:06,520 --> 00:24:09,919 Speaker 1: yourselves is not easy because how you act at different 373 00:24:10,000 --> 00:24:13,159 Speaker 1: times is not always something you can predict. Even though 374 00:24:13,200 --> 00:24:17,240 Speaker 1: you can make good guesses, things can drift around. So 375 00:24:17,359 --> 00:24:20,359 Speaker 1: sometimes you're really jazzed about going to the gym, and 376 00:24:20,440 --> 00:24:24,159 Speaker 1: sometimes not so much. Sometimes you'll stick with the diet 377 00:24:24,240 --> 00:24:27,760 Speaker 1: and sometimes you won't. Sometimes you're more capable of good 378 00:24:27,760 --> 00:24:31,680 Speaker 1: decision making, and other times your neural parliament won't come 379 00:24:31,680 --> 00:24:35,800 Speaker 1: out with the vote that you expected. Why? It's because 380 00:24:35,960 --> 00:24:39,879 Speaker 1: the outcome depends on a lot of changing factors about 381 00:24:39,920 --> 00:24:43,080 Speaker 1: the state of your body, states which can 382 00:24:43,520 --> 00:24:46,000 Speaker 1: change hour to hour. So here's a good way for 383 00:24:46,080 --> 00:24:48,119 Speaker 1: us to appreciate this. There was a study done some 384 00:24:48,240 --> 00:24:52,120 Speaker 1: years ago looking at prisoners who were going in front 385 00:24:52,119 --> 00:24:54,960 Speaker 1: of a parole judge to see if the prisoner could 386 00:24:54,960 --> 00:24:59,720 Speaker 1: get his sentence shortened to go home early. So imagine 387 00:25:00,080 --> 00:25:03,119 Speaker 1: two prisoners both scheduled to appear in front of this 388 00:25:03,200 --> 00:25:06,520 Speaker 1: parole judge.
One prisoner goes before the judge at eleven 389 00:25:06,600 --> 00:25:10,040 Speaker 1: twenty seven am. His crime is fraud and he's serving 390 00:25:10,080 --> 00:25:14,720 Speaker 1: thirty months. Another prisoner appears at one fifteen pm. He 391 00:25:14,760 --> 00:25:16,920 Speaker 1: has committed the same crime for which he's been given 392 00:25:17,000 --> 00:25:20,760 Speaker 1: the same sentence. Now, the first prisoner is denied parole, 393 00:25:21,240 --> 00:25:26,520 Speaker 1: the second is granted parole. Why? What influenced the decision? 394 00:25:26,640 --> 00:25:29,679 Speaker 1: Was it race? Was it looks? Was it age? So 395 00:25:29,800 --> 00:25:33,320 Speaker 1: a study by Jonathan Levav and colleagues analyzed one thousand 396 00:25:33,480 --> 00:25:37,600 Speaker 1: rulings from judges and found it wasn't about that. Whether 397 00:25:38,000 --> 00:25:42,000 Speaker 1: the prisoner is granted parole or not isn't just about 398 00:25:42,040 --> 00:25:46,920 Speaker 1: the prisoner. It's about the judge's biology. Specifically, it's about 399 00:25:46,960 --> 00:25:51,680 Speaker 1: the judge's hunger. So just after the judge had enjoyed 400 00:25:51,760 --> 00:25:56,159 Speaker 1: a food break, a prisoner's chance of parole rose to 401 00:25:56,200 --> 00:25:59,600 Speaker 1: its highest point of sixty five percent. But a prisoner 402 00:25:59,680 --> 00:26:03,560 Speaker 1: seen towards the end of the session had the lowest 403 00:26:03,680 --> 00:26:07,639 Speaker 1: chances, just a twenty percent likelihood of a favorable outcome. 404 00:26:08,160 --> 00:26:13,560 Speaker 1: In other words, decisions get reprioritized as other needs rise 405 00:26:13,600 --> 00:26:19,040 Speaker 1: in importance. Your choices change as circumstances change.
A prisoner's 406 00:26:19,080 --> 00:26:23,560 Speaker 1: fate is irrevocably intertwined with the judge's neural networks, which 407 00:26:23,640 --> 00:26:29,280 Speaker 1: operate according to biological needs. Some psychologists describe this effect 408 00:26:29,359 --> 00:26:34,560 Speaker 1: as ego depletion, meaning that higher level cognitive areas involved 409 00:26:34,560 --> 00:26:39,280 Speaker 1: in executive function and planning get fatigued. So in 410 00:26:39,320 --> 00:26:44,439 Speaker 1: this view, willpower is a limited resource. We can run 411 00:26:44,880 --> 00:26:47,520 Speaker 1: low on it, just like a tank of fuel. In 412 00:26:47,560 --> 00:26:50,480 Speaker 1: the case of the judges, the more cases they had 413 00:26:50,480 --> 00:26:53,520 Speaker 1: to make decisions about, which was up to thirty five 414 00:26:53,560 --> 00:26:57,399 Speaker 1: in one sitting, the more energy depleted their brains became. 415 00:26:57,560 --> 00:26:59,480 Speaker 1: But after they ate something like a sandwich and a 416 00:26:59,520 --> 00:27:03,959 Speaker 1: piece of fruit, their energy stores were refueled, and different 417 00:27:04,040 --> 00:27:09,240 Speaker 1: drives now had more power in steering their decisions. Traditionally, 418 00:27:09,680 --> 00:27:14,760 Speaker 1: we assume that humans are rational decision makers. We absorb information, 419 00:27:15,000 --> 00:27:17,760 Speaker 1: we process it, we come up with an optimal answer 420 00:27:17,840 --> 00:27:21,399 Speaker 1: or solution. But real humans don't work this way. Even 421 00:27:21,840 --> 00:27:28,800 Speaker 1: judges striving for freedom from bias are imprisoned by their biology. 422 00:27:28,880 --> 00:27:32,160 Speaker 1: Our decisions are equally influenced when it comes to how 423 00:27:32,200 --> 00:27:35,959 Speaker 1: we act with our romantic partners.
So consider the choice 424 00:27:36,000 --> 00:27:40,239 Speaker 1: of monogamy, bonding and staying with a single partner. So 425 00:27:40,280 --> 00:27:42,920 Speaker 1: this would seem like a decision that involves your culture 426 00:27:42,960 --> 00:27:45,320 Speaker 1: and your values and your morals, and all that's true. 427 00:27:45,320 --> 00:27:48,320 Speaker 1: But there's a deeper force acting on your decision making 428 00:27:48,359 --> 00:27:52,200 Speaker 1: as well, which is your hormones, and one in particular 429 00:27:52,240 --> 00:27:55,840 Speaker 1: called oxytocin, which is a key ingredient in the magic 430 00:27:55,960 --> 00:28:00,159 Speaker 1: of bonding. So in one recent study, men who were 431 00:28:00,160 --> 00:28:02,800 Speaker 1: in love with their female partners were given a small 432 00:28:02,920 --> 00:28:06,480 Speaker 1: dose of extra oxytocin. Then they were asked to rate 433 00:28:06,560 --> 00:28:11,240 Speaker 1: the attractiveness of different women. So with the extra oxytocin, 434 00:28:11,800 --> 00:28:15,240 Speaker 1: the men found their partners more attractive, but not the 435 00:28:15,320 --> 00:28:18,280 Speaker 1: other women. In fact, the men kept a bit more 436 00:28:18,359 --> 00:28:22,640 Speaker 1: physical distance from an attractive female research associate in the study. 437 00:28:23,160 --> 00:28:27,960 Speaker 1: So oxytocin only increased bonding to their partner. Why do 438 00:28:28,040 --> 00:28:33,040 Speaker 1: we have chemicals like oxytocin steering our decision making towards bonding? 439 00:28:33,480 --> 00:28:36,439 Speaker 1: After all, from an evolutionary perspective, we might expect that 440 00:28:36,480 --> 00:28:40,520 Speaker 1: a male shouldn't want monogamy if his biological mandate is 441 00:28:40,560 --> 00:28:44,280 Speaker 1: to spread genes as widely as possible.
But for the 442 00:28:44,320 --> 00:28:48,320 Speaker 1: survival of the children, having two parents around is better 443 00:28:48,360 --> 00:28:51,160 Speaker 1: than one. And this simple fact is so important that 444 00:28:51,200 --> 00:28:54,840 Speaker 1: the brain has hidden ways to influence your decision 445 00:28:54,840 --> 00:28:58,440 Speaker 1: making on it. Our decisions are often steered by invisibly 446 00:28:58,560 --> 00:29:19,440 Speaker 1: small molecules that we don't even necessarily know about. Now 447 00:29:19,600 --> 00:29:22,520 Speaker 1: we've been talking all about decision making and how it 448 00:29:22,560 --> 00:29:25,920 Speaker 1: arises from battling networks in the brain. Why does it 449 00:29:25,960 --> 00:29:29,320 Speaker 1: matter to understand this stuff? Well, a better understanding of 450 00:29:29,320 --> 00:29:32,960 Speaker 1: decision making allows us to better steer our own lives, 451 00:29:33,080 --> 00:29:37,160 Speaker 1: and also it opens the door to craft better social policy. 452 00:29:37,880 --> 00:29:40,880 Speaker 1: For example, all of us, in our own ways, 453 00:29:40,920 --> 00:29:45,040 Speaker 1: always have to battle with impulse control. At the extreme, we 454 00:29:45,120 --> 00:29:48,120 Speaker 1: can end up as slaves to the immediate cravings of 455 00:29:48,160 --> 00:29:51,360 Speaker 1: our impulses. And I think from this perspective we can 456 00:29:51,400 --> 00:29:55,680 Speaker 1: gain a more nuanced understanding of the efficacy of social 457 00:29:55,760 --> 00:29:59,160 Speaker 1: programs like the War on drugs. So take drug addiction. 458 00:29:59,280 --> 00:30:03,200 Speaker 1: It's a big problem for societies.
About seven out 459 00:30:03,200 --> 00:30:07,160 Speaker 1: of ten prisoners meet the criteria for substance abuse or 460 00:30:07,200 --> 00:30:10,840 Speaker 1: dependence, and one study found that over a third of 461 00:30:10,880 --> 00:30:13,640 Speaker 1: convicted inmates were under the influence of drugs at the 462 00:30:13,680 --> 00:30:18,080 Speaker 1: time of their crime. So drug abuse translates into tens 463 00:30:18,120 --> 00:30:22,280 Speaker 1: of billions of dollars, mostly in terms of drug related crime, 464 00:30:22,560 --> 00:30:26,640 Speaker 1: and what this has led to is a burgeoning prison population. Now, 465 00:30:26,640 --> 00:30:29,000 Speaker 1: what most countries do is they deal with the problem 466 00:30:29,080 --> 00:30:33,560 Speaker 1: of drug addiction by criminalizing it. The problem is that 467 00:30:33,880 --> 00:30:36,920 Speaker 1: the number of Americans in prison for drug crimes has 468 00:30:36,920 --> 00:30:40,160 Speaker 1: gone up eightfold since the War on Drugs was declared 469 00:30:40,160 --> 00:30:43,200 Speaker 1: in nineteen seventy one, and every year the US spends 470 00:30:43,280 --> 00:30:47,440 Speaker 1: twenty billion dollars on the War on drugs. But the 471 00:30:47,480 --> 00:30:51,880 Speaker 1: investment hasn't actually worked because since the War on drugs began, 472 00:30:52,600 --> 00:30:57,440 Speaker 1: drug use has actually expanded. So why hasn't it worked? 473 00:30:57,480 --> 00:31:00,120 Speaker 1: Well, as any economist will tell you, the difficulty with the 474 00:31:00,160 --> 00:31:02,840 Speaker 1: drug supply is that it's like a water balloon, so 475 00:31:02,880 --> 00:31:05,320 Speaker 1: if you push it down in one place, it comes 476 00:31:05,400 --> 00:31:10,479 Speaker 1: up somewhere else.
So instead of attacking supply, the better 477 00:31:10,600 --> 00:31:16,040 Speaker 1: strategy is to address demand, and drug demand is in 478 00:31:16,080 --> 00:31:19,160 Speaker 1: the brain of the addict. And by the way, if 479 00:31:19,160 --> 00:31:21,840 Speaker 1: you look at who's incarcerated, the people behind bars 480 00:31:21,920 --> 00:31:25,280 Speaker 1: aren't the cartel bosses or the big time dealers. Instead, 481 00:31:25,600 --> 00:31:28,080 Speaker 1: they are people locked up for possession of a small 482 00:31:28,120 --> 00:31:32,080 Speaker 1: amount of drugs, usually less than two grams. They're the users, 483 00:31:32,160 --> 00:31:38,440 Speaker 1: the addicts. Now, unfortunately, going to prison doesn't solve their problems. 484 00:31:38,440 --> 00:31:40,720 Speaker 1: It generally worsens them. The problem is that when you 485 00:31:41,160 --> 00:31:44,240 Speaker 1: lock someone up and break their social circles and their 486 00:31:44,280 --> 00:31:47,280 Speaker 1: employment opportunities, what you're doing is giving them new 487 00:31:47,320 --> 00:31:51,080 Speaker 1: social circles and new employment opportunities, and those usually don't 488 00:31:51,120 --> 00:31:54,400 Speaker 1: help them break their addiction. Some people argue that drug 489 00:31:54,440 --> 00:31:57,560 Speaker 1: addiction is about poverty and peer pressure, and those do 490 00:31:57,640 --> 00:31:59,960 Speaker 1: play a role. But at the core of the issue 491 00:32:00,480 --> 00:32:03,600 Speaker 1: is the biology of the brain. So just look at 492 00:32:03,880 --> 00:32:08,080 Speaker 1: laboratory experiments where you see rats self-administering drugs. What 493 00:32:08,080 --> 00:32:10,760 Speaker 1: they'll do is they'll continually hit this lever over and 494 00:32:10,800 --> 00:32:13,120 Speaker 1: over to get drugs, and they'll do that at the 495 00:32:13,200 --> 00:32:17,120 Speaker 1: expense of food and drink.
The rats aren't doing that 496 00:32:17,200 --> 00:32:21,360 Speaker 1: because of finances or social coercion. They're doing it because 497 00:32:21,400 --> 00:32:25,360 Speaker 1: the drugs tap into fundamental reward circuitry in the brain. 498 00:32:25,840 --> 00:32:29,800 Speaker 1: The drugs effectively tell the brain that this decision is 499 00:32:29,840 --> 00:32:32,200 Speaker 1: better than all the other things it could be doing. 500 00:32:32,520 --> 00:32:35,120 Speaker 1: Other brain networks might be involved in the battle, and 501 00:32:35,120 --> 00:32:38,600 Speaker 1: they're representing all the reasons to resist the drug, but 502 00:32:38,800 --> 00:32:43,400 Speaker 1: in an addict, the craving network wins. In fact, if 503 00:32:43,440 --> 00:32:45,400 Speaker 1: you spend a lot of time talking to people who 504 00:32:45,400 --> 00:32:48,040 Speaker 1: are addicted to drugs, you'll find that the majority of them 505 00:32:48,680 --> 00:32:51,880 Speaker 1: want to quit, and they're perfectly able to list all 506 00:32:51,920 --> 00:32:54,600 Speaker 1: the reasons why they should quit, but they just can't 507 00:32:54,640 --> 00:32:58,760 Speaker 1: do it. Why? It's because they've become slaves to these 508 00:32:58,800 --> 00:33:03,240 Speaker 1: short term gratification circuits. Now, because the problem with 509 00:33:03,360 --> 00:33:07,240 Speaker 1: drug addiction lies in the brain, it's plausible that the 510 00:33:07,280 --> 00:33:11,479 Speaker 1: solutions lie there also. One approach is to tip the 511 00:33:11,600 --> 00:33:16,480 Speaker 1: balance of impulse control.
So, as an example, one successful 512 00:33:16,520 --> 00:33:20,960 Speaker 1: tactic is to ramp up the certainty and swiftness of punishment, 513 00:33:21,320 --> 00:33:25,560 Speaker 1: for example, by requiring drug offenders to undergo twice weekly 514 00:33:25,720 --> 00:33:30,239 Speaker 1: drug testing with automatic immediate jail time for failure, and 515 00:33:30,280 --> 00:33:33,200 Speaker 1: that way they don't have to rely on a distant 516 00:33:33,280 --> 00:33:38,480 Speaker 1: abstraction alone. Or here's something related. Some economists propose that 517 00:33:38,520 --> 00:33:41,560 Speaker 1: the drop in American crime since the early nineteen nineties 518 00:33:42,040 --> 00:33:45,520 Speaker 1: has been due in part to the increased presence of 519 00:33:45,600 --> 00:33:48,960 Speaker 1: police on the streets. So in the language of the brain, 520 00:33:49,400 --> 00:33:53,800 Speaker 1: the police visibility stimulates the networks of the brain that 521 00:33:53,960 --> 00:33:58,560 Speaker 1: weigh long term consequences. Now those are social approaches, but 522 00:33:58,640 --> 00:34:01,160 Speaker 1: because this is a neuroscience podcast, I'm going to tell 523 00:34:01,200 --> 00:34:04,560 Speaker 1: you about a brain based approach which is currently being 524 00:34:04,640 --> 00:34:08,480 Speaker 1: studied, and this could end up being quite effective. This 525 00:34:08,560 --> 00:34:13,480 Speaker 1: involves giving real time feedback during brain imaging. So you're 526 00:34:13,560 --> 00:34:17,040 Speaker 1: allowing a person who's addicted to drugs to view their 527 00:34:17,080 --> 00:34:21,319 Speaker 1: brain activity and learn how to regulate it. So let 528 00:34:21,320 --> 00:34:24,319 Speaker 1: me tell you how this works. Imagine there's someone, call 529 00:34:24,400 --> 00:34:28,480 Speaker 1: her Susan, who is addicted to heroin. So you put 530 00:34:28,520 --> 00:34:32,240 Speaker 1: Susan into the brain scanner.
This is using functional magnetic 531 00:34:32,320 --> 00:34:36,840 Speaker 1: resonance imaging, also known as fMRI, and you show Susan 532 00:34:37,280 --> 00:34:39,879 Speaker 1: pictures of heroin and you say to her, I want 533 00:34:39,920 --> 00:34:42,640 Speaker 1: you to go ahead and crave this. Well, that's easy 534 00:34:42,680 --> 00:34:45,239 Speaker 1: for her to do, and it activates particular regions of 535 00:34:45,280 --> 00:34:49,160 Speaker 1: her brain that we can summarize as the craving network. 536 00:34:50,239 --> 00:34:53,520 Speaker 1: Then we ask her to suppress her craving. We ask 537 00:34:53,520 --> 00:34:56,520 Speaker 1: her to think about the cost that heroin has had 538 00:34:56,640 --> 00:35:00,560 Speaker 1: for her in terms of finances, in terms of relationships, 539 00:35:00,800 --> 00:35:05,320 Speaker 1: in terms of employment. So that activates a different set 540 00:35:05,360 --> 00:35:09,759 Speaker 1: of brain areas, which we can summarize as the suppression network. 541 00:35:10,239 --> 00:35:13,280 Speaker 1: Now here's the key. The craving and the suppression networks 542 00:35:13,320 --> 00:35:17,160 Speaker 1: are always battling it out for supremacy, and whichever wins 543 00:35:17,320 --> 00:35:22,160 Speaker 1: at any moment determines what Susan does when she's offered heroin. 544 00:35:22,960 --> 00:35:26,880 Speaker 1: So in the scanner we can measure which network is winning: 545 00:35:26,960 --> 00:35:29,960 Speaker 1: the short term thinking of the craving network or the 546 00:35:30,160 --> 00:35:34,719 Speaker 1: long term thinking of the impulse control or suppression network. 547 00:35:35,239 --> 00:35:38,839 Speaker 1: So we give Susan real time visual feedback in the 548 00:35:38,880 --> 00:35:42,160 Speaker 1: form of a speedometer, so she can see how the 549 00:35:42,200 --> 00:35:46,560 Speaker 1: battle is going.
When her craving is winning, the needle 550 00:35:46,640 --> 00:35:50,000 Speaker 1: is up in the red zone, and as she successfully 551 00:35:50,200 --> 00:35:54,240 Speaker 1: suppresses the craving, the needle moves to the blue zone. 552 00:35:54,719 --> 00:35:58,280 Speaker 1: So she can then use different mental approaches to discover 553 00:35:58,520 --> 00:36:02,680 Speaker 1: what works to tip the balance of these networks. In 554 00:36:02,719 --> 00:36:06,759 Speaker 1: other words, we're giving her visual feedback about how the 555 00:36:06,840 --> 00:36:11,279 Speaker 1: battle is going on the inside, and by practicing over 556 00:36:11,320 --> 00:36:15,800 Speaker 1: and over, Susan gets better at understanding what she needs 557 00:36:15,840 --> 00:36:18,600 Speaker 1: to do to move that needle. She may or may 558 00:36:18,600 --> 00:36:21,560 Speaker 1: not be consciously aware of how she's doing it, but 559 00:36:21,600 --> 00:36:26,799 Speaker 1: by repeated practice, she can strengthen the neural circuitry that 560 00:36:26,880 --> 00:36:30,680 Speaker 1: allows her to suppress. So this technique is still in 561 00:36:30,760 --> 00:36:33,880 Speaker 1: its infancy, but the hope is that when she's next 562 00:36:34,040 --> 00:36:38,920 Speaker 1: offered heroin, she'll have the cognitive tools to overcome her 563 00:36:38,920 --> 00:36:42,879 Speaker 1: immediate cravings if she wants to. Now, the interesting part 564 00:36:42,920 --> 00:36:45,839 Speaker 1: about an approach like this, I think, is that this 565 00:36:45,920 --> 00:36:49,120 Speaker 1: training doesn't force Susan to behave in any particular way. 566 00:36:49,440 --> 00:36:53,560 Speaker 1: It simply gives her the cognitive skills to have more 567 00:36:53,640 --> 00:36:57,600 Speaker 1: control over her choice rather than to be a slave 568 00:36:57,920 --> 00:37:02,120 Speaker 1: of her impulses.
And at the heart of new approaches like 569 00:37:02,160 --> 00:37:05,640 Speaker 1: this is the understanding that drug addiction is a problem 570 00:37:05,880 --> 00:37:09,960 Speaker 1: for millions of people, but prisons aren't necessarily the place 571 00:37:10,040 --> 00:37:13,839 Speaker 1: to solve that problem. As we develop an understanding of 572 00:37:13,880 --> 00:37:17,480 Speaker 1: how human brains actually make decisions, we can develop new 573 00:37:17,520 --> 00:37:21,600 Speaker 1: approaches beyond just straight up punishment. As we come to 574 00:37:21,680 --> 00:37:25,920 Speaker 1: better understand the operations inside our brains, we can better 575 00:37:26,360 --> 00:37:31,920 Speaker 1: align our behavior with our best intentions, and even more generally, 576 00:37:32,000 --> 00:37:35,200 Speaker 1: as we get better at understanding decision making, we can 577 00:37:35,239 --> 00:37:38,720 Speaker 1: improve other aspects of our criminal justice system beyond addiction, 578 00:37:39,400 --> 00:37:42,640 Speaker 1: putting into place policies which are more humane and more 579 00:37:42,680 --> 00:37:45,319 Speaker 1: cost effective. So what would that look like? It would 580 00:37:45,360 --> 00:37:50,680 Speaker 1: begin with an emphasis on rehabilitation over mass incarceration. So, 581 00:37:50,800 --> 00:37:54,320 Speaker 1: just as one example, young people don't have a fully 582 00:37:54,360 --> 00:37:59,080 Speaker 1: developed prefrontal cortex, and so they often make decisions impulsively 583 00:37:59,200 --> 00:38:04,400 Speaker 1: without a meaningful consideration of future consequences. And so facilities 584 00:38:04,440 --> 00:38:09,239 Speaker 1: with good rehab programs work to train people to improve 585 00:38:09,440 --> 00:38:12,719 Speaker 1: their self control.
You can do this with mentoring and 586 00:38:12,760 --> 00:38:16,000 Speaker 1: counseling and rewards, but all of this is to simply 587 00:38:16,200 --> 00:38:20,880 Speaker 1: train young people to pause and consider the future outcome 588 00:38:21,239 --> 00:38:24,080 Speaker 1: of any choice they might make. You encourage them to 589 00:38:24,360 --> 00:38:27,480 Speaker 1: run simulations of what might happen, and this is how 590 00:38:27,520 --> 00:38:32,000 Speaker 1: you strengthen the neural connections that can override the immediate 591 00:38:32,080 --> 00:38:35,040 Speaker 1: gratification of impulses. I'm going to come back to this 592 00:38:35,120 --> 00:38:37,880 Speaker 1: in a future episode. I'm going to talk about a 593 00:38:38,040 --> 00:38:41,920 Speaker 1: group called the Interrupters in Chicago that work to lower 594 00:38:42,080 --> 00:38:44,279 Speaker 1: crime on the streets with essentially a version of the 595 00:38:44,320 --> 00:38:47,920 Speaker 1: same technique. When some young person gets all hot headed 596 00:38:47,960 --> 00:38:51,520 Speaker 1: and wants to take revenge on someone else, the interrupter 597 00:38:52,120 --> 00:38:55,480 Speaker 1: poses questions that force them to think about the future, 598 00:38:55,560 --> 00:38:58,520 Speaker 1: like, hey, who's going to be with your girl when 599 00:38:58,560 --> 00:39:02,560 Speaker 1: you're sitting in jail? Ah, okay, fine, I won't shoot 600 00:39:02,560 --> 00:39:05,719 Speaker 1: the guy now. Okay, why? Because he was just confronted 601 00:39:05,760 --> 00:39:09,839 Speaker 1: with a future simulation that tipped the battle of the 602 00:39:09,880 --> 00:39:14,319 Speaker 1: decision making. The fact is that poor impulse control is 603 00:39:14,360 --> 00:39:17,960 Speaker 1: a hallmark characteristic of the majority of criminals in the 604 00:39:18,000 --> 00:39:22,080 Speaker 1: prison system.
Most of the people on the wrong side 605 00:39:22,080 --> 00:39:25,160 Speaker 1: of the law generally know the difference between right and 606 00:39:25,160 --> 00:39:28,880 Speaker 1: wrong actions, and they understand the threat of punishment, but 607 00:39:28,960 --> 00:39:33,319 Speaker 1: they're hamstrung by poor impulse control. They see an opportunity 608 00:39:33,360 --> 00:39:36,160 Speaker 1: for something and they take it. They don't pause to 609 00:39:36,239 --> 00:39:42,399 Speaker 1: consider other options. The temptation of the now overrides any 610 00:39:42,480 --> 00:39:47,520 Speaker 1: consideration of the future consequences. Our current style of punishment 611 00:39:47,600 --> 00:39:51,680 Speaker 1: rests on a bedrock of personal volition and blame, because 612 00:39:51,719 --> 00:39:55,120 Speaker 1: the fact is we're all deeply wired with impulses to punish. 613 00:39:55,520 --> 00:39:59,400 Speaker 1: But as we get better at understanding decision making, we 614 00:39:59,480 --> 00:40:03,840 Speaker 1: can start to imagine a different kind of criminal justice system, 615 00:40:04,360 --> 00:40:09,359 Speaker 1: one with a closer relationship to the neuroscience of decisions. 616 00:40:09,760 --> 00:40:12,640 Speaker 1: It wouldn't let anybody off the hook. People who break 617 00:40:12,680 --> 00:40:15,280 Speaker 1: the law still need to get taken off the streets, 618 00:40:15,400 --> 00:40:18,399 Speaker 1: but the system we could evolve toward would be more 619 00:40:18,520 --> 00:40:22,160 Speaker 1: concerned with how to deal with law breakers with an 620 00:40:22,200 --> 00:40:26,000 Speaker 1: eye toward their future, rather than writing them off because 621 00:40:26,000 --> 00:40:29,640 Speaker 1: of their past. Again, people who break the social contracts 622 00:40:29,719 --> 00:40:32,239 Speaker 1: need to be off the streets for the safety of society. 
623 00:40:32,440 --> 00:40:34,640 Speaker 1: But what happens in prison does not have to be 624 00:40:34,680 --> 00:40:40,640 Speaker 1: based only on bloodlust, but also on meaningful rehabilitation, and, 625 00:40:40,719 --> 00:40:45,319 Speaker 1: in the context of today's episode, rehabilitation based on an 626 00:40:45,360 --> 00:40:51,200 Speaker 1: increasingly better understanding of how the brain makes decisions. So 627 00:40:51,320 --> 00:40:53,680 Speaker 1: let's wrap up. What we saw in these last two 628 00:40:53,680 --> 00:40:57,120 Speaker 1: episodes is that decision making lies at the heart of 629 00:40:57,239 --> 00:41:00,760 Speaker 1: everything about who we are and what we do. Without 630 00:41:00,800 --> 00:41:05,000 Speaker 1: the ability to weigh alternatives, we would be hostages to 631 00:41:05,040 --> 00:41:07,920 Speaker 1: our most basic drives. We wouldn't be able to wisely 632 00:41:08,080 --> 00:41:11,520 Speaker 1: navigate the now or plan our future lives. As we 633 00:41:11,640 --> 00:41:15,840 Speaker 1: keep gaining more and more insight into how choices battle 634 00:41:15,880 --> 00:41:18,359 Speaker 1: it out in the brain, we can learn to make 635 00:41:18,440 --> 00:41:25,560 Speaker 1: better choices for ourselves and for our society. Go to 636 00:41:25,600 --> 00:41:29,000 Speaker 1: Eagleman dot com slash podcast for more information and to find 637 00:41:29,080 --> 00:41:32,480 Speaker 1: further reading. Send me an email at podcasts at eagleman 638 00:41:32,520 --> 00:41:35,160 Speaker 1: dot com with questions or discussion, and check out and 639 00:41:35,160 --> 00:41:38,399 Speaker 1: subscribe to Inner Cosmos on YouTube for videos of each 640 00:41:38,440 --> 00:41:42,440 Speaker 1: episode and to leave comments. Until next time, I'm David Eagleman, 641 00:41:42,600 --> 00:41:45,000 Speaker 1: and you have made the nice decision to join me 642 00:41:45,120 --> 00:41:47,239 Speaker 1: here in the Inner Cosmos.