Speaker 1: When you witness an event, when you see it with your own eyes, it certainly feels like what you saw can't be questioned. But how good is your memory, really? Can you misremember details? Is your memory like a video, or isn't it? Is your ability to remember changed by how many things are going on in the scene, or whether there's a gun pointed at you? Can something told to you after the event change your memory of the event?

Welcome to Inner Cosmos with me, David Eagleman. I'm a neuroscientist and an author at Stanford, and in these episodes we sail deeply into our three-pound universe to understand the intersection between brains and our lives.

Today's episode, we're going to be talking about eyewitness testimony. I think it's fair to say that most of us, when we see something happen, we know what we've just seen. This occurred, this is what the person looked like, and so on. And in fact, when jurors get interviewed on this point, they'll often say something like, yes, I understand there are problems with memory and eyewitness testimony, but my memory is like a video recorder.
But the question we're going to ask today is how good is our memory really, and what does that mean for courts of law? So strap in, because we're going to see some amazing events and court cases that may change your opinion on your own memory and about what you take to be true.

One of the courses I teach at Stanford is the Brain and the Law. It's where neuroscience intersects with the legal system. And a few months ago I was in the middle of lecturing to my seventy students when something very wild happened. I was talking to the auditorium and the back door opens, and a middle-aged woman comes in at the back of the classroom, and she walks halfway down the aisle toward me and then just interrupts me, as though I weren't talking at all. She says, are you David Eagleman? And I said yes, but I'm in the middle of teaching a class. Who are you? And she starts shouting how she's emailed me over and over and hasn't gotten any reply.
So I said, I'm sorry, I have an overwhelmed inbox and I can't get to all my emails, but I am teaching a class and I'll have to talk with you later. But she's clearly very agitated, and she doesn't leave, and she keeps shouting at me, and takes a step closer to me down the aisle, and asks if I'm ever planning to answer her, or whether I'm going to continue to ignore her. And I say, ma'am, you're going to have to leave, and I'll talk with you afterwards. And finally, after what seemed like forever, although it was presumably less than a minute, she says, I'm going to wait for you out here, and she goes stomping out the back door.

So I tried to maintain normalcy, and I continued the lecture for about thirty more minutes. And this class is a long one. I give three hours of lecture, so normally at this point I give my students a break at the midpoint, where we go outside and we stretch for a few minutes. But I said to the class, look, it's our break time, but I'm not sure that it's going to make sense to go out there.
I'm going to call campus security just to let them know about that woman, and I want to give them a description, but I couldn't really tell her height and weight very well from up here at the podium. And I guess I was so caught up in the surprise of the whole thing, and thinking about the ways this scenario could evolve badly, that I didn't really encode all the details that I wanted to, like what she was wearing, or her hair color, or the details of her face, as well as I wish that I had encoded these. So I told the class, look, I remember the random fact that she had sunglasses perched on top of her head, but I would really like your help in getting a rich description. So I got everyone to take out a piece of notebook paper, and I told them the best way to accurately identify someone is to have a collection of witnesses do this independently. So I said, look, if you could draw a picture of what she looked like and also write down her height and her weight, then I'm going to compile these and give Stanford Security an excellent description.
So one thing I said is, don't look at each other's drawings. Just draw on your own and then hand it up to me. So everyone drew their best version of how they remembered her, and some people relied more on the drawing part and some on the verbal descriptions. I collected these all up, and I said, great, I'm going to have a very complete average version of her that I can communicate to security.

So I started flipping through these and reading them out loud to the class, and we were all shocked, because one person described her as five foot four and wearing a blue shirt, and the next described her as five foot eight and wearing a white shirt. And the age estimates were between forty and sixty, and the hair color was everything from brown hair to gray hair to light hair and so on. And the body weight ranged over about forty pounds. So the whole collection provided me with no benefit at all in calling security. So I didn't call security, but not for the reason you might think. The reason I didn't call is because the woman was a professional actor that I had hired.
And the whole point of the exercise was to demonstrate firsthand to my students that vision is not like a camera and memory is not like a recorder. And from there I segued into the real topic of the lecture, which was on the problems and challenges of eyewitness testimony.

So let's begin with a woman named Jennifer Thompson. She was twenty-two and living in North Carolina, and a man broke into her apartment and raped her. It was a long ordeal, and Jennifer really paid attention to his face so that she would be able to identify him later. When this whole terrible event was over and he ran off, she drove herself to the hospital and had a rape kit taken, and then she went straight to the police, and with their help she was able to create a composite image of what he looked like. So the police went out and they gathered suspects, and they put together a lineup of seven men, and Jennifer concluded that one man in that lineup, a man named Ronald Cotton, was the man who had raped her.
So Cotton went to jail for the crime, but the whole time he was there he maintained his innocence. And many years into his jail sentence he saw the O.J. Simpson trial on the prison television, and he heard about DNA evidence, which was something he had never heard of before. And so he contacted his lawyer and he said, what is this DNA evidence? Is this something we should look into? And his lawyer was able to get a hold of the rape kit from almost eleven years earlier, and they were able to extract enough DNA from that, and it turned out that Cotton was in fact innocent. He was not the man who had done it. The man who had raped Jennifer Thompson was a man named Bobby Poole, and he was then apprehended and confessed to the rape. And so Ronald Cotton was released after having spent eleven years of his life in prison for a crime he did not commit.

So Jennifer was shattered by this, because she realized now that, based on her eyewitness testimony and her certainty, she had sent an innocent man to jail for eleven years.
And she said that one of the most important things to her as a rape victim was to never feel any guilt over what had happened to her. It wasn't her fault that she had been raped. But all of a sudden, now she was crushed with guilt, because she had ruined an innocent man's life by erroneously identifying him in a lineup.

So, about three years after Ronald Cotton got released, she decided she was going to reach out to him, make contact with him, and of course she was terrified about doing this. But she contacted him anyway, and they met at a church, and he said that he had already forgiven her, and they talked at length, and they both cried, and eventually they became friends, and they ended up writing an excellent book together. It's called Picking Cotton. And one of the things she does in the book is wrestle with the question that was most deeply burned into her mind: how could she have picked the wrong person and have felt so certain about it? How can one feel so sure and be wrong?
So they travel around and give talks together about the fallibility of eyewitness testimony, about how you can believe one hundred percent that's the face, but it doesn't actually necessitate that you are correct. I met the two of them at a conference when they were talking about the book. They're close friends now, and they travel around together so much that often people mistake them for a couple, and they say, oh, how did you two meet? And they say, it's a long story.

Okay. But the heart of the problem is that memory is always a reconstruction. Memory is not like a videotape. It's not as though your memory is something where your visual system is taking in information and retaining zeros and ones the way a cell phone video would. All you ever see is what you believe you are seeing out there. So if I were to ask you the details of what's in front of you right now, you'd be able to answer it once you pay attention to things. But you weren't aware of that even though it's been sitting on your retina this whole time. You're not actually seeing the world like a camera.
Instead, what you have is a rough internal model of what's going on. And when you need to take in more information, then you do. You go out and point your eyes and pull more in. So when I ask you about the scene in front of you, you need to employ your attentional mechanisms to go and crawl the scene out there and try to identify what's sitting there.

So today's question is: how reliable is eyewitness testimony in the courtroom? For my students, I was able to demonstrate to them the massive variety of their drawings: tall, short, heavy, light, curly hair, straight hair, glasses, no glasses. And none of their drawings looked particularly like the actor at all. But forget a classroom test. How does this play out in the real world? How often does eyewitness testimony end up convicting the wrong person, as happened with Jennifer and Ronald? Well, one thing you could do is look at people who are found to be innocent years after they were convicted for a crime, and then figure out what percentage of them were convicted in whole or in part based on eyewitness testimony.
So you may have heard of the Innocence Project. It is for people who are serving time in prison and are maintaining their innocence. The lawyers and forensic scientists at the Innocence Project go back and open these cold cases and work to reassess the case with DNA evidence. So what the Innocence Project found, out of the two hundred and forty people that they have exonerated, in other words, who have been found innocent of the crime they were convicted for, is that over sixty percent of them went to jail in whole or in part because of eyewitness testimony, where someone said, I know that's the guy. I saw him with my own eyes. There's no doubt in my mind.

Now this leads to a question: does eyewitness testimony matter? How swaying is it to jurors? Well, the answer is, it's enormously swaying. The US Supreme Court Justice William Brennan pointed out that, quote, all the evidence points rather strikingly to the conclusion that there is almost nothing more convincing than a live human being who takes the stand, points a finger at the defendant, and says, that's the one, unquote.
So imagine you have a prosecutor showing circumstantial evidence, or a scientist comes up and says, look, here's my neuroimaging evidence, or you have a lie detector expert or whatever. And then you get an eyewitness, and she takes the stand and she has tears in her eyes. She's genuine. She says, I know what I saw. He is the one. That has enormous sway on jurors. Justice Brennan goes on to point out that eyewitness testimony is the type of evidence that, quote, juries seem most receptive to and not inclined to discredit, end quote. In other words, a scientist might say something on the stand and the jury thinks, yeah, maybe, but there's another interpretation. But when it comes to an eyewitness, we tend to say, okay, that's the gospel.

Finally, Brennan points out that the court has long recognized, at least since the nineteen sixties, the quote, inherently suspect qualities of eyewitness identification evidence, unquote, and that the court finds that kind of evidence quote, notoriously unreliable, end quote. So that's Brennan, having spent a life on the bench, describing what he knows to be true. So why is it so swaying?
Well, maybe we can find some evidence from the laboratory. So my colleague Elizabeth Loftus at UC Irvine is probably the most famous memory and eyewitness researcher. Here's one example of her many, many studies. You tell mock jurors the details of a case. They get to hear the attorneys on both sides, the prosecution and the defense, give their arguments. Some of the jurors only hear circumstantial evidence. There's no eyewitness testimony in that case. And given the facts that they've heard, eighteen percent of the mock jurors conclude that the guy is guilty. But the other group hears eyewitness testimony, everything else being the same, and now seventy-two percent of the mock jurors find the guy guilty. So it goes from eighteen percent of people finding him guilty to seventy-two percent, just based on the eyewitness testimony. That means that even though we know eyewitness testimony is inherently unreliable, just look at the effect of saying, we have someone who saw that person. Maybe it was a little bit far away, maybe it was sort of dark out, but I'm pretty sure that was the guy. And bam, it goes up to seventy-two.
252 00:16:00,480 --> 00:16:04,200 Speaker 1: It's a huge difference, And for something that's so unreliable, 253 00:16:04,600 --> 00:16:07,640 Speaker 1: it's a little hard to justify that kind of pull. 254 00:16:08,880 --> 00:16:12,640 Speaker 1: So we know that eyewitness testimony is swaying, But why 255 00:16:12,760 --> 00:16:16,360 Speaker 1: is it so bad? Why can't we look at somebody 256 00:16:16,600 --> 00:16:19,760 Speaker 1: like the actor who broke into my classroom and just 257 00:16:19,880 --> 00:16:25,240 Speaker 1: describe accurately what she looked like. Well, for starters, it's 258 00:16:25,400 --> 00:16:29,160 Speaker 1: really really hard to remember what somebody looks like. Think 259 00:16:29,160 --> 00:16:32,080 Speaker 1: about the last time you went to the coffee shop, 260 00:16:32,280 --> 00:16:35,720 Speaker 1: what precisely did the person at the cash register look like? 261 00:16:36,480 --> 00:16:39,360 Speaker 1: Or if you had to describe your neighbor down the 262 00:16:39,440 --> 00:16:42,200 Speaker 1: street who you don't see that often to the police, 263 00:16:42,880 --> 00:16:45,720 Speaker 1: how well could you do so? If you really had 264 00:16:45,720 --> 00:16:47,800 Speaker 1: to pull out a piece of paper and draw the 265 00:16:47,840 --> 00:16:51,160 Speaker 1: face or describe it verbally. So police have worked for 266 00:16:51,200 --> 00:16:56,400 Speaker 1: decades to try to make this easier. The original approach 267 00:16:56,560 --> 00:16:59,880 Speaker 1: was to have a trained artist who sketches while you 268 00:17:00,520 --> 00:17:03,200 Speaker 1: and then you can work back and forth together. 
But 269 00:17:03,240 --> 00:17:06,159 Speaker 1: eventually the police put together systems where you could just 270 00:17:06,600 --> 00:17:09,919 Speaker 1: look at the individual features like the eyes and the 271 00:17:09,960 --> 00:17:12,280 Speaker 1: eyebrows and the nose and the mouth, and you flip 272 00:17:12,320 --> 00:17:14,800 Speaker 1: them until you get the face that you want out 273 00:17:14,800 --> 00:17:17,960 Speaker 1: of that. So instead of me asking you to take 274 00:17:17,960 --> 00:17:21,119 Speaker 1: out a piece of notebook paper and draw seth Rogen, 275 00:17:21,560 --> 00:17:23,800 Speaker 1: instead I say, look, here's a bunch of eyes and 276 00:17:23,880 --> 00:17:27,320 Speaker 1: noses and mouths, and I want you to flip these 277 00:17:27,359 --> 00:17:30,080 Speaker 1: around until you get a good picture of seth Rogen 278 00:17:30,520 --> 00:17:33,800 Speaker 1: or the person that you saw or the perpetrator. So 279 00:17:33,840 --> 00:17:36,120 Speaker 1: the first system for this was introduced in the late 280 00:17:36,240 --> 00:17:40,479 Speaker 1: nineteen fifties by the Los Angeles Police Department. It was 281 00:17:40,880 --> 00:17:44,280 Speaker 1: these features on transparent sheets and you rotate them around 282 00:17:44,320 --> 00:17:49,679 Speaker 1: and you reconstruct the face. 
This was called Identikit. And Identikit was used by Scotland Yard in a case in nineteen sixty-one. A woman was stabbed to death in an antique shop where she worked, and the police interviewed people and learned that someone suspicious had been in the shop a few days earlier. And so Scotland Yard turned to this invention from America and had their witnesses compile the suspect's face using Identikit. And some days later, a policeman saw a man named Edwin Bush on the street and recognized him from the Identikit pictures in the newspaper. And in fact, when they arrested him and took him to the station, they found he had a copy of the newspaper drawing in his own pocket, and his shoes matched the size of the shoes at the crime scene, and so on. And he ended up confessing to the murder.
297 00:18:39,520 --> 00:18:42,040 Speaker 1: So this was considered a real success in how you 298 00:18:42,080 --> 00:18:47,280 Speaker 1: could crank up the reliability of identifying people and identic 299 00:18:47,320 --> 00:18:51,600 Speaker 1: It was eventually replaced by photo fit in the nineteen seventies, 300 00:18:51,760 --> 00:18:56,360 Speaker 1: where instead of using drawings, you use collages of photos 301 00:18:56,400 --> 00:18:58,720 Speaker 1: of eyes and ears and noses and mouth and hairline 302 00:18:58,720 --> 00:19:02,160 Speaker 1: and so on, and the photos give a better image 303 00:19:02,160 --> 00:19:05,679 Speaker 1: of the suspect's face rather than the line drawings. And 304 00:19:05,840 --> 00:19:09,280 Speaker 1: with due course all this evolved into software in which 305 00:19:09,280 --> 00:19:12,239 Speaker 1: you take a three dimensional model of the head and 306 00:19:12,280 --> 00:19:13,879 Speaker 1: you do the same sort of thing where you can 307 00:19:13,960 --> 00:19:15,960 Speaker 1: change the eyes and nose and mouth and eyebrows and 308 00:19:16,000 --> 00:19:18,560 Speaker 1: so on. And there were many stories that came out 309 00:19:18,560 --> 00:19:22,720 Speaker 1: of all these systems and that increased confidence in the approach. 310 00:19:23,840 --> 00:19:27,040 Speaker 1: But it turns out that despite success stories with this, 311 00:19:27,480 --> 00:19:29,520 Speaker 1: when you really start looking into it, it turns out 312 00:19:29,560 --> 00:19:33,720 Speaker 1: these face recognition approaches are generally not so good in 313 00:19:33,800 --> 00:19:38,320 Speaker 1: terms of being able to identify people. 
Some years ago, 314 00:19:38,440 --> 00:19:42,399 Speaker 1: an MIT researcher named Pawan Sinha got a hold of 315 00:19:42,440 --> 00:19:45,919 Speaker 1: the standard facial software that police forces use, and he 316 00:19:46,040 --> 00:19:49,720 Speaker 1: requested an expert with several years of experience with the 317 00:19:49,760 --> 00:19:54,320 Speaker 1: system to put together reconstructions directly from a picture and 318 00:19:54,480 --> 00:19:58,320 Speaker 1: without time constraints. That meant the operator didn't have to 319 00:19:58,320 --> 00:20:01,200 Speaker 1: rely on verbal descriptions. He could just look at the 320 00:20:01,200 --> 00:20:05,439 Speaker 1: photos and put this together. So each year in my class, 321 00:20:05,440 --> 00:20:08,560 Speaker 1: I show the students these four photographs and I offer 322 00:20:08,640 --> 00:20:12,159 Speaker 1: ten dollars to whoever can identify the pictures, and no 323 00:20:12,240 --> 00:20:15,679 Speaker 1: one ever can. There are lots of wrong guesses, but 324 00:20:15,760 --> 00:20:18,679 Speaker 1: so far no one has gotten them. Right. Now, the 325 00:20:18,760 --> 00:20:24,280 Speaker 1: pictures are of Bill Cosby, Carl Sagan, Ronald Reagan, Michael Jordan. 326 00:20:24,720 --> 00:20:27,360 Speaker 1: Now I get it. These are faces that the younger 327 00:20:27,440 --> 00:20:31,679 Speaker 1: generation can't necessarily identify anyway. But at the peak of 328 00:20:31,720 --> 00:20:35,680 Speaker 1: these guys popularity in the nineteen eighties, people couldn't identify 329 00:20:35,880 --> 00:20:40,800 Speaker 1: these pictures. Then the reconstructions just weren't close enough to 330 00:20:40,840 --> 00:20:44,560 Speaker 1: the actual face to make the match. 
And in case 331 00:20:44,600 --> 00:20:48,360 Speaker 1: you think this might just be a laboratory artifact, there 332 00:20:48,359 --> 00:20:50,880 Speaker 1: are plenty of real life cases where you look at 333 00:20:50,880 --> 00:20:54,920 Speaker 1: the Identikit composite and then you look at the captured 334 00:20:55,040 --> 00:20:58,520 Speaker 1: person's photograph and you'll know right away that you would 335 00:20:58,520 --> 00:21:03,440 Speaker 1: never have identified that person from that drawing. Why? Well, 336 00:21:03,600 --> 00:21:06,680 Speaker 1: part of the problem is that the feature based composite 337 00:21:06,680 --> 00:21:10,720 Speaker 1: image is built from the pieces and parts. You say, 338 00:21:10,840 --> 00:21:12,760 Speaker 1: I think the guy's eyes looked sort of like this, 339 00:21:12,880 --> 00:21:14,480 Speaker 1: and his mouth looked sort of like that, and his 340 00:21:14,560 --> 00:21:16,840 Speaker 1: nose and his eyebrows and so on. But that's not 341 00:21:16,920 --> 00:21:21,040 Speaker 1: the way the human visual system works. It recognizes faces 342 00:21:21,080 --> 00:21:25,160 Speaker 1: based on the whole picture, what's called the gestalt, which 343 00:21:25,200 --> 00:21:28,359 Speaker 1: is the German word that signifies when a whole is 344 00:21:28,480 --> 00:21:33,000 Speaker 1: perceived as more than the sum of its parts. But 345 00:21:33,160 --> 00:21:37,480 Speaker 1: that is only one problem. Let's imagine that you could 346 00:21:37,520 --> 00:21:41,440 Speaker 1: somehow reconstruct a pretty good likeness. There's also the problem 347 00:21:41,600 --> 00:21:44,960 Speaker 1: of similarity, which is to say, a lot of people 348 00:21:45,119 --> 00:21:47,879 Speaker 1: just look alike.
So there was an amazing case some 349 00:21:47,960 --> 00:21:51,280 Speaker 1: years ago in which a man named Lawrence Berson, who 350 00:21:51,320 --> 00:21:55,080 Speaker 1: had curly hair and big square glasses, was convicted of 351 00:21:55,160 --> 00:21:58,720 Speaker 1: several rape cases and went to jail. And then another guy, 352 00:21:58,840 --> 00:22:02,960 Speaker 1: George Morales, with curly hair and big square glasses, was 353 00:22:03,000 --> 00:22:07,119 Speaker 1: convicted of several robberies and went to jail, and both 354 00:22:07,160 --> 00:22:10,399 Speaker 1: protested their innocence, and it turns out they were telling 355 00:22:10,440 --> 00:22:13,200 Speaker 1: the truth. The guy who did all of the crimes 356 00:22:13,640 --> 00:22:17,400 Speaker 1: was a guy named Richard Carbone, who had curly hair 357 00:22:17,520 --> 00:22:22,760 Speaker 1: and big square glasses. Carbone had been described by several eyewitnesses, 358 00:22:23,200 --> 00:22:26,600 Speaker 1: and these two other poor guys served jail time because 359 00:22:27,000 --> 00:22:29,879 Speaker 1: they were picked out of the police lineups. And I'm putting 360 00:22:29,920 --> 00:22:32,399 Speaker 1: a photo of these guys on eagleman dot com slash 361 00:22:32,440 --> 00:22:36,400 Speaker 1: podcast because it's so striking how different people can look 362 00:22:36,440 --> 00:22:38,800 Speaker 1: similar to one another. So I've told you so far 363 00:22:38,840 --> 00:22:42,520 Speaker 1: that the brain encodes faces holistically, not in bits 364 00:22:42,520 --> 00:22:45,000 Speaker 1: and parts, and also that a lot of people look 365 00:22:45,119 --> 00:22:49,359 Speaker 1: roughly similar. But neither of those problems even competes with 366 00:22:49,480 --> 00:22:54,320 Speaker 1: the biggest problem of eyewitness testimony, and that is our 367 00:22:54,600 --> 00:22:58,520 Speaker 1: memories are terrible.
We all like to think that we 368 00:22:58,560 --> 00:23:01,760 Speaker 1: can remember a face and describe it, but as my 369 00:23:01,880 --> 00:23:04,440 Speaker 1: students saw from trying to draw the woman who broke 370 00:23:04,480 --> 00:23:08,360 Speaker 1: into my classroom and spoke angrily with me for a minute, 371 00:23:08,840 --> 00:23:12,479 Speaker 1: it's really hard to remember the details of someone's face 372 00:23:12,560 --> 00:23:15,600 Speaker 1: and then try to reconstruct that. And the past few 373 00:23:15,680 --> 00:23:20,280 Speaker 1: decades have seen really great studies which undermine our confidence 374 00:23:20,400 --> 00:23:23,120 Speaker 1: that this should be easy to do. So the first 375 00:23:23,160 --> 00:23:26,840 Speaker 1: thing to appreciate is that there are two phases of memory. 376 00:23:26,880 --> 00:23:30,360 Speaker 1: The first is encoding. So when the woman burst into 377 00:23:30,400 --> 00:23:33,359 Speaker 1: my classroom, the students were encoding what they were seeing. 378 00:23:33,440 --> 00:23:36,639 Speaker 1: Their brains were writing it down. And the second phase 379 00:23:36,680 --> 00:23:41,159 Speaker 1: is retrieval, or pulling the memory back up later. In 380 00:23:41,240 --> 00:23:44,280 Speaker 1: the case of my classroom, I waited about thirty minutes 381 00:23:44,359 --> 00:23:47,720 Speaker 1: before asking them to draw the face, and it turns 382 00:23:47,760 --> 00:23:51,320 Speaker 1: out the forgetting curve is quite steep. By thirty minutes in, 383 00:23:51,400 --> 00:23:54,800 Speaker 1: they've forgotten most of the important details. So let's break 384 00:23:54,840 --> 00:23:57,639 Speaker 1: these down one at a time. We'll start with encoding, 385 00:23:57,720 --> 00:24:00,919 Speaker 1: writing down the memory. One of the biggest problems with 386 00:24:01,280 --> 00:24:06,320 Speaker 1: encoding a memory during a crime is what's called weapon focus.
387 00:24:06,720 --> 00:24:09,399 Speaker 1: So if a person has a knife on you, or 388 00:24:09,520 --> 00:24:12,560 Speaker 1: is wielding a gun, or has a baseball bat or whatever, 389 00:24:13,200 --> 00:24:16,400 Speaker 1: your brain can't help but be focused on that, 390 00:24:17,119 --> 00:24:20,000 Speaker 1: and as a result, it's harder to remember details about 391 00:24:20,000 --> 00:24:24,640 Speaker 1: the person's face. So here's another study from Elizabeth Loftus. 392 00:24:24,720 --> 00:24:27,160 Speaker 1: She has you come into a room and she tells 393 00:24:27,200 --> 00:24:30,520 Speaker 1: you you're going to participate in a psychology experiment. And 394 00:24:30,600 --> 00:24:34,080 Speaker 1: while you're waiting there, you hear two people arguing, and 395 00:24:34,119 --> 00:24:36,480 Speaker 1: then one person comes out of the room and he's 396 00:24:36,480 --> 00:24:38,880 Speaker 1: got some grease on his hands and he's holding a pen. 397 00:24:39,560 --> 00:24:43,359 Speaker 1: So that's scenario one. Scenario two is to hear the 398 00:24:43,359 --> 00:24:45,920 Speaker 1: same thing, the two people arguing, and then the guy 399 00:24:45,960 --> 00:24:49,800 Speaker 1: comes out and he's holding a blood stained knife. And 400 00:24:49,800 --> 00:24:53,560 Speaker 1: then she studies how well you can identify the person 401 00:24:53,680 --> 00:24:56,560 Speaker 1: after that. And you can imagine what happens when the 402 00:24:56,560 --> 00:24:59,720 Speaker 1: guy is holding the blood stained knife. How good is 403 00:24:59,760 --> 00:25:03,240 Speaker 1: your identification of what his face looked like? It's terrible. 404 00:25:03,640 --> 00:25:07,439 Speaker 1: When there's a weapon involved, witnesses are not encoding the 405 00:25:07,560 --> 00:25:10,880 Speaker 1: details about the person. They have high anxiety and they're 406 00:25:10,920 --> 00:25:14,240 Speaker 1: staring at the weapon.
It's the most salient thing in 407 00:25:14,280 --> 00:25:16,560 Speaker 1: the scene. It is the ball to keep your eye on, 408 00:25:17,359 --> 00:25:20,960 Speaker 1: so weapon focus is one problem. A second problem with 409 00:25:21,119 --> 00:25:24,159 Speaker 1: encoding memory during a crime scene is what is called 410 00:25:24,359 --> 00:25:28,399 Speaker 1: cue overload. When there's a really salient scene going on, 411 00:25:28,440 --> 00:25:31,320 Speaker 1: when there's lots of stuff happening, it's just hard to 412 00:25:31,480 --> 00:25:33,720 Speaker 1: encode all of that. So let's say that the woman 413 00:25:33,760 --> 00:25:36,320 Speaker 1: had come into my classroom and she was not only 414 00:25:36,400 --> 00:25:39,480 Speaker 1: yelling at me, but she starts throwing something at me. 415 00:25:39,720 --> 00:25:41,600 Speaker 1: And then as soon as that happens, I pull out 416 00:25:41,600 --> 00:25:43,880 Speaker 1: a taser and then a student over on the left 417 00:25:43,880 --> 00:25:46,680 Speaker 1: starts screaming, and suddenly a student on the right leaps 418 00:25:46,760 --> 00:25:49,320 Speaker 1: up and says, I'm going to protect doctor Eagleman and 419 00:25:49,480 --> 00:25:52,679 Speaker 1: dives in to tackle her. Imagine all this stuff is happening, 420 00:25:52,720 --> 00:25:55,800 Speaker 1: bang bang bang. Well, it's really hard to encode all 421 00:25:55,840 --> 00:25:58,639 Speaker 1: that because there's just too much going on at the 422 00:25:58,680 --> 00:26:03,879 Speaker 1: same time, too many salient events happening at once. That's 423 00:26:03,960 --> 00:26:09,480 Speaker 1: the cue overload effect. A third problem with encoding memory 424 00:26:10,000 --> 00:26:13,160 Speaker 1: has to do with what's called the other race effect.
425 00:26:13,440 --> 00:26:17,040 Speaker 1: We're much better at encoding faces of people who look 426 00:26:17,200 --> 00:26:20,760 Speaker 1: like us, but for people who look different, it's harder 427 00:26:20,800 --> 00:26:24,200 Speaker 1: to catch the important details. Now, just to be clear, 428 00:26:24,520 --> 00:26:28,320 Speaker 1: this isn't racism. It's not that you're discriminating against a group. 429 00:26:28,760 --> 00:26:31,600 Speaker 1: The issue here is that the neurons in your visual 430 00:26:31,680 --> 00:26:35,560 Speaker 1: cortex just aren't trained up on the details of those faces. 431 00:26:35,960 --> 00:26:39,040 Speaker 1: It's a matter of what type of faces your brain 432 00:26:39,160 --> 00:26:42,800 Speaker 1: knows well. In other words, depending on where you've grown up, 433 00:26:43,240 --> 00:26:46,840 Speaker 1: you might have a difficult time making an accurate identification 434 00:26:47,080 --> 00:26:51,640 Speaker 1: if you were shown a lineup of Cambodians or Icelanders, 435 00:26:52,160 --> 00:26:57,000 Speaker 1: or Maori or Uyghurs or Inuit. It's not that you 436 00:26:57,040 --> 00:27:00,639 Speaker 1: have anything against these cultures. It's that you've grown up training 437 00:27:00,680 --> 00:27:04,640 Speaker 1: your visual system on the faces particular to your culture, 438 00:27:04,640 --> 00:27:07,760 Speaker 1: whatever that is, and now you're dealing with measurements that 439 00:27:07,800 --> 00:27:11,440 Speaker 1: are all a little bit different, like the interocular distance 440 00:27:11,480 --> 00:27:13,640 Speaker 1: and the nose length and the philtrum and the chin 441 00:27:13,720 --> 00:27:16,600 Speaker 1: shape and so on. And if that's not where your 442 00:27:16,840 --> 00:27:20,240 Speaker 1: experience and expertise lies, you're going to be worse at it. 443 00:27:20,640 --> 00:27:25,240 Speaker 1: So the other race effect often leaves eyewitness identification hobbled.
444 00:27:25,680 --> 00:27:29,000 Speaker 1: And the fourth problem with encoding memories is that it 445 00:27:29,160 --> 00:27:33,160 Speaker 1: often just performs worse when there's stress and trauma. During 446 00:27:33,160 --> 00:27:37,200 Speaker 1: the event, something really awful and unexpected happens and it's 447 00:27:37,200 --> 00:27:40,520 Speaker 1: simply not even part of what your world model thinks 448 00:27:40,640 --> 00:27:43,480 Speaker 1: is a possibility, and so your brain has a difficult 449 00:27:43,560 --> 00:27:47,240 Speaker 1: time encoding what the heck just happened. So these are 450 00:27:47,240 --> 00:27:50,760 Speaker 1: all problems with encoding the memory in the first place: 451 00:27:51,119 --> 00:27:56,000 Speaker 1: weapon focus, cue overload, the other race effect, and stress and trauma 452 00:27:56,080 --> 00:28:14,280 Speaker 1: during the event. Now, encoding is only half the game, 453 00:28:14,320 --> 00:28:17,800 Speaker 1: because retrieval is the other half. So let's say you've 454 00:28:17,800 --> 00:28:20,760 Speaker 1: written down some sort of rough memory of the woman 455 00:28:20,800 --> 00:28:24,400 Speaker 1: who came into the classroom. She's somewhere between five three 456 00:28:24,440 --> 00:28:27,679 Speaker 1: and five nine. Okay, you encoded that, sort of. But 457 00:28:28,000 --> 00:28:30,760 Speaker 1: now there are problems with the retrieval of the memory, 458 00:28:30,800 --> 00:28:34,280 Speaker 1: pulling up what was there in your brain. One problem 459 00:28:34,640 --> 00:28:38,720 Speaker 1: is what's called the misinformation effect.
If you're told something 460 00:28:39,120 --> 00:28:42,040 Speaker 1: about a crime in terms of what happened at the scene, 461 00:28:42,400 --> 00:28:44,760 Speaker 1: or who was there or what they looked like, that 462 00:28:44,840 --> 00:28:47,840 Speaker 1: will become part of your memory and you may not 463 00:28:47,920 --> 00:28:51,680 Speaker 1: be able to distinguish that from what actually happened. So 464 00:28:52,160 --> 00:28:56,480 Speaker 1: Elizabeth Loftus and others have studied this by showing people 465 00:28:56,520 --> 00:28:59,800 Speaker 1: a picture of a car at a stop sign, and then afterwards, 466 00:28:59,760 --> 00:29:02,240 Speaker 1: after the picture is gone, they give a text 467 00:29:02,400 --> 00:29:05,440 Speaker 1: description of the same picture, but in the text they 468 00:29:05,440 --> 00:29:08,000 Speaker 1: say it was a yield sign. And then they have 469 00:29:08,080 --> 00:29:11,400 Speaker 1: people draw the picture as they remember it, and they 470 00:29:11,560 --> 00:29:15,320 Speaker 1: draw the original scene but with a yield sign. So 471 00:29:15,560 --> 00:29:19,680 Speaker 1: they're told something after the memory was encoded. They're told 472 00:29:19,680 --> 00:29:22,160 Speaker 1: that it was a yield sign, and when they're retrieving 473 00:29:22,200 --> 00:29:25,240 Speaker 1: the memory, they believe that the whole time they saw 474 00:29:25,280 --> 00:29:28,200 Speaker 1: the yield sign there instead of the stop sign. In 475 00:29:28,240 --> 00:29:32,040 Speaker 1: the case of my classroom invader, you may remember that 476 00:29:32,080 --> 00:29:35,600 Speaker 1: I told the students that I remembered the woman had 477 00:29:35,720 --> 00:29:39,120 Speaker 1: sunglasses on top of her head. Well, she didn't have 478 00:29:39,200 --> 00:29:41,800 Speaker 1: sunglasses on her head.
I made up that fact and 479 00:29:41,840 --> 00:29:45,080 Speaker 1: I asserted it, and a number of students incorporated that 480 00:29:45,200 --> 00:29:48,520 Speaker 1: false item into their memory of what they think they saw, 481 00:29:49,240 --> 00:29:52,240 Speaker 1: and afterwards they felt certain that that was a part 482 00:29:52,280 --> 00:29:56,040 Speaker 1: of their original memory. Now that might sound crazy, because 483 00:29:56,040 --> 00:29:59,120 Speaker 1: you think I could distinguish my own memory from something 484 00:29:59,200 --> 00:30:03,000 Speaker 1: someone else said, but it turns out you often can't. 485 00:30:03,840 --> 00:30:08,280 Speaker 1: So misinformation after the fact is one problem that happens 486 00:30:08,280 --> 00:30:14,000 Speaker 1: with retrieval. A second problem is what's known as unconscious transference. 487 00:30:14,600 --> 00:30:16,520 Speaker 1: Now to explain this, I'm going to tell you an 488 00:30:16,600 --> 00:30:23,080 Speaker 1: absolutely incredible true story. There's an Australian psychologist named Donald Thomson, 489 00:30:23,400 --> 00:30:26,560 Speaker 1: and one evening he went on live television to talk 490 00:30:26,600 --> 00:30:31,440 Speaker 1: about eyewitness testimony and memory. Unbeknownst to him, while he 491 00:30:31,520 --> 00:30:34,920 Speaker 1: was on live television, a woman had her 492 00:30:34,960 --> 00:30:38,680 Speaker 1: apartment broken into and she was raped. She went to 493 00:30:38,760 --> 00:30:43,440 Speaker 1: the police and described Donald Thomson and had his face drawn. 494 00:30:43,520 --> 00:30:47,240 Speaker 1: She insisted this was her rapist. He was arrested the 495 00:30:47,280 --> 00:30:50,360 Speaker 1: next day, but he said, I have a watertight alibi, 496 00:30:50,400 --> 00:30:53,400 Speaker 1: which is that I was in a live television studio 497 00:30:53,800 --> 00:30:56,280 Speaker 1: sitting with the police commissioner.
And it took a little 498 00:30:56,320 --> 00:30:58,160 Speaker 1: bit of time for all this to get worked out 499 00:30:58,200 --> 00:31:02,520 Speaker 1: and unraveled. But of course his alibi was verifiable. What 500 00:31:02,840 --> 00:31:07,200 Speaker 1: happened was, while he was on the television, she was 501 00:31:07,200 --> 00:31:10,880 Speaker 1: getting raped, and she transferred her memory of the rapist's 502 00:31:10,960 --> 00:31:15,200 Speaker 1: face to his face. She got confused about whose face 503 00:31:15,560 --> 00:31:18,640 Speaker 1: was whose. It's such a crazy irony that he is 504 00:31:18,680 --> 00:31:21,440 Speaker 1: a memory expert, because it could have been anybody on 505 00:31:21,480 --> 00:31:25,600 Speaker 1: the television, but there it is, incredible but true. So 506 00:31:26,360 --> 00:31:30,920 Speaker 1: unconscious transference is a problem where a victim can't distinguish 507 00:31:31,280 --> 00:31:34,680 Speaker 1: between the perpetrator of a crime and some other face 508 00:31:34,720 --> 00:31:37,400 Speaker 1: that they saw in the same context or a totally 509 00:31:37,400 --> 00:31:40,120 Speaker 1: different context. So those are some of the problems with 510 00:31:40,360 --> 00:31:44,080 Speaker 1: memory retrieval. And one of the places where these problems 511 00:31:44,080 --> 00:31:47,160 Speaker 1: come up all the time is in the police lineup. 512 00:31:50,440 --> 00:31:52,920 Speaker 1: So imagine that you are presented with a lineup of several 513 00:31:52,920 --> 00:31:56,200 Speaker 1: people and you have to make a choice about which 514 00:31:56,280 --> 00:31:59,840 Speaker 1: person you saw doing the crime. Well, one of the 515 00:31:59,880 --> 00:32:03,000 Speaker 1: things that started getting recognized in the nineteen sixties and 516 00:32:03,040 --> 00:32:07,880 Speaker 1: went to the Supreme Court was the idea of police suggestibility.
517 00:32:08,440 --> 00:32:11,240 Speaker 1: It turns out that if the police already have their 518 00:32:11,280 --> 00:32:14,160 Speaker 1: man in mind, they already think it's Fred. Whether or 519 00:32:14,200 --> 00:32:16,440 Speaker 1: not that's correct, they believe it's Fred, and they want 520 00:32:16,440 --> 00:32:19,080 Speaker 1: you to say it's Fred. There are all kinds of 521 00:32:19,120 --> 00:32:23,080 Speaker 1: ways that they can suggest that to you, including just 522 00:32:23,120 --> 00:32:27,680 Speaker 1: things like positive feedback. So if you say, I think 523 00:32:27,720 --> 00:32:30,680 Speaker 1: that was the guy, they'll say, yeah, good job, that's 524 00:32:30,680 --> 00:32:35,160 Speaker 1: what we think also. And it turns out that influences 525 00:32:35,320 --> 00:32:39,400 Speaker 1: the confidence of the eyewitness. When the trial starts months later, 526 00:32:39,560 --> 00:32:43,400 Speaker 1: you'll say I'm absolutely certain that it was Fred, even 527 00:32:43,440 --> 00:32:46,160 Speaker 1: though you might not remember that you weren't certain at all. 528 00:32:46,560 --> 00:32:49,760 Speaker 1: But because of the positive feedback, which can even be 529 00:32:50,080 --> 00:32:52,800 Speaker 1: quite subtle, like, you know, just a nod or a 530 00:32:52,840 --> 00:32:57,320 Speaker 1: smile or whatever, your confidence goes way up. And thirty 531 00:32:57,400 --> 00:33:02,080 Speaker 1: years of psychology studies in the laboratory have verified the 532 00:33:02,120 --> 00:33:07,400 Speaker 1: power of this suggestibility. As a result, psychologists have made 533 00:33:07,480 --> 00:33:10,360 Speaker 1: suggestions to police forces and this has spun all the 534 00:33:10,400 --> 00:33:12,880 Speaker 1: way up to the Supreme Court, with the result that 535 00:33:13,000 --> 00:33:17,880 Speaker 1: police are not allowed to introduce any suggestibility into the process.
One 536 00:33:17,880 --> 00:33:20,040 Speaker 1: of the ways to take care of this is to 537 00:33:20,040 --> 00:33:24,000 Speaker 1: make the lineup identification double blind. That means the police 538 00:33:24,000 --> 00:33:26,840 Speaker 1: officer who is running the lineup doesn't even know who 539 00:33:26,880 --> 00:33:29,680 Speaker 1: the main suspect is. And this way the person doing 540 00:33:29,680 --> 00:33:32,880 Speaker 1: the identifying gets no feedback at all. And by the way, 541 00:33:32,920 --> 00:33:35,640 Speaker 1: the main thing that everyone's worried about with lineups is 542 00:33:35,920 --> 00:33:39,680 Speaker 1: false identification. If the perpetrator is in the lineup and 543 00:33:39,760 --> 00:33:42,400 Speaker 1: you miss him, that's another kind of problem. But the 544 00:33:42,480 --> 00:33:45,840 Speaker 1: really terrible problem is sending an innocent person to prison 545 00:33:46,320 --> 00:33:49,720 Speaker 1: with the false belief that you have identified him. I'll 546 00:33:49,760 --> 00:33:53,440 Speaker 1: give you another problem that psychologists and legal theorists have studied, 547 00:33:53,720 --> 00:33:57,600 Speaker 1: and that's the issue of co witness contamination. The idea 548 00:33:57,720 --> 00:34:00,320 Speaker 1: is that if you see a crime and I'm standing 549 00:34:00,360 --> 00:34:02,560 Speaker 1: there and I see it too, and then we start 550 00:34:02,600 --> 00:34:05,520 Speaker 1: talking about it with one another, we can't help but 551 00:34:05,800 --> 00:34:10,040 Speaker 1: influence each other's memories. This is a close relative of 552 00:34:10,080 --> 00:34:13,959 Speaker 1: the misinformation problem. If you remember that she had curly hair, 553 00:34:14,000 --> 00:34:16,160 Speaker 1: but I say I'm pretty sure she had straight hair. 
554 00:34:16,480 --> 00:34:18,719 Speaker 1: Or if you think she was unathletic and I say, no, 555 00:34:18,840 --> 00:34:22,600 Speaker 1: she was quite athletic, both our memories become contaminated by 556 00:34:22,640 --> 00:34:25,200 Speaker 1: the other one's statement, and we more and more come 557 00:34:25,239 --> 00:34:28,799 Speaker 1: to believe things that we didn't originally. So one thing 558 00:34:28,840 --> 00:34:32,080 Speaker 1: that police know to do straight away is to separate witnesses. 559 00:34:32,320 --> 00:34:34,680 Speaker 1: And there are many, many aspects that have come from 560 00:34:34,719 --> 00:34:37,480 Speaker 1: research that are now built into the way that police 561 00:34:37,600 --> 00:34:40,880 Speaker 1: optimally do lineups. For example, they try to make sure 562 00:34:40,960 --> 00:34:44,960 Speaker 1: that you don't see photographs of a suspect before the lineup, 563 00:34:45,040 --> 00:34:48,600 Speaker 1: because if you do, you're really likely to identify that guy. 564 00:34:49,120 --> 00:34:52,279 Speaker 1: One place this comes up is contamination by photos in 565 00:34:52,400 --> 00:34:56,600 Speaker 1: news stories. So CNN says the suspect looks like this, 566 00:34:56,960 --> 00:34:59,560 Speaker 1: and then you're brought into a lineup, and whether or 567 00:34:59,640 --> 00:35:03,399 Speaker 1: not you remember having seen that story on CNN, that 568 00:35:03,640 --> 00:35:07,840 Speaker 1: influences your performance in the lineup. So many guidelines have 569 00:35:07,880 --> 00:35:10,000 Speaker 1: come out of this research. I'll give you just a 570 00:35:10,000 --> 00:35:12,960 Speaker 1: few examples. One guideline is to make sure that the 571 00:35:13,080 --> 00:35:16,560 Speaker 1: eyewitness is aware that the perpetrator might not be in 572 00:35:16,640 --> 00:35:19,800 Speaker 1: the lineup.
Another is the double blind procedure that I 573 00:35:19,920 --> 00:35:22,719 Speaker 1: mentioned, where the officer running the lineup doesn't know 574 00:35:22,760 --> 00:35:25,560 Speaker 1: who the suspect is, so they can't subject the eyewitness to any of their 575 00:35:25,640 --> 00:35:29,200 Speaker 1: suspicions as to who the suspect is. And courts are 576 00:35:29,239 --> 00:35:34,000 Speaker 1: increasingly recognizing that speed of recognition matters. Generally speaking, 577 00:35:34,080 --> 00:35:38,320 Speaker 1: if the witness quickly identifies the perpetrator, then the selection 578 00:35:38,560 --> 00:35:42,319 Speaker 1: is more likely to be correct. Finally, if the appearance 579 00:35:42,640 --> 00:35:47,160 Speaker 1: of a person stands out amongst an otherwise nondescript crowd, 580 00:35:47,640 --> 00:35:50,680 Speaker 1: then an eyewitness is more likely to select that person, 581 00:35:51,120 --> 00:35:53,840 Speaker 1: regardless of their own recollection of the criminal. This is 582 00:35:53,880 --> 00:35:57,440 Speaker 1: known as the distractor or dud effect. So there are 583 00:35:57,480 --> 00:36:01,480 Speaker 1: many places where scientific study has made contributions to 584 00:36:01,560 --> 00:36:06,520 Speaker 1: the optimal way to run eyewitness identification procedures. Now 585 00:36:06,680 --> 00:36:08,960 Speaker 1: what if somebody comes in and they say, I saw 586 00:36:09,000 --> 00:36:11,759 Speaker 1: the guy, and I know with one hundred percent certainty 587 00:36:12,080 --> 00:36:14,799 Speaker 1: that was the guy, versus someone else who comes in 588 00:36:14,840 --> 00:36:17,560 Speaker 1: and says, I don't know, I was looking out a window. 589 00:36:17,880 --> 00:36:20,360 Speaker 1: There was a car robbery going on two stories below me. 590 00:36:20,400 --> 00:36:22,160 Speaker 1: It was kind of dark. I couldn't really see him 591 00:36:22,239 --> 00:36:24,799 Speaker 1: very well. I think that was the guy.
I don't 592 00:36:24,840 --> 00:36:28,040 Speaker 1: really know. Do you think those two scenarios should be 593 00:36:28,120 --> 00:36:31,360 Speaker 1: treated differently? In other words, what is the relationship between 594 00:36:31,480 --> 00:36:36,359 Speaker 1: confidence and accuracy? Well, the United States Supreme Court had 595 00:36:36,400 --> 00:36:39,160 Speaker 1: to answer this question in nineteen seventy two in a 596 00:36:39,200 --> 00:36:43,240 Speaker 1: case called Neil versus Biggers. What happened was that the victim, 597 00:36:43,320 --> 00:36:47,239 Speaker 1: Margaret Beemer, was dragged into the woods and assaulted. She 598 00:36:47,440 --> 00:36:51,560 Speaker 1: described her attacker to the police only in very general terms. 599 00:36:52,120 --> 00:36:54,400 Speaker 1: Then she was shown a bunch of photographs of people 600 00:36:54,400 --> 00:36:57,719 Speaker 1: who met the description, but she couldn't identify anyone from 601 00:36:57,760 --> 00:37:01,680 Speaker 1: the photos or lineups for seven months. So the 602 00:37:01,719 --> 00:37:04,520 Speaker 1: police then get a guy on another charge, a guy 603 00:37:04,600 --> 00:37:07,400 Speaker 1: named Archie Biggers, and they decide to include him in 604 00:37:07,480 --> 00:37:10,359 Speaker 1: a lineup. But they couldn't find anyone who made 605 00:37:10,360 --> 00:37:13,120 Speaker 1: a good match to this guy's looks, so they did 606 00:37:13,160 --> 00:37:15,919 Speaker 1: what's called a show up instead, where there's no one 607 00:37:15,920 --> 00:37:20,239 Speaker 1: else there but the one suspect. So two detectives walked 608 00:37:20,360 --> 00:37:25,040 Speaker 1: Biggers past the victim, and at her request, the police 609 00:37:25,080 --> 00:37:28,640 Speaker 1: directed him to say shut up or I will kill you.
610 00:37:29,280 --> 00:37:31,920 Speaker 1: The testimony at trial wasn't clear as to whether she 611 00:37:32,000 --> 00:37:34,280 Speaker 1: had first identified him and then asked that he repeat 612 00:37:34,360 --> 00:37:38,239 Speaker 1: those words, or made her identification after he'd spoken. In 613 00:37:38,320 --> 00:37:42,280 Speaker 1: any event, the victim testified that she had no doubt 614 00:37:42,320 --> 00:37:46,840 Speaker 1: about her identification, and so Biggers was convicted, and he 615 00:37:46,880 --> 00:37:49,360 Speaker 1: came back on appeal and argued it simply wasn't fair, 616 00:37:49,400 --> 00:37:53,160 Speaker 1: this was unreliable. The lower courts agreed with him and said 617 00:37:53,600 --> 00:37:57,520 Speaker 1: no way, we're not accepting this as evidence, because this 618 00:37:57,719 --> 00:38:03,000 Speaker 1: identification process was extremely suggestive. But the US Supreme Court 619 00:38:03,040 --> 00:38:07,239 Speaker 1: eventually heard this case and reversed that decision. They concluded 620 00:38:07,520 --> 00:38:12,480 Speaker 1: it was okay. Why? Because she said that she was certain. 621 00:38:13,040 --> 00:38:16,320 Speaker 1: In other words, they concluded that how confident a person 622 00:38:16,520 --> 00:38:20,560 Speaker 1: is relates to how good their evidence is. They felt 623 00:38:20,560 --> 00:38:24,359 Speaker 1: that high confidence allows you to judge the veracity of 624 00:38:24,440 --> 00:38:27,399 Speaker 1: a witness. So the Supreme Court said 625 00:38:27,480 --> 00:38:31,520 Speaker 1: implicitly that they think there is a relationship between confidence 626 00:38:31,880 --> 00:38:37,919 Speaker 1: and accuracy. But generally, cognitive psychologists find that this relationship 627 00:38:38,040 --> 00:38:41,440 Speaker 1: is very weak, because of all the things we've talked 628 00:38:41,440 --> 00:38:45,359 Speaker 1: about in this episode.
Confidence does not work as a 629 00:38:45,480 --> 00:38:51,080 Speaker 1: yardstick of accuracy, especially as time goes on. And I'm 630 00:38:51,120 --> 00:38:53,800 Speaker 1: going to do an episode soon about how the legal 631 00:38:53,840 --> 00:38:58,279 Speaker 1: system decides which technologies to allow into its courtrooms. But 632 00:38:58,320 --> 00:39:00,040 Speaker 1: for now, I'm just going to point out that it 633 00:39:00,080 --> 00:39:04,200 Speaker 1: requires passing high scientific standards. So how do you think 634 00:39:04,320 --> 00:39:09,359 Speaker 1: eyewitness testimony compares? Well, a version of this question hit 635 00:39:09,400 --> 00:39:13,560 Speaker 1: the US Supreme Court in twenty eleven: Perry versus New Hampshire. 636 00:39:13,840 --> 00:39:16,719 Speaker 1: I spoke with Sanjay Gupta on CNN about this case 637 00:39:16,760 --> 00:39:19,920 Speaker 1: when it was getting decided. The question the court was 638 00:39:20,000 --> 00:39:24,760 Speaker 1: asking was essentially, if there's lousy eyewitness testimony in a case, 639 00:39:25,239 --> 00:39:28,879 Speaker 1: should it be allowed? When does that violate the right 640 00:39:29,000 --> 00:39:33,000 Speaker 1: to due process, your right to a fair trial? So 641 00:39:33,120 --> 00:39:35,359 Speaker 1: Perry was a young man in New Hampshire who got 642 00:39:35,400 --> 00:39:38,799 Speaker 1: convicted of stealing a car, and he was convicted based 643 00:39:38,840 --> 00:39:41,759 Speaker 1: in large part on a woman who was looking out 644 00:39:41,800 --> 00:39:45,759 Speaker 1: a second story window in the dark. She couldn't see 645 00:39:45,800 --> 00:39:48,560 Speaker 1: well from that distance.
So Perry got convicted in a 646 00:39:48,600 --> 00:39:51,440 Speaker 1: New Hampshire court, and he came back and tried to appeal, 647 00:39:51,480 --> 00:39:54,240 Speaker 1: and said, how can you possibly take this woman's 648 00:39:54,640 --> 00:39:57,440 Speaker 1: word for it? How could you send me to jail 649 00:39:57,520 --> 00:40:03,160 Speaker 1: based on such unreliable witness testimony? The lower courts upheld 650 00:40:03,239 --> 00:40:06,200 Speaker 1: the decision, and so this question spun up to the 651 00:40:06,320 --> 00:40:08,800 Speaker 1: US Supreme Court. The question that was on the table 652 00:40:08,960 --> 00:40:15,120 Speaker 1: is, does unreliable eyewitness testimony violate your constitutional rights? And 653 00:40:15,200 --> 00:40:18,080 Speaker 1: what the court said is, look, the only thing we 654 00:40:18,160 --> 00:40:21,520 Speaker 1: care about is whether there was suggestibility by the police. 655 00:40:21,640 --> 00:40:25,640 Speaker 1: If the police manipulated the process, then the due process 656 00:40:25,719 --> 00:40:29,440 Speaker 1: clause of the Constitution is applicable. But they said, this 657 00:40:29,520 --> 00:40:35,720 Speaker 1: does not hold generally for eyewitness testimony. Using unreliable evidence 658 00:40:35,760 --> 00:40:39,799 Speaker 1: against you is not unconstitutional. There's no right that you'll 659 00:40:39,840 --> 00:40:42,920 Speaker 1: have great evidence. Instead, it's the job of the jury 660 00:40:43,280 --> 00:40:46,600 Speaker 1: to figure it out, to weigh the evidence. So that's 661 00:40:46,600 --> 00:40:50,440 Speaker 1: what the court decided.
And reading between the lines, what 662 00:40:50,480 --> 00:40:52,800 Speaker 1: do you think one of the concerns of the court 663 00:40:53,000 --> 00:40:57,200 Speaker 1: was here? It's this: if you introduce a question about 664 00:40:57,239 --> 00:41:01,480 Speaker 1: the reliability of the witness, then you would, in theory, 665 00:41:01,640 --> 00:41:06,360 Speaker 1: require a pre-trial hearing to determine if her testimony 666 00:41:06,480 --> 00:41:09,000 Speaker 1: is acceptable or not. And what does that do to 667 00:41:09,160 --> 00:41:12,920 Speaker 1: every single case running in the nation? One of the 668 00:41:12,920 --> 00:41:15,959 Speaker 1: Supreme Court justices posed this during the case. He said, 669 00:41:16,280 --> 00:41:19,320 Speaker 1: what about jailhouse testimony, where someone in the jail says, 670 00:41:19,960 --> 00:41:23,680 Speaker 1: my cellmate told me X, Y, and Z? Sometimes it's unreliable, 671 00:41:23,760 --> 00:41:27,840 Speaker 1: sometimes it yields fruit. Do we require a pre-trial 672 00:41:27,880 --> 00:41:32,600 Speaker 1: hearing every time because it might not be reliable? So 673 00:41:32,880 --> 00:41:36,560 Speaker 1: this is an issue that would overturn the way the 674 00:41:36,600 --> 00:41:39,799 Speaker 1: whole court system works. And so, at least for the 675 00:41:39,880 --> 00:41:45,840 Speaker 1: time being, the legal system knows that eyewitness testimony is terrible. 676 00:41:45,880 --> 00:41:49,879 Speaker 1: It can be very unreliable, and yet with all the caveats, 677 00:41:50,360 --> 00:41:53,680 Speaker 1: there's no choice but to let it remain. So in 678 00:41:53,840 --> 00:41:57,759 Speaker 1: closing, your memory is not necessarily what you think it is. 679 00:41:58,239 --> 00:42:01,799 Speaker 1: Memory is not writing down what happened in zeros and 680 00:42:01,880 --> 00:42:05,880 Speaker 1: ones and then reading that back out.
There are many 681 00:42:05,920 --> 00:42:09,560 Speaker 1: things that get in the way of good encoding and retrieval. 682 00:42:09,880 --> 00:42:14,360 Speaker 1: I really recommend Jennifer Thompson and Ronald Cotton's book. For Jennifer, 683 00:42:14,480 --> 00:42:17,560 Speaker 1: this was a complete pouring out of her guts on 684 00:42:17,600 --> 00:42:21,280 Speaker 1: the table, a cathartic enterprise where she had to allow 685 00:42:21,360 --> 00:42:24,000 Speaker 1: that she put a man in jail for eleven years 686 00:42:24,520 --> 00:42:27,080 Speaker 1: based on her certainty that that was the face she 687 00:42:27,120 --> 00:42:30,600 Speaker 1: had looked into. She knew it was Ronald Cotton, and 688 00:42:30,680 --> 00:42:33,240 Speaker 1: she wanted him to die. That's how she felt about 689 00:42:33,239 --> 00:42:36,280 Speaker 1: this man. And then she found out that she was wrong. 690 00:42:36,880 --> 00:42:39,120 Speaker 1: So the book is about what it's like from the 691 00:42:39,160 --> 00:42:42,360 Speaker 1: inside to believe you know, and all the steps that 692 00:42:42,480 --> 00:42:45,600 Speaker 1: happened along the way that manipulated her memory. So to 693 00:42:45,640 --> 00:42:48,960 Speaker 1: summarize what we talked about today, eyewitness testimony has a 694 00:42:49,080 --> 00:42:53,200 Speaker 1: terrific sway on jurors, and yet it is variable in 695 00:42:53,239 --> 00:42:56,719 Speaker 1: its accuracy. Now, just for clarity, it's not that eyewitness 696 00:42:56,800 --> 00:43:00,360 Speaker 1: testimony always has to be wrong. Plenty of times someone 697 00:43:00,480 --> 00:43:04,600 Speaker 1: IDs someone and it's correct. But memory is not like 698 00:43:04,600 --> 00:43:08,600 Speaker 1: a video camera. It is a reconstruction, and many factors 699 00:43:08,920 --> 00:43:13,080 Speaker 1: worsen the encoding and manipulate the retrieval.
As a result, 700 00:43:13,239 --> 00:43:17,960 Speaker 1: we cannot say that confidence and accuracy are tightly linked, 701 00:43:18,120 --> 00:43:21,760 Speaker 1: especially as time goes on. Eyewitness testimony is not going 702 00:43:21,760 --> 00:43:25,280 Speaker 1: away from the courtroom, because often it's the only evidence 703 00:43:25,320 --> 00:43:27,840 Speaker 1: that we can bring to bear in a case. So, 704 00:43:28,080 --> 00:43:31,080 Speaker 1: even though it may be the worst technology that we 705 00:43:31,200 --> 00:43:35,520 Speaker 1: allow in the courtrooms, it is presumably here to stay. 706 00:43:35,600 --> 00:43:39,360 Speaker 1: But remember, like much about our perception of the world 707 00:43:39,680 --> 00:43:43,560 Speaker 1: that I'll be discussing throughout these episodes, just because you 708 00:43:43,640 --> 00:43:48,000 Speaker 1: believe something to be true doesn't necessitate that it is. 709 00:43:52,680 --> 00:43:55,960 Speaker 1: Go to eagleman dot com slash podcast for more information 710 00:43:56,280 --> 00:43:59,000 Speaker 1: and to find further readings and to see some photographs. 711 00:43:59,520 --> 00:44:02,880 Speaker 1: Send me an email at podcasts at eagleman dot com 712 00:44:02,880 --> 00:44:06,160 Speaker 1: with questions or discussion, and I'll be making an episode 713 00:44:06,200 --> 00:44:11,160 Speaker 1: soon in which I address those. Until next time, I'm 714 00:44:11,239 --> 00:44:14,040 Speaker 1: David Eagleman, and this is Inner Cosmos.