Welcome to Stuff to Blow Your Mind, a production of iHeartRadio's How Stuff Works.

Hey you, welcome to Stuff to Blow Your Mind. My name is Robert Lamb.

And I'm Joe McCormick, and we're back today with part two of our exploration of facial recognition machinery. Last time, of course, we talked about some tech biz world stuff that may be highly relevant to your life, especially in the near future. We talked about an artificial intelligence company that was recently profiled in the New York Times. They stole your face right off your head, scraped it from the internet, and now they're selling that to law enforcement as a tool, supposedly for identifying people with a high rate of accuracy, linking your anonymous face to all of the digital information that's out there about you.
Long story short, we're all boned unless we actually put into place various laws and protections that either keep these technologies from fully coming online or make sure that they are restricted from destroying the privacy, at least, of private individuals. And we'll talk more about that aspect of the subject, I think, in the next episode, when we get more into the modern technology. Today, we wanted to focus more on the biological world of facial recognition: what's been learned in recent decades in psychology and neuroscience about the recognition of faces by animals like us.

Right, because ultimately, I guess the counterargument is: hey, we're just trying to teach computers and phones to do what humans do and what animals can do, and that is look at a face and respond to it, identify the individual behind that face.

Right. And while that might be scary as a capability for a machine to have, it's something that's part of our survival history and an important part of our social lives.
Oh yeah, because we go around every day, we're walking around, we're driving, we're, you know, in an exercise class, etcetera, and our brain is engaging in that exercise of: which human is that? Do I know that human? Wait, I think I know that human. Wait, further analysis reveals I do not.

It's a really funny thing, actually, when you notice how much your brain is just going, who's that? Is that...? Yeah, it's a ridiculous amount of your processing power that's eaten up with that narrative.

Yeah. In fact, that's why solitude is sometimes nice, because it just removes us from that exercise. Now, granted, you could have too much solitude, and I guess maybe the brain ends up using all that energy that it would use towards identifying, or trying to identify, strangers towards new and destructive things. But yeah, for the most part, it is an important part of making your way around human society. Now, at the risk of sounding like I'm making excuses, I gotta say, man, this is a complicated subject.
This is one of those where the deeper I dug into it, the more and more it just seemed like we were missing out on. So I think we just have to preface this by saying it's impossible for us to do the whole subject of biological facial recognition justice in this episode. We'll do our best in a reasonable length of time.

Yeah, I find it's easy to sort of glimpse the complexity of it when you engage in exercises like, say, attempting to draw a face that you know. And granted, that involves artistic ability and talent, and sometimes that talent is underdeveloped. But still, even without having that talent, I find just the mental exercise of trying to figure it out: okay, if I was to draw Joe's face, wait, what does Joe look like again? Okay, I have to form the picture in my mind, and then I second-guess it. I'm like, is that really what Joe looks like? And it's even harder if I'm not physically in the room with that individual. Does he really have horns and pointed teeth like that?
Yeah. So that's one side, but there's also just the idea of recalling faces. And granted, we're dragging in the complexity of memory when we're doing that, but I think it also hints at how difficult this is, to really unwrap what happens when we look at another face and identify it, much less when we recall it from memory.

Now, we've discussed face perception in the brain before, for example in our episodes on face blindness and in an episode called The Doppelganger Network. And in these previous episodes, something that we definitely talked about was the history of how our understanding of facial recognition in the brain was illuminated by studying cases of people with malfunctions of facial recognition in one way or another, primarily with the condition that we talked about in the face blindness episode, known as face blindness or prosopagnosia. It's a condition with a somewhat misleading name, if you go with face blindness, because people with face blindness, I think it would be best explained by saying, actually see faces just fine.
The real issue is that people with this condition have difficulty recognizing faces, not seeing them.

Right. Like, one example I always come back to, and I think I've probably brought this up on the show before, is that there's an excellent episode of that television series Hannibal, about Hannibal Lecter, in which there's a character that has face blindness. And when they behold Hannibal Lecter in a key scene, all they see is, like, a featureless flesh mask, because it's like they can't see the face at all. Based on the material we've looked at and the accounts that we've read, that is not what face blindness is. It sounds like the experience of face blindness is more akin to, say, when I look at some vegetation and I ask myself, is that poison ivy? I know I've looked at a picture of poison ivy. I'm not sure if that's poison ivy or not. It's not like I don't see a plant.
I just cannot identify it compared to other plants of similar form and function. And therefore I have to fall back on: okay, I'm gonna try and remember, what are the features? Leaves of three, let it be. How many leaves does this have? And I start to have to engage in a different kind of cognitive exercise to try and make a positive identification.

Yeah, I mean, I think it might be an even more complicated task than that. It's like people who have typical powers of facial recognition don't even recognize what a superpower this is that comes effortlessly. The point of comparison I've used before, and I think I heard back from some people after that episode saying it was a good one, was the idea of holly bushes. Like, if you look at one holly bush, you can see it just fine. You can note all the colors and the shapes and all that. But imagine you're walking down the street and you happen to pass by a place where that same holly bush you looked at earlier has been, like, dug up and replanted somewhere else. Would you notice it was the same bush?
I mean, it looks like just another bush, right? Unless you were engaging in a far more tedious exercise, like counting the branches on the first bush, you know, really getting in there, or marking it with a Sharpie, that sort of thing.

Exactly. Yeah, because our brains are not specially wired to casually notice and remember minor visual differences in individual plants of the same species. But it appears that typical human brains are specially wired to notice and remember minor visual differences in the hundreds of, honestly, pretty similar oblong orbs of meat and teeth that we interact with every day.

Yeah, I mean, because a lot of faces are similar, you know. And that's often where we get that initial misidentification, where we glance and we think we see somebody we know, but then we realize we don't. And occasionally you'll get that kind of, like, triple-take moment where, at first glance, it seems right, and then at a second glance it seems almost right, and then you realize there's just a very similar-looking person to someone that you know, someone you've encountered before.
But this is in fact a strange... Do you have that one person who you see doppelgangers of all the time, like one specific friend or celebrity that you always think you see somewhere?

I guess. I mean, there are certain looks that are common, certain styles of dressing that are common.

I've got a very weird one. Do you want to hear it?

Okay.

So for some reason, I keep thinking that I see the American physician and geneticist Francis Collins everywhere, the guy who worked on the Human Genome Project. I've never met him. I've just seen a few pictures of him around, and I see, like, basically an older white guy with a mustache and glasses, and I think, is that Francis Collins? I don't know why.

Interesting. I mean, there are people that I'm on, like, heightened alertness for. Mainly, for instance, your boss. You know, I think this is true of everyone, for the most part: you don't want to run into your boss.
At, say, the grocery store, because the grocery store, first of all, is an awkward place to run into anybody. I just ran into a coworker at the grocery store the other day. The worst. Great coworker, nothing against this person at all, but when I saw them, I was like, ah, because it's like, let's have this awkward exchange now, and let's do it again in one and a half minutes on the next aisle, and then let's do it another time. It's just a terrible exercise. And then, you know, your boss brings in additional complexities, no matter how wonderful your boss happens to be. So it results, at least in my weird mind, in me being, like, hyper-alert: is my boss here? Is a coworker here? I must hide if I see them, because I want to spare us both the awkwardness of running into each other. And that's just around the office, right?

So yeah, telling one human apart from another is obviously a relevant survival skill.
So it's something that our primate brains developed a unique capacity for, especially by means of recognizing the visual features of the face. And in people who have face blindness, or prosopagnosia, this recognition capacity has broken down, often due to some kind of brain injury or lesion. To the person with severe prosopagnosia, human faces can present a problem similar to what we were talking about earlier, like looking at a plant, you know, or looking at similar holly bushes. The person with prosopagnosia can see the face, but the faces don't really distinguish themselves from one another in memory, because of damage to the special recognition power. And as a side note, there's another interesting fact about face blindness, which is that people who have it also very often, not always, but pretty often, have a kind of location blindness as well. They can become easily lost because they don't remember visual characteristics of even familiar locations, like the building where they work or their house.

Yes, I seem to recall Oliver Sacks writing about this.

Totally.
Yes, the late author and neurologist, who had face blindness as well.

Yeah, he did. He wrote about it autobiographically, I believe, in a piece for The New Yorker that was really good, and that we talked about in our face blindness episode.

So, historically, autopsies on the brains of people with acquired prosopagnosia were very informative, because these brains almost always showed lesions on the bottom of a brain region known as the occipitotemporal cortex. If you want to picture this, it's kind of the rear middle underside of the brain: go down from your temples and then back a little bit, on the underside of the brain. This region of the brain is also known as the fusiform gyrus, and brain imaging like CT scans and MRI on living people also confirmed this correlation: lesions on the fusiform gyrus, on the underside of the occipitotemporal cortex, were commonly associated with the inability to recognize faces. Meanwhile, real-time brain imaging like fMRI has also associated face processing with increased activity in this part of the brain.
So if you look at a human face, your fusiform gyrus tends to get more blood flow, and for that reason, this region of the brain has come to be known as the fusiform face area. Now, it's really important to note that multiple networks of the brain are involved in face perception, and we'll talk about some more studies about that as we go on, but it appears that somehow the fusiform gyrus is especially important, and that damage to it can tend to cause this condition. Another way that I wanted to complicate the idea we were talking about earlier, that with prosopagnosia you can usually see faces correctly but have trouble recognizing them: there's one study I remember seeing video of, where there was a patient who had an electrode implanted directly to stimulate his fusiform gyrus, and he was awake and could talk about it in real time when there was a current applied to this part of the brain. He said that his vision remained normal except for people's faces, and when the current was applied, people's faces would tend to kind of metamorphose.
Like, their features would appear to move around and stuff.

Oh, interesting. Like, more so than just the experience of staring at somebody's face until the information starts, you know, to lose kind of consistency?

Oh, is that a thing you experience?

Yeah, to a certain extent. I mean, it's like saying a word until it loses meaning.

Yeah, yeah. I mean, it's kind of the effect, too, of looking in a mirror too long, you know, where you're not really presented with any new data. Like, you've absorbed all the data that is necessary to properly react and situate yourself in reality, but then you keep feeding on the same informational source, which, you know, is kind of like the road to madness, especially in situations of sensory deprivation.

I've certainly had the experience where I stare at somebody's face, or stare at, say, a dog's face, long enough that it doesn't look any different, but it starts to decohere as, like, the seat of the soul, and instead becomes textures of organs.

Do dogs have faces?

Of course they do.

I don't know, they don't really... What is wrong with you?
I don't think so. I mean, I don't think of cats as having faces either. They just kind of have the fronts of heads, you know. Humans have faces.

Where does the Cheshire Cat's grin live, if not on its face?

Well, it's a cartoon character. Cartoon characters have faces because they're made at least partially in our likeness.

I've just discovered something very sinister about you.

I guess a pug kind of has a face.

Definitely. We've bred the pug enough to where it is as close to having a face as any dog can really claim to.

Now, there's another interesting fact about biological face perception. I think I mentioned this in the last episode, but just to reiterate: the brain, it turns out, processes familiar versus unfamiliar faces very differently. Like, when a face is familiar, the brain is extremely good at recognizing it accurately, even under difficult viewing conditions: bad light, weird angles, partial view, and all that. Less familiar faces fail to be recognized under these same conditions. So what's going on with the brain here?
Well, just to reference one specific study, by Sofia M. Landi and Winrich A. Freiwald, published in Science in 2017, called "Two areas for familiar face recognition in the primate brain." The authors found, quote, "familiar faces recruited two hitherto unknown face areas at anatomically conserved locations within the perirhinal cortex and the temporal pole." So in fMRI, these two areas of the brain, but not the rest of the face-processing network, responded dramatically to familiar faces emerging from a blur, but they didn't show any special activity when presented with unfamiliar faces. So it sounds like the brain also recruits these special additional networks, in addition to the regular fusiform face area, for identification when it detects a more familiar face.

Now, of course, historically, evolutionarily, those familiar faces would be the faces of individuals that are part of our society, that are part of a close-knit group, or, I guess, potentially enemies that you've encountered physically in the past.
But the modern media version of that is that we have all these additional faces as well, like all the actors we've memorized from watching TV and movies and surfing IMDb, for example.

Yeah. Well, I think one thing that's important is that when a face is familiar, it tends to come with a very complex suite of emotional reactions that are implied by the face. You know, you see somebody and you know them to be an adversary, or you know them to be a family member or a friend, and you've got all these complex emotions that come out of this emotional response called familiarity. I'd imagine the brain's response to unfamiliar faces, or less familiar faces, tends to be more flat, probably, right? Like, there's less differentiation in the response.

Right, right. And there's probably a lot to be said, and this may be an area of separate study, about what happens when you encounter faces in real life that you have thus far only encountered via media. You know, I mean, it's a different scenario, if for nothing else, the lighting and the makeup are going to be different.
And 319 00:17:29,240 --> 00:17:32,440 Speaker 1: they're so short. Oh, that's the thing I'm surprised we've 320 00:17:32,480 --> 00:17:34,720 Speaker 1: never looked into before. There's got to be research on that. 321 00:17:34,760 --> 00:17:37,920 Speaker 1: Like why you assume that movie stars are seven 322 00:17:37,920 --> 00:17:40,080 Speaker 1: feet tall until you see them in person. Well, I 323 00:17:40,080 --> 00:17:43,840 Speaker 1: think it's because they're standing on apple boxes a lot 324 00:17:43,880 --> 00:17:47,680 Speaker 1: of the times. Um. Now, there's another interesting debate in 325 00:17:47,720 --> 00:17:50,440 Speaker 1: the history of face processing research that we've discussed on 326 00:17:50,480 --> 00:17:52,760 Speaker 1: the show once before. I wasn't able to find a 327 00:17:52,800 --> 00:17:55,160 Speaker 1: resolution here, but it is sort of a dispute 328 00:17:55,760 --> 00:17:59,399 Speaker 1: among these researchers. So to look at a foundational kind 329 00:17:59,440 --> 00:18:02,440 Speaker 1: of study here, there was a study published in Nature 330 00:18:02,440 --> 00:18:06,240 Speaker 1: Neuroscience in two thousand by Isabel Gauthier et al. And 331 00:18:06,280 --> 00:18:09,600 Speaker 1: the background here was that research had already shown that 332 00:18:09,960 --> 00:18:12,959 Speaker 1: people who had been trained to have an expertise in 333 00:18:13,280 --> 00:18:17,600 Speaker 1: previously unfamiliar objects called greebles, we'll come back to them 334 00:18:17,600 --> 00:18:21,000 Speaker 1: in a second, people who had that expertise would recruit 335 00:18:21,119 --> 00:18:23,879 Speaker 1: parts of the brain that are usually used in the 336 00:18:23,920 --> 00:18:27,560 Speaker 1: processing of faces, such as the fusiform gyrus and 337 00:18:27,600 --> 00:18:31,680 Speaker 1: the occipital lobe.
And so greebles are these weird little 338 00:18:31,760 --> 00:18:35,800 Speaker 1: chess piece like objects with abstract kind of goblin ears 339 00:18:35,840 --> 00:18:39,000 Speaker 1: and spikes and stuff. I really like the greebles, you know, 340 00:18:39,040 --> 00:18:44,159 Speaker 1: I was reading about greebles, and uh, greebles, under another 341 00:18:44,280 --> 00:18:47,320 Speaker 1: definition, and it's pretty closely related, I guess, 342 00:18:47,600 --> 00:18:49,840 Speaker 1: are also the little bits of plastic glued to the 343 00:18:49,880 --> 00:18:53,200 Speaker 1: tops of objects to make them seem more complex. Star Destroyer? Yeah, 344 00:18:53,240 --> 00:18:57,320 Speaker 1: the Star Destroyer, I guess, or what, the Death Star itself? 345 00:18:57,440 --> 00:19:01,280 Speaker 1: Or a great example, the background on Mystery Science 346 00:19:01,320 --> 00:19:05,640 Speaker 1: Theater 3000, at least for a number of seasons there. 347 00:19:06,040 --> 00:19:08,760 Speaker 1: If you look closely, you could recognize the 348 00:19:08,800 --> 00:19:11,800 Speaker 1: everyday objects that were serving as greebles, such as, I 349 00:19:11,800 --> 00:19:15,240 Speaker 1: think, a Millennium Falcon toy was back there as a greeble. 350 00:19:15,320 --> 00:19:18,280 Speaker 1: But yeah, the more junk that is glued to it, 351 00:19:19,040 --> 00:19:20,840 Speaker 1: the more it looks like it has a lot of 352 00:19:20,880 --> 00:19:24,000 Speaker 1: surface complexity to it. The Borg cube is another 353 00:19:24,000 --> 00:19:26,720 Speaker 1: example of this. It's not just a cube, which of 354 00:19:26,760 --> 00:19:30,280 Speaker 1: course is a model ship, but then they have all 355 00:19:30,320 --> 00:19:32,119 Speaker 1: these little bits on the outside of it and it 356 00:19:32,119 --> 00:19:34,920 Speaker 1: looks even more complicated.
Yeah, it's got texture that gives 357 00:19:34,920 --> 00:19:37,760 Speaker 1: it the illusion of functionality. In fact, it's just a 358 00:19:37,880 --> 00:19:41,119 Speaker 1: surface that hides nothing real behind it. Yes, 359 00:19:41,400 --> 00:19:43,200 Speaker 1: and a similar thing would be true of the greebles 360 00:19:43,280 --> 00:19:45,400 Speaker 1: used in these studies. So imagine 361 00:19:45,400 --> 00:19:48,800 Speaker 1: a little chess piece that's just got different kinds of 362 00:19:48,880 --> 00:19:51,959 Speaker 1: little spikes and features poking out of it. And so 363 00:19:52,000 --> 00:19:54,480 Speaker 1: you can train people on these things and say, you 364 00:19:54,640 --> 00:19:57,760 Speaker 1: learn the name for this greeble versus that greeble, and 365 00:19:57,760 --> 00:19:59,720 Speaker 1: they'll get names for, you know, a group of 366 00:19:59,760 --> 00:20:02,760 Speaker 1: them over time. If people train with objects like this, 367 00:20:02,880 --> 00:20:05,040 Speaker 1: they can learn the names of the different greebles. They 368 00:20:05,119 --> 00:20:08,680 Speaker 1: look mostly indistinguishable if you haven't trained with them. Even 369 00:20:08,720 --> 00:20:10,520 Speaker 1: though, again, these are just made for 370 00:20:10,600 --> 00:20:13,880 Speaker 1: the experiment. There's no pre existing greeble set. Right, right, 371 00:20:13,880 --> 00:20:16,240 Speaker 1: but you can train people. Right. And so what previous 372 00:20:16,280 --> 00:20:19,119 Speaker 1: research had found is that people who get trained on 373 00:20:19,160 --> 00:20:22,480 Speaker 1: these greebles look at the greebles and it seems to 374 00:20:22,520 --> 00:20:24,600 Speaker 1: recruit the parts of the brain that are usually used 375 00:20:24,600 --> 00:20:27,760 Speaker 1: for face processing.
This study from two thousand I mentioned 376 00:20:27,800 --> 00:20:31,879 Speaker 1: extended this principle to other areas of visual expertise, including 377 00:20:32,000 --> 00:20:36,080 Speaker 1: birds and cars. So it found that when people had 378 00:20:36,119 --> 00:20:41,240 Speaker 1: acquired an expertise for birds and cars, the brain recruited 379 00:20:41,320 --> 00:20:44,920 Speaker 1: more of the face processing associated networks of the brain, 380 00:20:45,359 --> 00:20:48,600 Speaker 1: such as the fusiform gyrus, when looking at the objects 381 00:20:48,600 --> 00:20:52,119 Speaker 1: they were experts in. Interesting, okay. And so at some 382 00:20:52,160 --> 00:20:54,840 Speaker 1: points this two thousand study has been used to argue 383 00:20:54,880 --> 00:20:57,800 Speaker 1: that maybe the fusiform face area of the brain is 384 00:20:57,880 --> 00:21:01,520 Speaker 1: more of a visual expertise center than a face center. 385 00:21:02,320 --> 00:21:04,280 Speaker 1: But I think there's also a lot of evidence that's 386 00:21:04,320 --> 00:21:06,199 Speaker 1: going the other way, that it has a natural and 387 00:21:06,240 --> 00:21:09,560 Speaker 1: somewhat dedicated role in face perception. Uh, this other 388 00:21:09,640 --> 00:21:13,120 Speaker 1: side, saying that it's naturally dedicated to faces, is known 389 00:21:13,160 --> 00:21:17,400 Speaker 1: as the domain specificity hypothesis. Uh. So there's stuff going 390 00:21:17,440 --> 00:21:19,520 Speaker 1: back and forth, but just to cite another one that I 391 00:21:19,600 --> 00:21:23,080 Speaker 1: thought was an interesting follow up to that two thousand study: 392 00:21:23,160 --> 00:21:27,040 Speaker 1: this one was by Yaoda Xu, uh, called Revisiting the 393 00:21:27,160 --> 00:21:30,320 Speaker 1: Role of the Fusiform Face Area in Visual Expertise, published 394 00:21:30,320 --> 00:21:34,080 Speaker 1: in Cerebral Cortex in two thousand five.
It followed up 395 00:21:34,080 --> 00:21:36,679 Speaker 1: from the two thousand study about birds and cars, asking 396 00:21:36,680 --> 00:21:40,200 Speaker 1: a reasonable question. The author here says, okay, if people 397 00:21:40,240 --> 00:21:43,720 Speaker 1: with expertise in birds and cars show increased activation of 398 00:21:43,760 --> 00:21:45,800 Speaker 1: the FFA when they look at birds and 399 00:21:45,840 --> 00:21:49,639 Speaker 1: cars specifically, what if this is, quote, due to experts 400 00:21:49,680 --> 00:21:54,200 Speaker 1: taking advantage of the faceness of the stimuli. After all, 401 00:21:54,560 --> 00:21:58,959 Speaker 1: birds have faces, and three quarter frontal views of cars 402 00:21:59,160 --> 00:22:02,080 Speaker 1: resemble faces. Which was funny, but I was like, 403 00:22:02,160 --> 00:22:04,320 Speaker 1: that's actually a good question. Well, I think the 404 00:22:04,560 --> 00:22:07,360 Speaker 1: faces of cars came up on a previous episode. We 405 00:22:07,359 --> 00:22:09,680 Speaker 1: were talking about our experience as a driver 406 00:22:09,760 --> 00:22:13,560 Speaker 1: of a car and identifying with cars, about, you know, 407 00:22:13,760 --> 00:22:16,240 Speaker 1: the headlights and the grill. It looks like a face. 408 00:22:16,920 --> 00:22:18,960 Speaker 1: I don't know about birds having faces. I think 409 00:22:18,960 --> 00:22:22,840 Speaker 1: I'm also, I find it hard to believe that 410 00:22:22,840 --> 00:22:24,680 Speaker 1: I'm looking at a bird's face when I'm looking 411 00:22:24,720 --> 00:22:27,000 Speaker 1: at the front of its head. So cats, no faces, 412 00:22:27,080 --> 00:22:30,840 Speaker 1: dogs, no faces, birds, no faces. I mean, I guess 413 00:22:30,880 --> 00:22:33,160 Speaker 1: the chimpanzee has a face. Gorillas, you know, I would 414 00:22:33,160 --> 00:22:36,520 Speaker 1: give that.
I would attribute faces to, you know, 415 00:22:36,560 --> 00:22:40,399 Speaker 1: the primates, especially higher primates. I don't know about lesser 416 00:22:40,400 --> 00:22:43,679 Speaker 1: primates though, you know, uh, I have to think about that. Wow, 417 00:22:44,800 --> 00:22:47,240 Speaker 1: this is blowing my mind right here. I mean, does 418 00:22:47,280 --> 00:22:51,679 Speaker 1: a shark have a face? Yeah, the shark has got eyes, a mouth. Yeah, okay. 419 00:22:52,160 --> 00:22:56,400 Speaker 1: Clams don't have faces, no, no, okay. Oysters don't have faces. 420 00:22:57,800 --> 00:22:59,400 Speaker 1: All right. Well, I would be interested to hear from 421 00:22:59,440 --> 00:23:03,160 Speaker 1: listeners about this, whether I'm alone in how I feel 422 00:23:03,200 --> 00:23:06,760 Speaker 1: about faces. We'll see. So, the author here mentions that 423 00:23:07,119 --> 00:23:11,080 Speaker 1: the effects could also be due to attentional modulation, in 424 00:23:11,119 --> 00:23:14,480 Speaker 1: other words, to differences in how experts versus non experts 425 00:23:14,720 --> 00:23:17,040 Speaker 1: paid attention to what they were looking at. That also 426 00:23:17,040 --> 00:23:21,600 Speaker 1: seems like a reasonable explanation. Uh. And so they ultimately 427 00:23:21,600 --> 00:23:24,320 Speaker 1: find here, quote, in this study, using both side view 428 00:23:24,359 --> 00:23:28,360 Speaker 1: car images that do not resemble faces and bird images 429 00:23:28,440 --> 00:23:32,280 Speaker 1: in an event related fMRI design that minimizes attentional 430 00:23:32,320 --> 00:23:35,879 Speaker 1: modulation, an expertise effect in the right FFA is 431 00:23:35,920 --> 00:23:39,720 Speaker 1: observed in both car and bird experts, although a baseline 432 00:23:39,720 --> 00:23:44,000 Speaker 1: bias makes the bird expertise effect less reliable.
These results 433 00:23:44,000 --> 00:23:47,040 Speaker 1: are consistent with those of Gauthier et al., and suggest 434 00:23:47,080 --> 00:23:49,960 Speaker 1: the involvement of the right FFA 435 00:23:50,040 --> 00:23:53,399 Speaker 1: in processing non face expertise visual stimuli. Okay, so this 436 00:23:53,440 --> 00:23:56,000 Speaker 1: one seems to hold up the two thousand study. But 437 00:23:56,119 --> 00:23:58,080 Speaker 1: I said that, you know, there was a dispute and 438 00:23:58,080 --> 00:24:02,160 Speaker 1: that it's complicated. I found a number of other sources saying that, 439 00:24:02,200 --> 00:24:04,640 Speaker 1: you know, there's all this independent evidence that the brain 440 00:24:04,720 --> 00:24:07,640 Speaker 1: has a dedicated role, uh, this region of the brain 441 00:24:07,720 --> 00:24:09,800 Speaker 1: or these networks in the brain have dedicated roles in 442 00:24:09,840 --> 00:24:14,359 Speaker 1: face perception, the domain specificity hypothesis, and other studies have 443 00:24:14,359 --> 00:24:18,119 Speaker 1: found conflicting results and argued against the expertise theory. For example, 444 00:24:18,119 --> 00:24:21,120 Speaker 1: there was one in two thousand seven in Cognition by 445 00:24:21,280 --> 00:24:25,600 Speaker 1: Rachel Robbins and Elinor McKone, uh, that found, basically, dog 446 00:24:25,640 --> 00:24:29,359 Speaker 1: experts showed no special face like processing for dogs in 447 00:24:29,480 --> 00:24:33,520 Speaker 1: non face identification tasks. Another thing I was reading is 448 00:24:33,560 --> 00:24:36,960 Speaker 1: some researchers arguing that the engagement of the fusiform face 449 00:24:37,000 --> 00:24:40,879 Speaker 1: area in areas of visual expertise was still somehow maybe 450 00:24:40,960 --> 00:24:44,800 Speaker 1: just an artifact of how attention was being stimulated in 451 00:24:44,840 --> 00:24:48,240 Speaker 1: those test conditions.
Uh. So, I'm not sure if the 452 00:24:48,280 --> 00:24:51,439 Speaker 1: opinion of neuroscientists has shifted largely to one side or 453 00:24:51,440 --> 00:24:53,520 Speaker 1: the other of this debate in the years since. It 454 00:24:53,600 --> 00:24:56,040 Speaker 1: does seem like there's a very solid consensus that at 455 00:24:56,080 --> 00:25:00,639 Speaker 1: least some inherent domain specificity exists for the FFA, 456 00:25:00,920 --> 00:25:04,520 Speaker 1: that at least in some way it is naturally dedicated to faces. 457 00:25:04,560 --> 00:25:06,520 Speaker 1: But at least as far as I could tell, it 458 00:25:06,720 --> 00:25:09,600 Speaker 1: could be possible to split the difference here. Like maybe 459 00:25:10,080 --> 00:25:13,119 Speaker 1: it could be that there's a face perception network of 460 00:25:13,160 --> 00:25:17,639 Speaker 1: the brain shaped by evolution quite specifically to recognize faces, 461 00:25:17,720 --> 00:25:20,400 Speaker 1: and maybe it also just happens to be a good 462 00:25:20,440 --> 00:25:23,440 Speaker 1: part of the brain to recruit for minute visual discrimination 463 00:25:23,720 --> 00:25:26,720 Speaker 1: in other areas that the brain becomes highly adapted to 464 00:25:26,880 --> 00:25:29,680 Speaker 1: through training. Yeah, either way you shake it, I mean, 465 00:25:29,680 --> 00:25:33,840 Speaker 1: the take home is that faces are incredibly important, right, 466 00:25:34,200 --> 00:25:37,280 Speaker 1: and we see that reflected in the neural machinery devoted 467 00:25:37,320 --> 00:25:39,359 Speaker 1: to it. I think that's exactly right. It's a good point.
468 00:25:39,400 --> 00:25:42,080 Speaker 1: So either side of this debate, whichever one is right, 469 00:25:42,520 --> 00:25:46,399 Speaker 1: it's either that we've got this inbuilt recognition capacity for 470 00:25:46,480 --> 00:25:50,840 Speaker 1: faces that makes faces uniquely special, or we've got a 471 00:25:50,920 --> 00:25:55,359 Speaker 1: visual expertise center that in most people becomes most highly 472 00:25:55,359 --> 00:25:57,960 Speaker 1: attuned at looking at faces. And the only things that 473 00:25:58,040 --> 00:26:01,040 Speaker 1: really rival that engagement of the visual expertise center 474 00:26:01,400 --> 00:26:03,960 Speaker 1: is like when you get super into a subject, like 475 00:26:04,000 --> 00:26:07,320 Speaker 1: you're obsessed with birds, right, and it becomes the same 476 00:26:07,320 --> 00:26:10,520 Speaker 1: sort of visual experience too, where, you know, you turn 477 00:26:10,560 --> 00:26:13,359 Speaker 1: to somebody, say it's airplanes, um, where you're like, I 478 00:26:13,480 --> 00:26:14,960 Speaker 1: wonder what kind of airplane that is? You turn to 479 00:26:15,000 --> 00:26:17,840 Speaker 1: your buddy who's an aviation geek, and with 480 00:26:17,840 --> 00:26:19,760 Speaker 1: just a glance, they're like, oh yeah, that's 481 00:26:19,800 --> 00:26:21,400 Speaker 1: a Spitfire. In the same way that you might 482 00:26:21,440 --> 00:26:24,320 Speaker 1: turn and say, oh yeah, that's Doug. Right. Yeah, when 483 00:26:24,359 --> 00:26:27,119 Speaker 1: somebody's got visual expertise and you ask them to 484 00:26:27,160 --> 00:26:30,439 Speaker 1: recognize something, you notice how they emotionally light up the 485 00:26:30,480 --> 00:26:32,359 Speaker 1: same way that, like, you or I do when we 486 00:26:32,400 --> 00:26:35,680 Speaker 1: suddenly recognize an actor in a B movie. You see 487 00:26:35,680 --> 00:26:40,280 Speaker 1: that comparison.
Yeah, yeah, yeah, exactly. Like it's, um, I mean, 488 00:26:40,520 --> 00:26:42,600 Speaker 1: it's like, this is what I've been training for. Yeah, 489 00:26:42,800 --> 00:26:47,040 Speaker 1: that's Robert Englund out of the Freddy makeup. Is that 490 00:26:47,440 --> 00:26:50,320 Speaker 1: a more generalized reaction? Is that not just us, that 491 00:26:50,400 --> 00:26:54,399 Speaker 1: people don't just look around for people who have 492 00:26:54,440 --> 00:26:57,720 Speaker 1: familiar faces and recognize them, but get really excited when 493 00:26:57,760 --> 00:27:02,200 Speaker 1: they suddenly recognize somebody? Yeah, I think so. I mean, 494 00:27:02,200 --> 00:27:04,480 Speaker 1: I think I see it in other people. So I 495 00:27:04,520 --> 00:27:07,879 Speaker 1: presume that it is part of the, you know, normal 496 00:27:07,920 --> 00:27:10,800 Speaker 1: experience or the, you know, the traditional experience. Because I 497 00:27:10,840 --> 00:27:14,000 Speaker 1: guess if you were to apply it back to, again, 498 00:27:14,160 --> 00:27:19,760 Speaker 1: like a small society model, it would be recognizing a friend, right? Like, 499 00:27:19,800 --> 00:27:23,359 Speaker 1: on some level, the actor that we associate with 500 00:27:23,400 --> 00:27:27,119 Speaker 1: films that we like, we value 501 00:27:27,160 --> 00:27:29,280 Speaker 1: them on some level. It's almost like they are a friend, 502 00:27:29,280 --> 00:27:32,480 Speaker 1: and spotting them in another film is like spotting a friend 503 00:27:32,880 --> 00:27:35,400 Speaker 1: again within the context of films. It might be different 504 00:27:35,440 --> 00:27:36,639 Speaker 1: if you saw him on the street, because you'd 505 00:27:36,680 --> 00:27:38,560 Speaker 1: be like, oh, it's that actor. That's weird, 506 00:27:38,600 --> 00:27:41,560 Speaker 1: that's that actor from those B movies I've seen.
Um, 507 00:27:41,600 --> 00:27:44,040 Speaker 1: you know, there's some, like, and then I'll be 508 00:27:44,040 --> 00:27:46,560 Speaker 1: thinking about them covered in blood or something. But, you know, 509 00:27:46,560 --> 00:27:48,400 Speaker 1: but within the context of the films, it's like, oh, 510 00:27:48,640 --> 00:27:51,040 Speaker 1: my friend is in this. I don't remember their name, 511 00:27:51,080 --> 00:27:52,359 Speaker 1: but they were in, you know, a whole bunch of 512 00:27:52,400 --> 00:27:54,800 Speaker 1: old British TV shows, and I 513 00:27:54,840 --> 00:27:59,480 Speaker 1: feel, you know, the arousal of recognizing them. Well, 514 00:27:59,520 --> 00:28:02,560 Speaker 1: I think there is some evidence that there are extreme 515 00:28:02,600 --> 00:28:05,600 Speaker 1: similarities in the way the brain reacts to images of 516 00:28:05,640 --> 00:28:08,480 Speaker 1: celebrities and the way the brain reacts to images of 517 00:28:08,520 --> 00:28:12,000 Speaker 1: known friends. I mean, there's a lot of the same 518 00:28:12,040 --> 00:28:15,199 Speaker 1: stuff going on. So I think when we 519 00:28:15,400 --> 00:28:17,359 Speaker 1: see the same face over and over again on a 520 00:28:17,440 --> 00:28:20,000 Speaker 1: TV, the brain sort of treats it as if we're 521 00:28:20,000 --> 00:28:22,600 Speaker 1: seeing the same face over and over again next to 522 00:28:22,640 --> 00:28:24,520 Speaker 1: the fire. Yeah, I mean, that's why 523 00:28:24,600 --> 00:28:27,600 Speaker 1: they called the television show Friends. That's why people watched 524 00:28:27,600 --> 00:28:30,480 Speaker 1: it religiously.
Well, I mean, there's articles today about 525 00:28:30,520 --> 00:28:33,919 Speaker 1: like how important the Netflix deal was, to have 526 00:28:34,080 --> 00:28:37,160 Speaker 1: Friends on Netflix, the TV show and the concept 527 00:28:37,240 --> 00:28:39,840 Speaker 1: of friends both, because I think they're the same. I 528 00:28:39,840 --> 00:28:43,160 Speaker 1: think, based on the way they say people consume the show, 529 00:28:43,680 --> 00:28:46,440 Speaker 1: it is like the familiarity of it. It is encountering 530 00:28:46,440 --> 00:28:49,640 Speaker 1: these same people over and over again. Uh, it is 531 00:28:49,760 --> 00:28:51,880 Speaker 1: like they are your friends. And I mean, I 532 00:28:52,360 --> 00:28:54,800 Speaker 1: never really watched that particular show, but I remember having 533 00:28:54,840 --> 00:28:57,480 Speaker 1: like a similar relationship with, I think it was NewsRadio, 534 00:28:57,600 --> 00:28:59,760 Speaker 1: back in the day, and I would watch it 535 00:28:59,760 --> 00:29:01,200 Speaker 1: when I was in college, and it's like I could 536 00:29:01,200 --> 00:29:05,000 Speaker 1: turn it on and, uh, in a sense, they were like 537 00:29:05,040 --> 00:29:08,800 Speaker 1: my TV friends. I think there's a lot 538 00:29:08,880 --> 00:29:11,040 Speaker 1: to that. I think that goes on with, say, The 539 00:29:11,120 --> 00:29:14,479 Speaker 1: Office. Today we read about like how much people stream 540 00:29:14,560 --> 00:29:16,840 Speaker 1: The Office, and I think a lot of it's not 541 00:29:16,920 --> 00:29:19,560 Speaker 1: even, I mean, they're not even like trying to see 542 00:29:19,560 --> 00:29:22,160 Speaker 1: how the plot plays out anymore. It might not even 543 00:29:22,240 --> 00:29:25,239 Speaker 1: necessarily be about the comedy.
It's just like, you know, 544 00:29:25,360 --> 00:29:29,000 Speaker 1: it's a very comfortable, cozy kind of place you can 545 00:29:29,000 --> 00:29:32,000 Speaker 1: go, with familiar faces. Of course, we'll have to leave 546 00:29:32,000 --> 00:29:34,280 Speaker 1: the details of that to The Journal of Sitcom 547 00:29:34,320 --> 00:29:37,280 Speaker 1: Studies to be reviewed later on. Maybe we need to 548 00:29:37,280 --> 00:29:42,920 Speaker 1: take a break. Let's do it. Alright, we're back. 549 00:29:43,480 --> 00:29:46,720 Speaker 1: We're talking about facial recognition. More specifically, we're talking about 550 00:29:46,800 --> 00:29:51,680 Speaker 1: the facial recognition that occurs inside the human brain. Yeah, 551 00:29:51,720 --> 00:29:54,040 Speaker 1: and in the brains of other animals, though there are 552 00:29:54,040 --> 00:29:57,320 Speaker 1: some obvious parallels there. So we discussed at the beginning 553 00:29:57,360 --> 00:29:59,920 Speaker 1: how this story just gets more and more complicated 554 00:30:00,000 --> 00:30:01,840 Speaker 1: the more you look at it. And I want 555 00:30:01,840 --> 00:30:05,440 Speaker 1: to complicate things further with a really interesting article that 556 00:30:05,480 --> 00:30:08,400 Speaker 1: I was reading in the journal Nature, 557 00:30:08,440 --> 00:30:11,320 Speaker 1: in their news section. They had a news feature by 558 00:30:11,320 --> 00:30:14,040 Speaker 1: a writer named Alison Abbott which was about the work 559 00:30:14,160 --> 00:30:19,320 Speaker 1: of the Caltech neuroscientist Doris Tsao, who studies facial recognition. 560 00:30:19,960 --> 00:30:22,000 Speaker 1: And so I'll try to give a brief summary of this.
561 00:30:22,080 --> 00:30:25,440 Speaker 1: So basically, in the late two thousands, uh, Tsao and 562 00:30:25,520 --> 00:30:30,520 Speaker 1: her colleagues were doing repeated brain imaging and targeted electrode 563 00:30:30,560 --> 00:30:34,200 Speaker 1: stimulation studies on the brains of macaques, a type of 564 00:30:34,200 --> 00:30:38,720 Speaker 1: Old World monkey, which allowed them to identify six different 565 00:30:38,880 --> 00:30:41,920 Speaker 1: patches of a part of the brain called the inferior 566 00:30:41,960 --> 00:30:45,720 Speaker 1: temporal cortex on each side of the macaque brain, which 567 00:30:45,760 --> 00:30:50,000 Speaker 1: would react specifically when the monkey saw a face of 568 00:30:50,040 --> 00:30:53,040 Speaker 1: a human or another monkey, but not when looking at 569 00:30:53,040 --> 00:30:57,160 Speaker 1: other objects like a spoon. And stimulation of one of 570 00:30:57,160 --> 00:31:00,640 Speaker 1: these patches would cause activation in all the others. They 571 00:31:00,640 --> 00:31:04,760 Speaker 1: were sort of chained together for simultaneous neural activity. And 572 00:31:04,800 --> 00:31:08,960 Speaker 1: what the researchers learned over time was that individual cells 573 00:31:09,280 --> 00:31:14,400 Speaker 1: in individual patches tended to be specialized to specific parts 574 00:31:14,440 --> 00:31:18,560 Speaker 1: of faces. So one spot in this matrix would respond 575 00:31:18,680 --> 00:31:22,960 Speaker 1: by firing faster consistently based on how far apart the 576 00:31:23,080 --> 00:31:25,560 Speaker 1: eyes were. Like, say, if the eyes are farther apart, 577 00:31:25,640 --> 00:31:29,080 Speaker 1: it fires faster. If they're closer together, it fires slower. 578 00:31:29,800 --> 00:31:33,360 Speaker 1: And then others would respond specifically to changes in other 579 00:31:33,440 --> 00:31:37,040 Speaker 1: features like the size of the nose or the irises.
580 00:31:37,960 --> 00:31:40,800 Speaker 1: And they used this knowledge to create what has been 581 00:31:40,800 --> 00:31:43,920 Speaker 1: called now a face code, a kind of top level 582 00:31:44,000 --> 00:31:49,200 Speaker 1: system for sorting faces along these major dimensions that the 583 00:31:49,240 --> 00:31:52,960 Speaker 1: brain responds to in a specialized way. So, you know, 584 00:31:53,040 --> 00:31:55,600 Speaker 1: kind of like if you're creating a character in a 585 00:31:55,600 --> 00:31:59,360 Speaker 1: wrestling video game, you've got like maybe sixty different values 586 00:31:59,400 --> 00:32:02,680 Speaker 1: that you can adjust the sliders on. And so it 587 00:32:02,680 --> 00:32:04,920 Speaker 1: turns out that the brain, at least according to this 588 00:32:05,000 --> 00:32:09,800 Speaker 1: research, appears to have individual neurons dedicated to each of 589 00:32:09,840 --> 00:32:13,520 Speaker 1: those sliders. So like, as the slider goes 590 00:32:13,560 --> 00:32:17,040 Speaker 1: from zero to one hundred, that individual neuron starts to 591 00:32:17,120 --> 00:32:20,400 Speaker 1: fire faster and faster. So you can see these like 592 00:32:20,480 --> 00:32:23,960 Speaker 1: coded regions of the brain that map to individual elements 593 00:32:24,000 --> 00:32:26,680 Speaker 1: within the face. Now, an interesting thing here was that 594 00:32:26,720 --> 00:32:30,640 Speaker 1: the outermost cells in the cortex seemed to respond to 595 00:32:30,680 --> 00:32:34,480 Speaker 1: the most obvious stimuli, such as like face shape, with, 596 00:32:34,880 --> 00:32:38,120 Speaker 1: you know, things like distance between the eyes or length 597 00:32:38,200 --> 00:32:42,440 Speaker 1: of the mouth, whereas deeper cells seem to focus more 598 00:32:42,480 --> 00:32:46,280 Speaker 1: on more minute data, like things about the texture of 599 00:32:46,360 --> 00:32:48,960 Speaker 1: the skin and stuff.
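[Editor's note: the slider analogy above can be sketched as a toy model. This is purely an illustration; the linear tuning rule, the zero-to-one-hundred scale, the base rate and gain numbers, and the feature names are all assumptions made for the sketch, not the actual model from Tsao's research.]

```python
# Toy sketch of the "face code" idea described above: a face is a set of
# feature "sliders" (assumed names and 0-100 scale, for illustration only),
# and each hypothetical neuron's firing rate grows linearly as its
# assigned slider moves from 0 toward 100.

def firing_rate(slider_value, base_rate=5.0, gain=0.5):
    """Assumed linear tuning: spikes/sec rises with the feature value."""
    return base_rate + gain * slider_value

# One face as a vector of feature sliders (values made up for the example).
face = {"eye_distance": 80, "nose_size": 20, "mouth_length": 55}

# One dedicated "neuron" per slider: the population response is the vector
# of per-feature rates, which is what would make the face decodable.
population_response = {feature: firing_rate(v) for feature, v in face.items()}
print(population_response)
```

Decoding a face from this toy population would just mean inverting the linear rule, reading each neuron's rate back into its slider value.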
Like, I guess to some extent 600 00:32:49,000 --> 00:32:52,040 Speaker 1: that lines up with our experience of glimpsing 601 00:32:52,120 --> 00:32:55,240 Speaker 1: somebody and then maybe doing that second look or that 602 00:32:55,360 --> 00:33:00,520 Speaker 1: more detailed look to follow up on the initial impression. Yes, yeah, 603 00:33:00,560 --> 00:33:04,680 Speaker 1: I think that's exactly right. But then, to read a 604 00:33:04,760 --> 00:33:07,400 Speaker 1: quote here, quote, the research seemed to point to a 605 00:33:07,440 --> 00:33:12,520 Speaker 1: mechanism by which individual cells in the cortex interpret increasingly 606 00:33:12,600 --> 00:33:17,800 Speaker 1: complex visual information until at the deepest points individual cells 607 00:33:18,120 --> 00:33:22,479 Speaker 1: code for particular people. And this goes with a finding 608 00:33:22,480 --> 00:33:26,920 Speaker 1: by a researcher named Rodrigo Quian Quiroga, who earlier in 609 00:33:26,960 --> 00:33:29,600 Speaker 1: the two thousands discovered something that was called in the 610 00:33:29,640 --> 00:33:34,040 Speaker 1: media Jennifer Aniston cells, uh, to come back to Friends, 611 00:33:34,080 --> 00:33:37,560 Speaker 1: because these were literally single neurons that appeared to respond 612 00:33:37,920 --> 00:33:42,920 Speaker 1: to pictures of specific famous or familiar people. Uh, and 613 00:33:43,000 --> 00:33:45,680 Speaker 1: it was also found that, so if you have 614 00:33:45,720 --> 00:33:48,880 Speaker 1: a cell for Jennifer Aniston in your brain, the Jennifer 615 00:33:48,920 --> 00:33:52,360 Speaker 1: Aniston cell would respond to the evocation of a concept 616 00:33:52,400 --> 00:33:54,480 Speaker 1: of that person as well as to the picture.
So 617 00:33:54,520 --> 00:33:57,480 Speaker 1: it would respond to seeing a picture of Jennifer Aniston, 618 00:33:57,760 --> 00:34:01,080 Speaker 1: or to, like, seeing her name written, or even to 619 00:34:01,800 --> 00:34:06,080 Speaker 1: seeing lists of movies that she appeared in. And am 620 00:34:06,080 --> 00:34:08,160 Speaker 1: I correct in remembering that Jennifer Aniston was one of 621 00:34:08,160 --> 00:34:10,960 Speaker 1: the Friends, right? Yes. Just wanting to be sure on 622 00:34:11,000 --> 00:34:14,520 Speaker 1: that, that I wasn't making that up. Okay, you're talking 623 00:34:14,560 --> 00:34:17,520 Speaker 1: about Friends, like, you know. Well, I was pretty sure, 624 00:34:17,560 --> 00:34:20,319 Speaker 1: but I wasn't one hundred percent sure. Well, they could just 625 00:34:20,360 --> 00:34:22,879 Speaker 1: as easily have been David Schwimmer cells. Yeah. I mean, 626 00:34:22,880 --> 00:34:25,080 Speaker 1: the thing is, I can definitely picture her in my mind, 627 00:34:25,160 --> 00:34:28,319 Speaker 1: and I can picture David Schwimmer. Like, they're just 628 00:34:28,440 --> 00:34:32,600 Speaker 1: coded in there. I mean, there's no denying their faces. 629 00:34:33,160 --> 00:34:36,000 Speaker 1: It does make me wonder if you could conceivably, like, 630 00:34:36,120 --> 00:34:39,640 Speaker 1: knowing about these cells, these Jennifer Aniston cells, could you 631 00:34:39,680 --> 00:34:43,680 Speaker 1: remove Jennifer Aniston from your mind? Oh, I wonder. Yeah, 632 00:34:44,000 --> 00:34:47,719 Speaker 1: I don't know exactly how that works, theoretically speaking, obviously 633 00:34:47,760 --> 00:34:50,600 Speaker 1: not like a do it yourself at home kind of a thing.
634 00:34:50,680 --> 00:34:53,759 Speaker 1: But, uh, I wonder if that would be 635 00:34:53,800 --> 00:34:57,600 Speaker 1: an interesting sort of Eternal Sunshine of the Spotless Mind 636 00:34:57,719 --> 00:35:00,960 Speaker 1: kind of spin off idea, because of course the 637 00:35:01,239 --> 00:35:03,760 Speaker 1: nature of that film was like completely removing a person 638 00:35:03,840 --> 00:35:06,560 Speaker 1: or experience from the brain, like wholesale, memories and all. 639 00:35:07,000 --> 00:35:10,919 Speaker 1: But what if you could only remove the face of, 640 00:35:11,320 --> 00:35:14,120 Speaker 1: say, an individual who brought you stress or grief? Like, 641 00:35:14,200 --> 00:35:17,400 Speaker 1: what would that alone do? How would that impact the 642 00:35:17,440 --> 00:35:20,600 Speaker 1: other information that is there, if it itself is untouched? 643 00:35:20,680 --> 00:35:24,560 Speaker 1: I don't know. I mean, as usual, the things inside 644 00:35:24,600 --> 00:35:28,320 Speaker 1: the brain turn out to have a much more complicated 645 00:35:28,400 --> 00:35:34,040 Speaker 1: relationship to our, you know, our phenomenal experience of the 646 00:35:34,080 --> 00:35:37,600 Speaker 1: world and our internal experience of thoughts, usually, than would 647 00:35:37,640 --> 00:35:39,920 Speaker 1: be implied by, like, a single cell change in 648 00:35:39,960 --> 00:35:42,719 Speaker 1: the brain having some clear effect on life. But I 649 00:35:42,760 --> 00:35:45,920 Speaker 1: don't know. I mean, uh, I would suspect that changing 650 00:35:45,960 --> 00:35:49,279 Speaker 1: that one cell would not entirely eliminate this person from 651 00:35:49,280 --> 00:35:52,440 Speaker 1: the brain, because you have complex networks of memories and 652 00:35:52,480 --> 00:35:57,200 Speaker 1: emotions that will involve familiar people and celebrities. Yeah, you 653 00:35:57,200 --> 00:35:59,360 Speaker 1: know, another thing.
This is not something we came across 654 00:35:59,360 --> 00:36:01,320 Speaker 1: in the study, but it also makes me wonder about 655 00:36:02,600 --> 00:36:06,799 Speaker 1: the faces of individuals in literature that one might 656 00:36:06,840 --> 00:36:10,120 Speaker 1: read, like, when you've never seen them, 657 00:36:10,200 --> 00:36:13,080 Speaker 1: but on some level it is probably not like 658 00:36:13,120 --> 00:36:16,440 Speaker 1: the detailed version of a face unless you're doing an 659 00:36:16,440 --> 00:36:19,960 Speaker 1: exercise that I would do almost religiously as a young 660 00:36:20,000 --> 00:36:22,719 Speaker 1: reader and still fall back on occasionally, and that is 661 00:36:22,719 --> 00:36:25,920 Speaker 1: subbing an actor into a role, casting the book. I 662 00:36:25,920 --> 00:36:27,959 Speaker 1: would do that all the time when I was a kid, 663 00:36:28,840 --> 00:36:31,120 Speaker 1: and again, I'll still sometimes fall into it today. But 664 00:36:31,120 --> 00:36:34,000 Speaker 1: then other times there will be a kind of 665 00:36:34,040 --> 00:36:36,279 Speaker 1: there will be a face or an almost-face in 666 00:36:36,360 --> 00:36:39,560 Speaker 1: my head. Maybe it's not super detailed, it's not as 667 00:36:39,560 --> 00:36:41,520 Speaker 1: detailed as a real person, but it's there, kind of 668 00:36:41,520 --> 00:36:43,480 Speaker 1: floating around in my head, and when I think of 669 00:36:43,520 --> 00:36:46,719 Speaker 1: that character, that face emerges. I think we've missed the 670 00:36:46,719 --> 00:36:49,600 Speaker 1: time window for this, but I'm now recasting Dune with 671 00:36:49,640 --> 00:36:53,200 Speaker 1: the cast of Friends. Right, so Duke Leto is David 672 00:36:53,239 --> 00:36:57,960 Speaker 1: Schwimmer, and uh, let's see, Joey is Paul Atreides, right? 673 00:36:59,040 --> 00:37:02,120 Speaker 1: I guess so.
No, actually, Hollywood people, if you're listening, 674 00:37:02,160 --> 00:37:08,439 Speaker 1: here's my pitch: redo The Punisher starring David Schwimmer as the Punisher. Well, 675 00:37:08,480 --> 00:37:10,799 Speaker 1: I mean, Schwimmer has been good in something, so 676 00:37:11,120 --> 00:37:13,880 Speaker 1: I guess I can imagine him playing the Punisher. 677 00:37:13,880 --> 00:37:15,719 Speaker 1: I'll go ahead and go that far. There's only one 678 00:37:15,760 --> 00:37:19,960 Speaker 1: way to know. So eventually, after doing all this research 679 00:37:20,000 --> 00:37:22,480 Speaker 1: about these sort of like neurons or patches of the 680 00:37:22,480 --> 00:37:25,399 Speaker 1: brain that are coding for individual variables that can vary 681 00:37:25,440 --> 00:37:28,520 Speaker 1: with the human face, Tsao and her colleagues began researching 682 00:37:28,840 --> 00:37:33,600 Speaker 1: um, broader variables for visual recognition of objects that worked 683 00:37:33,760 --> 00:37:37,759 Speaker 1: very much along the same lines as the face variable neurons. 684 00:37:38,040 --> 00:37:40,960 Speaker 1: So some examples that were cited in Abbott's feature on 685 00:37:41,000 --> 00:37:45,520 Speaker 1: this: neurons that appear to respond specifically to, quote, whether 686 00:37:45,600 --> 00:37:49,200 Speaker 1: something is spiky like a camera tripod or stubby like 687 00:37:49,239 --> 00:37:51,840 Speaker 1: a USB stick. So you could have kind of a 688 00:37:51,880 --> 00:37:54,640 Speaker 1: slider in the brain that corresponds with a specific tiny 689 00:37:54,680 --> 00:37:57,759 Speaker 1: patch about whether it's got spikes or whether it's kind 690 00:37:57,760 --> 00:38:01,959 Speaker 1: of rounded or something, the kiki-bouba thing. But then 691 00:38:02,440 --> 00:38:05,719 Speaker 1: other things might correspond to whether something is animate like 692 00:38:05,800 --> 00:38:09,960 Speaker 1: a cat or inanimate like a spoon. Uh.
And there 693 00:38:09,960 --> 00:38:13,000 Speaker 1: can be things in between, like maybe a washing machine 694 00:38:13,480 --> 00:38:16,600 Speaker 1: is a little more animate than a spoon, 695 00:38:16,680 --> 00:38:20,480 Speaker 1: but less animate than a cat. And again, this would 696 00:38:20,520 --> 00:38:24,759 Speaker 1: be expressed by how rapidly that neuron fires when 697 00:38:24,960 --> 00:38:28,640 Speaker 1: viewing that particular stimulus. But Tsao and her colleagues got 698 00:38:28,640 --> 00:38:31,600 Speaker 1: to the point where they could predict the appearance of 699 00:38:31,640 --> 00:38:34,719 Speaker 1: an object that a subject was looking at with reasonable 700 00:38:34,760 --> 00:38:38,360 Speaker 1: accuracy based on sampling the firing rate of just about 701 00:38:38,360 --> 00:38:40,760 Speaker 1: four hundred neurons. So you can get all these different 702 00:38:40,840 --> 00:38:43,920 Speaker 1: variables just by looking at how fast those neurons are firing. 703 00:38:44,400 --> 00:38:47,320 Speaker 1: And this suggests that there could be a feature-based 704 00:38:47,400 --> 00:38:51,719 Speaker 1: coding system that may operate across the whole brain. Uh. 705 00:38:51,760 --> 00:38:55,200 Speaker 1: And so, taking away from this research, Tsao is talking 706 00:38:55,239 --> 00:38:57,439 Speaker 1: to the author here, and she says 707 00:38:57,480 --> 00:39:01,160 Speaker 1: that, you know, the brain is not like, quote, 708 00:39:01,160 --> 00:39:06,080 Speaker 1: a sequence of passive sieves fishing out faces, food or ducks. Instead, 709 00:39:06,200 --> 00:39:10,719 Speaker 1: she says, quote, it's a hallucinating engine that is generating 710 00:39:10,719 --> 00:39:14,680 Speaker 1: a version of reality based on the current best internal 711 00:39:14,760 --> 00:39:17,920 Speaker 1: model of the world.
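The feature-based coding idea described here can be illustrated with a small sketch: if each neuron's firing rate is, roughly, a noisy linear function of a handful of appearance "sliders" (spiky versus stubby, animate versus inanimate, and so on), then sampling enough neurons lets you read the sliders back out with plain linear regression. This is purely a toy illustration of the concept, not the actual analysis from the research; every name and number below is made up for the example.

```python
import numpy as np

# Toy sketch of feature-based coding: each simulated neuron's firing rate
# is a noisy linear function of a small vector of appearance "sliders".
# With many more neurons than sliders, least squares recovers the sliders.
rng = np.random.default_rng(0)

n_features, n_neurons = 10, 400
encoding = rng.normal(size=(n_features, n_neurons))  # each neuron's tuning

true_features = rng.normal(size=n_features)          # the viewed object
rates = true_features @ encoding + rng.normal(scale=0.1, size=n_neurons)

# Decode: solve the least-squares problem  encoding.T @ x ≈ rates
decoded, *_ = np.linalg.lstsq(encoding.T, rates, rcond=None)

# With 400 noisy neurons and only 10 sliders, the estimate is very close.
print(bool(np.max(np.abs(decoded - true_features)) < 0.1))  # prints True
```

The point of the toy model is the ratio: a few hundred noisy rate measurements comfortably pin down a ten-dimensional feature vector, which is loosely why a readout from only about four hundred cells can work at all.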
And I think this is a 712 00:39:18,000 --> 00:39:21,600 Speaker 1: really important and interesting way to think about visual perception 713 00:39:21,640 --> 00:39:26,520 Speaker 1: and recognition. There's so much going on in any image 714 00:39:26,560 --> 00:39:29,400 Speaker 1: of the world that you look at. It seems almost 715 00:39:29,400 --> 00:39:35,560 Speaker 1: impossible that your brain is actually registering all the information constantly, 716 00:39:35,680 --> 00:39:40,520 Speaker 1: simultaneously and updating based on you know, what is actually 717 00:39:40,560 --> 00:39:43,120 Speaker 1: taking place in the world. It seems more like your 718 00:39:43,120 --> 00:39:46,040 Speaker 1: brain is kind of creating an illusion that you are 719 00:39:46,080 --> 00:39:49,800 Speaker 1: looking at the world and then pretty frequently updating little 720 00:39:49,880 --> 00:39:53,480 Speaker 1: key bits of data about that illusion. Yeah. I mean, 721 00:39:53,520 --> 00:39:56,200 Speaker 1: it's kind of like in my experience to bring us 722 00:39:56,200 --> 00:39:58,200 Speaker 1: back to Dungeons and Dragons, it's like playing Dungeons and 723 00:39:58,280 --> 00:40:01,200 Speaker 1: Dragons and tell yourself, yeah, I know what all the 724 00:40:01,280 --> 00:40:04,840 Speaker 1: rules are, but then when individual rules come up, you're like, actually, 725 00:40:04,880 --> 00:40:06,840 Speaker 1: I need to check that rule again. Maybe I don't 726 00:40:06,880 --> 00:40:09,759 Speaker 1: know that rule. That's kind of what it's like to 727 00:40:10,080 --> 00:40:13,520 Speaker 1: walk around the world and and take in visual sense data. 
728 00:40:14,080 --> 00:40:15,799 Speaker 1: But I love this idea of 729 00:40:15,800 --> 00:40:18,279 Speaker 1: the hallucinating engine of the brain, because this 730 00:40:18,280 --> 00:40:22,200 Speaker 1: description matches up with so much of what 731 00:40:22,280 --> 00:40:27,320 Speaker 1: we've discussed on the show: that your memories are not reality, 732 00:40:27,360 --> 00:40:30,520 Speaker 1: that your perception is not reality, that your feelings are 733 00:40:30,560 --> 00:40:33,200 Speaker 1: not reality. Not to say that all three of those 734 00:40:33,200 --> 00:40:36,400 Speaker 1: things are lies. They're based 735 00:40:36,440 --> 00:40:43,040 Speaker 1: on reality, but they themselves are not accurate. They're 736 00:40:43,080 --> 00:40:46,600 Speaker 1: not a reflection of the world. They 737 00:40:46,640 --> 00:40:50,799 Speaker 1: are at best a distorted reflection of the absolute reality. 738 00:40:51,400 --> 00:40:53,799 Speaker 1: And even then, like, it's hard to even say what 739 00:40:54,000 --> 00:40:57,880 Speaker 1: that is, right? I mean, your vision 740 00:40:57,960 --> 00:41:01,560 Speaker 1: is not a camera feed. It is not recording passively, 741 00:41:01,600 --> 00:41:04,960 Speaker 1: objectively, everything that happens in front of your face. It 742 00:41:05,120 --> 00:41:09,279 Speaker 1: is instead sort of a hallucination that is quite frequently 743 00:41:09,400 --> 00:41:12,680 Speaker 1: updated with little bits of data. Right.
And that's without 744 00:41:12,680 --> 00:41:15,759 Speaker 1: even getting into discussions of how our vision and other 745 00:41:15,800 --> 00:41:19,680 Speaker 1: senses match up against the other organic senses of various 746 00:41:19,719 --> 00:41:23,279 Speaker 1: creatures in our world, things with far sharper vision that 747 00:41:23,360 --> 00:41:26,800 Speaker 1: can see in different wavelengths, things was far sharper hearing, 748 00:41:26,840 --> 00:41:30,640 Speaker 1: and sent that that therefore live in what I've I've 749 00:41:30,680 --> 00:41:34,839 Speaker 1: often seen described as like a different sensory world. Um, 750 00:41:34,880 --> 00:41:38,359 Speaker 1: but you can't walk around the world reminding yourself of that. 751 00:41:38,960 --> 00:41:41,279 Speaker 1: But ultimately, like the version that you form in your 752 00:41:41,320 --> 00:41:44,479 Speaker 1: head has to be your working model of reality. And 753 00:41:44,680 --> 00:41:48,560 Speaker 1: you know, otherwise that you just go mad. Yeah, there's 754 00:41:48,800 --> 00:41:51,000 Speaker 1: a really interesting thing that gets pursued at the end 755 00:41:51,040 --> 00:41:54,880 Speaker 1: of Abbott's article here where she talks about the idea 756 00:41:55,080 --> 00:41:57,200 Speaker 1: of like what's the best model for sort of the 757 00:41:57,200 --> 00:42:00,560 Speaker 1: whole brain visual perception of what you're seeing in front 758 00:42:00,560 --> 00:42:03,440 Speaker 1: of you and uh, and she makes reference to this 759 00:42:03,480 --> 00:42:08,360 Speaker 1: idea of predictive processing. Quote. The brain operates by predicting 760 00:42:08,400 --> 00:42:13,400 Speaker 1: how its immediate surroundings will change millisecond by millisecond and 761 00:42:13,520 --> 00:42:17,000 Speaker 1: comparing that prediction with the information it receives through the 762 00:42:17,080 --> 00:42:21,480 Speaker 1: various senses. 
It uses any mismatch or prediction error to 763 00:42:21,719 --> 00:42:24,600 Speaker 1: update its model of the world. So maybe, you know, 764 00:42:24,719 --> 00:42:28,480 Speaker 1: you're kind of simultaneously simulating the world in front 765 00:42:28,520 --> 00:42:30,960 Speaker 1: of you at the same time you're watching it, and 766 00:42:31,000 --> 00:42:34,480 Speaker 1: the watching could be there to note little ways in 767 00:42:34,520 --> 00:42:37,320 Speaker 1: which your prediction is turning out wrong and then trying 768 00:42:37,360 --> 00:42:41,200 Speaker 1: to fix it right, or being hypersensitive to the ways 769 00:42:41,320 --> 00:42:46,319 Speaker 1: that your sensory input matches your, uh, your simulation, 770 00:42:46,520 --> 00:42:48,960 Speaker 1: which can be a great way of just wandering into delusion, 771 00:42:49,040 --> 00:42:52,320 Speaker 1: you know, or living in a state of paranoia, because 772 00:42:52,320 --> 00:42:54,520 Speaker 1: you're just looking for the 773 00:42:54,600 --> 00:42:57,920 Speaker 1: sense data that will back up the version of reality 774 00:42:57,960 --> 00:42:59,920 Speaker 1: that you have stored in your mind, that you're 775 00:43:00,120 --> 00:43:04,359 Speaker 1: cultivating in your mind. Yeah. Absolutely. Uh. There's one more 776 00:43:04,440 --> 00:43:06,560 Speaker 1: point of comparison that I thought was interesting, because the 777 00:43:06,640 --> 00:43:10,239 Speaker 1: article makes reference to optical illusions. You know, there's this 778 00:43:10,360 --> 00:43:12,560 Speaker 1: question of, so when you look at an optical illusion, 779 00:43:12,560 --> 00:43:15,320 Speaker 1: one of those things that has, like, a double image valence, 780 00:43:15,560 --> 00:43:18,680 Speaker 1: it's the duck and it's the rabbit, you don't see 781 00:43:18,719 --> 00:43:21,680 Speaker 1: the duck and the rabbit halfway.
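The predictive-processing loop quoted above, predict, compare, nudge the model by the prediction error, can be sketched in a few lines. This is only a minimal caricature of the idea, not any specific model from the article; the function name and learning rate are illustrative.

```python
# Minimal sketch of predictive processing: keep an internal estimate,
# predict the next sensory sample, and correct the estimate by a
# fraction of the prediction error (all names here are illustrative).
def update(estimate, observation, learning_rate=0.3):
    prediction_error = observation - estimate
    return estimate + learning_rate * prediction_error

estimate = 0.0
for observation in [10.0, 10.0, 10.0, 10.0, 10.0]:  # a stable "world"
    estimate = update(estimate, observation)

# The estimate converges toward the true value as errors shrink:
print(round(estimate, 2))  # prints 8.32
```

The appeal of this scheme is economy: the system never re-derives the whole scene from scratch; it only transmits and corrects the mismatch, which fits the intuition above about updating "little key bits of data" rather than recording everything.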
You know, you don't 782 00:43:21,760 --> 00:43:24,120 Speaker 1: see it both at the same time or halfway between 783 00:43:24,680 --> 00:43:26,600 Speaker 1: you see it. I mean, most people tend to see 784 00:43:26,640 --> 00:43:29,480 Speaker 1: fully duck and then there's a flip in the brain 785 00:43:29,880 --> 00:43:32,520 Speaker 1: and the brain readjusts and then you see fully rabbit. 786 00:43:33,040 --> 00:43:35,960 Speaker 1: Isn't that interesting? Like, what's causing that flip? Nothing has 787 00:43:36,080 --> 00:43:38,880 Speaker 1: changed in terms of what you're looking at, But suddenly 788 00:43:38,920 --> 00:43:42,440 Speaker 1: the brain undergoes some kind of internal change and it 789 00:43:42,520 --> 00:43:46,200 Speaker 1: has completely reversed what you perceive yourself, what you perceive 790 00:43:46,280 --> 00:43:48,759 Speaker 1: in front of you. Like another example would be when 791 00:43:49,200 --> 00:43:52,279 Speaker 1: the the accidental face in a design is pointed out 792 00:43:52,280 --> 00:43:55,799 Speaker 1: to you and then you cannot unsee it um or 793 00:43:56,280 --> 00:43:59,640 Speaker 1: like one one for me is the double hangar that 794 00:44:00,000 --> 00:44:02,080 Speaker 1: it looks like a drunken octopus that wants to box 795 00:44:03,280 --> 00:44:05,319 Speaker 1: a second. Yes, yes, on the back of a door 796 00:44:05,760 --> 00:44:08,640 Speaker 1: that's got two little prongs. Yes. Before it was just 797 00:44:08,719 --> 00:44:10,799 Speaker 1: a code hanger, but then once that was pointed out, 798 00:44:11,080 --> 00:44:13,000 Speaker 1: that's all I can see, Like, that's how it's coded 799 00:44:13,000 --> 00:44:17,680 Speaker 1: in my brain. It's fighting Joe octopus. Yeah. Or if 800 00:44:17,719 --> 00:44:20,799 Speaker 1: you look at Edvard monks the scream. 
But if has 801 00:44:20,840 --> 00:44:22,480 Speaker 1: anybody ever told you to look at it and see 802 00:44:22,480 --> 00:44:25,120 Speaker 1: if you can see the Springer spaniel. No, I don't 803 00:44:25,120 --> 00:44:27,840 Speaker 1: think I've done that exercise. Okay, look at the head 804 00:44:27,960 --> 00:44:30,960 Speaker 1: on the screen next time and just think Springer spaniel 805 00:44:31,040 --> 00:44:33,600 Speaker 1: and then you won't be able to unsee it. So 806 00:44:33,640 --> 00:44:36,600 Speaker 1: there's another interesting development about facial recognition in the brain 807 00:44:36,640 --> 00:44:40,200 Speaker 1: that I was reading about in a article by a 808 00:44:40,200 --> 00:44:43,560 Speaker 1: couple of researchers named Anna K. Boback and Sarah Bate 809 00:44:43,719 --> 00:44:47,320 Speaker 1: who at the time we're conducting research on face perception 810 00:44:47,360 --> 00:44:52,120 Speaker 1: at Bournemouth University in England. Uh And so they point 811 00:44:52,120 --> 00:44:55,839 Speaker 1: out that one aspect of a typical human brains face 812 00:44:55,880 --> 00:44:59,319 Speaker 1: perception is the ability to engage what they call a 813 00:44:59,320 --> 00:45:04,120 Speaker 1: configure role or holistic strategy for visual processing, meaning that 814 00:45:04,239 --> 00:45:07,399 Speaker 1: these human brains are able to sort of see faces 815 00:45:07,440 --> 00:45:11,799 Speaker 1: as a whole rather than examining the independent features of 816 00:45:11,800 --> 00:45:14,279 Speaker 1: a face one at a time. 
And I've actually read 817 00:45:14,320 --> 00:45:17,239 Speaker 1: there was something similar going on with visual expertise that 818 00:45:17,360 --> 00:45:20,879 Speaker 1: like when people have visual expertise for cars, they're much 819 00:45:20,920 --> 00:45:23,439 Speaker 1: better able to get an idea of what a car 820 00:45:23,640 --> 00:45:27,359 Speaker 1: is with a holistic, sort of one glance view rather 821 00:45:27,440 --> 00:45:30,160 Speaker 1: than having to look at individual parts of the car. 822 00:45:30,560 --> 00:45:33,680 Speaker 1: And this ties into something I've read about people with prosopagnosia. 823 00:45:34,120 --> 00:45:37,600 Speaker 1: Oliver Sacks actually describes this process of of a sort 824 00:45:37,600 --> 00:45:40,920 Speaker 1: of hack or work around for their condition that basically 825 00:45:40,960 --> 00:45:45,440 Speaker 1: involves examining the elements of a face for special identifying 826 00:45:45,560 --> 00:45:48,880 Speaker 1: marks or features, the way you might look for, you know, 827 00:45:48,920 --> 00:45:52,160 Speaker 1: a known dint or bumper sticker to identify a familiar 828 00:45:52,200 --> 00:45:54,719 Speaker 1: car from others of the same model in color, or 829 00:45:54,719 --> 00:45:57,440 Speaker 1: a particular hairstyle I think is sometimes brought up right, 830 00:45:57,520 --> 00:46:00,960 Speaker 1: or style or mode of dress. So bo Back and 831 00:46:01,040 --> 00:46:04,240 Speaker 1: Bait describe some research they conducted on people with typical 832 00:46:04,320 --> 00:46:09,040 Speaker 1: face perception versus people with prosopagnosia versus people sometimes known 833 00:46:09,040 --> 00:46:12,160 Speaker 1: as super recognizers who were sort of the opposite end 834 00:46:12,239 --> 00:46:17,880 Speaker 1: of prosopagnosia. 
They have unusually high accuracy in remembering and 835 00:46:17,920 --> 00:46:20,680 Speaker 1: recognizing faces, even for people they haven't seen in a 836 00:46:20,680 --> 00:46:23,480 Speaker 1: long time. And the authors here write that they used 837 00:46:23,600 --> 00:46:27,799 Speaker 1: eye-tracking software to see where these different groups of 838 00:46:27,840 --> 00:46:31,280 Speaker 1: people tended to look when they were examining a human face, 839 00:46:31,320 --> 00:46:34,160 Speaker 1: and there were some interesting differences here. So they found, 840 00:46:34,160 --> 00:46:37,600 Speaker 1: on average, people with typical face perception would tend to 841 00:46:37,680 --> 00:46:41,800 Speaker 1: focus basically around the eyes most when trying to identify 842 00:46:41,840 --> 00:46:44,840 Speaker 1: a face. Um, and they note some previous research on 843 00:46:44,880 --> 00:46:48,200 Speaker 1: people with acquired prosopagnosia, including a two thousand eight study 844 00:46:48,280 --> 00:46:52,680 Speaker 1: from the Journal of Neuropsychology by Orban de Xivry et al., 845 00:46:52,880 --> 00:46:55,759 Speaker 1: and it found that people with face blindness tend to 846 00:46:55,800 --> 00:46:58,719 Speaker 1: look less at the eyes and at the upper area 847 00:46:58,800 --> 00:47:00,840 Speaker 1: of the face, and tend to look more at lower 848 00:47:00,880 --> 00:47:03,319 Speaker 1: regions of the face, like the mouth, when trying to 849 00:47:03,360 --> 00:47:06,880 Speaker 1: identify faces. And the authors note that their recent 850 00:47:06,920 --> 00:47:10,560 Speaker 1: research again showed people with prosopagnosia were looking less at 851 00:47:10,560 --> 00:47:14,600 Speaker 1: the eyes than typical subjects.
Meanwhile, they note that super 852 00:47:14,640 --> 00:47:18,520 Speaker 1: recognizers in their studies tended to on average focus more 853 00:47:18,600 --> 00:47:22,520 Speaker 1: on the nose which was kind of strange, but they 854 00:47:22,560 --> 00:47:25,080 Speaker 1: had an idea about that, is, so, is it something 855 00:47:25,120 --> 00:47:28,960 Speaker 1: special about the nose itself as a feature of the face, 856 00:47:29,640 --> 00:47:31,600 Speaker 1: or is it, as they kind of propose, more of 857 00:47:31,640 --> 00:47:37,000 Speaker 1: a diagnostic center of the gaze, Uh, that that gravitates 858 00:47:37,000 --> 00:47:39,200 Speaker 1: towards the center of the face when we are better 859 00:47:39,239 --> 00:47:42,040 Speaker 1: at getting a holistic sense of a face from a 860 00:47:42,080 --> 00:47:46,359 Speaker 1: glance rather than trying to examine its individual features one 861 00:47:46,440 --> 00:47:48,880 Speaker 1: by one. And so the authors here argued that it 862 00:47:49,000 --> 00:47:51,280 Speaker 1: is the center of the face, rather than the eyes 863 00:47:51,320 --> 00:47:54,560 Speaker 1: in particular or any other feature that optimally engages the 864 00:47:54,560 --> 00:47:58,920 Speaker 1: brain's facial recognition systems. Interesting. I mean, one thing that 865 00:47:59,040 --> 00:48:02,239 Speaker 1: it brings to mind when you look. It's kind of 866 00:48:02,239 --> 00:48:04,200 Speaker 1: the old adage right to look someone in the eyes, 867 00:48:04,600 --> 00:48:07,760 Speaker 1: to to sort of engage in a more direct theory 868 00:48:07,800 --> 00:48:09,719 Speaker 1: of mind with them, right to try and sort of 869 00:48:10,160 --> 00:48:12,480 Speaker 1: it's like you're having a like a mind melt moment 870 00:48:12,600 --> 00:48:15,360 Speaker 1: right where it seems like if you're looking at someone's nose. 
871 00:48:15,800 --> 00:48:19,040 Speaker 1: I mean, that reminds me of exercises where people say, oh, 872 00:48:19,200 --> 00:48:21,120 Speaker 1: you know, if you want to, you know, cut down 873 00:48:21,200 --> 00:48:25,440 Speaker 1: on anxiety during, um, like, an interview, look at 874 00:48:25,440 --> 00:48:27,400 Speaker 1: the person's forehead, you know, don't look at them in 875 00:48:27,440 --> 00:48:30,920 Speaker 1: the eyes. So it feels like a holistic view of 876 00:48:30,960 --> 00:48:34,280 Speaker 1: the face is also an impersonal view of the face. 877 00:48:34,400 --> 00:48:36,400 Speaker 1: It feels, at least to me, it feels like if 878 00:48:36,400 --> 00:48:38,880 Speaker 1: you're looking somebody in the eyes, you're also engaging in 879 00:48:38,920 --> 00:48:43,279 Speaker 1: consideration of their mind, which might conceivably be distracting from 880 00:48:43,320 --> 00:48:46,640 Speaker 1: the identification process. Right. So maybe it's better to 881 00:48:46,760 --> 00:48:48,600 Speaker 1: look at the nose, like, don't think of this person 882 00:48:48,600 --> 00:48:50,440 Speaker 1: as a person, think of them as a face that 883 00:48:50,480 --> 00:48:53,360 Speaker 1: matches up with a name. They didn't mention that, and 884 00:48:53,360 --> 00:48:55,040 Speaker 1: I hadn't thought about that, but I think that's a 885 00:48:55,160 --> 00:48:59,200 Speaker 1: very interesting point. Yes, that, like, perhaps by focusing a 886 00:48:59,200 --> 00:49:02,640 Speaker 1: little bit less directly on the eyes, you are somewhat 887 00:49:02,760 --> 00:49:07,000 Speaker 1: depersonalizing the experience of the face recognition, and thus 888 00:49:07,080 --> 00:49:10,200 Speaker 1: you can cut out some emotional distraction.
Now, 889 00:49:10,320 --> 00:49:14,359 Speaker 1: maybe that's just my individual, like, social anxiety speaking there. 890 00:49:14,440 --> 00:49:15,799 Speaker 1: You know, I don't know. I mean, we don't know 891 00:49:15,840 --> 00:49:18,400 Speaker 1: that's the case. That's just, like, an interesting possibility. Yeah, 892 00:49:18,680 --> 00:49:20,759 Speaker 1: because, well, I mean, it reminds me how in the 893 00:49:20,840 --> 00:49:24,200 Speaker 1: last episode we were talking about technology for facial recognition, 894 00:49:24,239 --> 00:49:26,840 Speaker 1: of course, being used by law enforcement. And one of 895 00:49:26,880 --> 00:49:29,040 Speaker 1: the things the authors note in this article here is 896 00:49:29,080 --> 00:49:33,040 Speaker 1: that human super recognizers are in many places now being 897 00:49:33,080 --> 00:49:36,799 Speaker 1: directly sought out and employed by law enforcement. So, like, 898 00:49:37,000 --> 00:49:39,640 Speaker 1: you know, to be able to, like, look at video 899 00:49:39,800 --> 00:49:43,280 Speaker 1: feeds and try to match people with known photos of 900 00:49:43,280 --> 00:49:46,359 Speaker 1: wanted criminals and stuff. Again, that kind of, like, 901 00:49:46,480 --> 00:49:50,640 Speaker 1: impersonal recognizing thing, especially, you know, in a law enforcement context, 902 00:49:51,120 --> 00:49:53,880 Speaker 1: seems like it's possible it could be aided if you 903 00:49:53,920 --> 00:49:56,920 Speaker 1: are seeing less of a person's humanity when looking at 904 00:49:56,920 --> 00:49:59,680 Speaker 1: their face and just literally trying to make the most 905 00:49:59,719 --> 00:50:02,799 Speaker 1: accurate match of features. Has this been exploited 906 00:50:02,800 --> 00:50:06,480 Speaker 1: in a network crime-solving series yet? Oh, like the 907 00:50:06,520 --> 00:50:10,800 Speaker 1: Dexter of super recognizers.
Yeah, and I would be shocked 908 00:50:10,840 --> 00:50:12,600 Speaker 1: if it has not happened or at least been pitched 909 00:50:12,600 --> 00:50:17,960 Speaker 1: to a major studio super recognized heer i'd given an episode. 910 00:50:19,600 --> 00:50:21,440 Speaker 1: But you know, this also makes me think about the 911 00:50:21,440 --> 00:50:24,799 Speaker 1: different types of machine face recognition systems out there, of which, 912 00:50:24,800 --> 00:50:27,160 Speaker 1: of course, you know, we know there are many. Some 913 00:50:27,280 --> 00:50:31,240 Speaker 1: are more oriented around specific details of the face. For example, 914 00:50:31,280 --> 00:50:34,640 Speaker 1: I've seen the idea of distance between the eyes. Again, 915 00:50:34,840 --> 00:50:38,279 Speaker 1: this is something that humans and macaques apparently used as 916 00:50:38,360 --> 00:50:41,719 Speaker 1: a major metric for face evaluation, but it's also a 917 00:50:41,760 --> 00:50:45,640 Speaker 1: common thing used by machines. Uh. But but others probably 918 00:50:45,640 --> 00:50:48,600 Speaker 1: take a more holistic approach. I'm not an expert in AI, 919 00:50:48,680 --> 00:50:51,840 Speaker 1: but I imagine that the neural net based facial recognition 920 00:50:51,880 --> 00:50:55,880 Speaker 1: algorithms trained on wild photo data might be more reasonably 921 00:50:55,960 --> 00:50:59,840 Speaker 1: compared to the face as a whole biological process. That 922 00:51:00,000 --> 00:51:01,759 Speaker 1: makes sense. All right, On that note, we're gonna take 923 00:51:01,800 --> 00:51:07,000 Speaker 1: a quick break, but we'll be right back. Than alright, 924 00:51:07,000 --> 00:51:10,920 Speaker 1: we're back. So we've been talking about facial recognition largely 925 00:51:11,440 --> 00:51:16,040 Speaker 1: in this episode. 
We've been talking about the complexity of 926 00:51:16,400 --> 00:51:20,120 Speaker 1: organic facial recognition, the kind of facial recognition that's going 927 00:51:20,160 --> 00:51:23,440 Speaker 1: on inside the human brain and in the brains of 928 00:51:23,480 --> 00:51:27,840 Speaker 1: animals, as opposed to, uh, that going on with AI 929 00:51:28,000 --> 00:51:30,080 Speaker 1: right now. One of the things, I know we talked 930 00:51:30,080 --> 00:51:32,560 Speaker 1: about it in the last episode, was among our many 931 00:51:32,680 --> 00:51:37,080 Speaker 1: concerns about artificial intelligence for facial recognition were the 932 00:51:37,200 --> 00:51:40,040 Speaker 1: various types of bias that have been documented to show 933 00:51:40,120 --> 00:51:44,640 Speaker 1: up in, uh, computer-based AI for facial recognition. Yeah, specifically, 934 00:51:44,640 --> 00:51:48,960 Speaker 1: we're talking about issues involving problems with these AI programs 935 00:51:49,280 --> 00:51:54,640 Speaker 1: recognizing Black and/or Asian faces. This 936 00:51:54,680 --> 00:51:56,799 Speaker 1: is interesting because it forces us to confront not 937 00:51:56,880 --> 00:52:00,759 Speaker 1: only racial bias in the creation of programs and AI, 938 00:52:00,880 --> 00:52:04,360 Speaker 1: it also mirrors our organic issues with facial recognition for 939 00:52:04,520 --> 00:52:08,040 Speaker 1: races other than our own. Um, there's been a 940 00:52:08,080 --> 00:52:12,040 Speaker 1: lot written about this topic. There have been a number of studies, 941 00:52:12,400 --> 00:52:16,040 Speaker 1: but just last year, from July, there 942 00:52:16,080 --> 00:52:18,600 Speaker 1: was an article in The Guardian titled "Perception 943 00:52:18,640 --> 00:52:23,400 Speaker 1: that other races look alike rooted in visual processing, says study."
944 00:52:23,760 --> 00:52:26,040 Speaker 1: And this looks at a Stanford University study on this 945 00:52:26,160 --> 00:52:29,920 Speaker 1: oft researched issue. At one point that the researcher stressed 946 00:52:30,040 --> 00:52:32,319 Speaker 1: it was something we were just talking about earlier. What 947 00:52:32,480 --> 00:52:35,359 Speaker 1: are human senses pick up on is not necessarily an 948 00:52:35,360 --> 00:52:40,000 Speaker 1: accurate representation of reality and if as we've discussed before, 949 00:52:40,040 --> 00:52:42,880 Speaker 1: there's a lot of consolidation involved, the loose stitching of 950 00:52:42,880 --> 00:52:46,400 Speaker 1: things together based on actual perceived details, on memories, on 951 00:52:46,520 --> 00:52:51,600 Speaker 1: preconceived notions, on fears, suggestions, and more. And this is 952 00:52:51,640 --> 00:52:54,879 Speaker 1: a particular m r I assisted study. UH. It only 953 00:52:54,920 --> 00:52:58,720 Speaker 1: involved twenty white individuals evaluating the faces of black faces 954 00:52:58,760 --> 00:53:02,279 Speaker 1: and white faces, but it showed greater activation of of 955 00:53:02,520 --> 00:53:05,799 Speaker 1: of face recognition regions in the brain. When when a 956 00:53:05,880 --> 00:53:10,000 Speaker 1: white test subject looked at white faces compared to black faces, now, 957 00:53:10,120 --> 00:53:14,160 Speaker 1: dissimilar faces, that being you know, phases that are no 958 00:53:14,320 --> 00:53:17,040 Speaker 1: matter what you know, the race of the individual might 959 00:53:17,040 --> 00:53:21,680 Speaker 1: be stand out more. Um, dissimilar faces resulted in a spike, 960 00:53:21,920 --> 00:53:24,640 Speaker 1: but apparently the spike was still greater in cases of 961 00:53:24,680 --> 00:53:28,120 Speaker 1: dissimilar white faces. 
Now, to be clear, this is not 962 00:53:28,200 --> 00:53:31,479 Speaker 1: a case of oh, we as humans do this because look, 963 00:53:31,520 --> 00:53:34,640 Speaker 1: here's our brains doing it. You know. A lot uh, 964 00:53:34,719 --> 00:53:37,320 Speaker 1: you know, a lot was was not taken into account 965 00:53:37,320 --> 00:53:39,560 Speaker 1: with the studies such as the social backgrounds of the 966 00:53:39,600 --> 00:53:43,680 Speaker 1: individuals and all. As always, one assumes an interplay of 967 00:53:43,680 --> 00:53:48,000 Speaker 1: of neural software and socio cultural conditioning. But above all 968 00:53:48,080 --> 00:53:50,200 Speaker 1: they want to drive them. It's also not proof that 969 00:53:50,320 --> 00:53:53,880 Speaker 1: racial prejudice is to be dismissed as being just a 970 00:53:53,560 --> 00:53:56,680 Speaker 1: a neurological reality. Well why would that mean it should 971 00:53:56,680 --> 00:53:59,120 Speaker 1: be dismissed. I mean, even if it is a neurological reality, 972 00:53:59,160 --> 00:54:03,000 Speaker 1: that doesn't make it okay, right absolutely Uh. Here's the 973 00:54:03,080 --> 00:54:07,080 Speaker 1: quote from doctor Brent Hughes, a co author um of 974 00:54:07,080 --> 00:54:10,120 Speaker 1: of of the paper from University of California, riverside quote, 975 00:54:10,160 --> 00:54:12,080 Speaker 1: individuals should not be let off the hook for their 976 00:54:12,120 --> 00:54:15,960 Speaker 1: prejudicial attitudes just because we see evidence of race biases 977 00:54:16,000 --> 00:54:19,320 Speaker 1: in perception. To the contrary, these race biases and perception 978 00:54:19,360 --> 00:54:23,520 Speaker 1: are malleable and subject to individual motivations and goals. 
So 979 00:54:23,560 --> 00:54:26,760 Speaker 1: again, coming back to the interplay between software and hardware. 980 00:54:27,800 --> 00:54:29,279 Speaker 1: But I do think there's a lot to 981 00:54:29,280 --> 00:54:32,480 Speaker 1: contemplate here: the way our organic and, currently, our 982 00:54:32,520 --> 00:54:37,000 Speaker 1: technological facial recognition systems are subject to racial bias. But 983 00:54:37,080 --> 00:54:39,280 Speaker 1: then, in both cases, they are malleable. There are ways 984 00:54:39,320 --> 00:54:42,239 Speaker 1: to tweak and improve, just as there's room 985 00:54:42,280 --> 00:54:45,719 Speaker 1: to allow these imperfect perceptions of reality to color what 986 00:54:45,800 --> 00:54:48,080 Speaker 1: we believe about the world. Probably one of the most 987 00:54:48,120 --> 00:54:51,200 Speaker 1: important things is for people not to be lulled in 988 00:54:51,400 --> 00:54:55,040 Speaker 1: by the misperception that because something is a computer algorithm, 989 00:54:55,120 --> 00:54:57,799 Speaker 1: or that it's a machine, it's impossible for it 990 00:54:57,840 --> 00:55:00,080 Speaker 1: to have a bias. I mean, clearly, we just know 991 00:55:00,200 --> 00:55:03,439 Speaker 1: that that's not true. I mean, obviously the machine isn't 992 00:55:03,480 --> 00:55:07,120 Speaker 1: motivated emotionally.
The machine doesn't, say, hate people or care 993 00:55:07,160 --> 00:55:10,040 Speaker 1: about people in whatever way, but it's guided by rules 994 00:55:10,120 --> 00:55:13,279 Speaker 1: that are created by training based on data sets that 995 00:55:13,320 --> 00:55:16,200 Speaker 1: are in the real world, which might incorporate racial biases, 996 00:55:16,640 --> 00:55:19,480 Speaker 1: or it can be trained, you know, on explicit rules 997 00:55:19,480 --> 00:55:23,040 Speaker 1: generated by people that, whether by malice or just by mistake, 998 00:55:23,200 --> 00:55:26,680 Speaker 1: have some kind of racial bias incorporated in them. Yeah, 999 00:55:26,920 --> 00:55:29,360 Speaker 1: and on the human side of things, this is 1000 00:55:29,400 --> 00:55:33,000 Speaker 1: only a glimpse at very broad facial perception, because also 1001 00:55:33,000 --> 00:55:36,000 Speaker 1: consider how cued into facial expressions we are and how 1002 00:55:36,040 --> 00:55:39,680 Speaker 1: this too can be biased. I was looking at What's 1003 00:55:39,680 --> 00:55:43,480 Speaker 1: in a Face? How Face Gender and Current Affect Influence 1004 00:55:43,520 --> 00:55:46,439 Speaker 1: Perceived Emotion, from two thousand sixteen in 1005 00:55:46,680 --> 00:55:51,000 Speaker 1: Frontiers in Psychology, and the findings included a bias 1006 00:55:51,080 --> 00:55:54,719 Speaker 1: to perceive male faces as more negative, while the perceptions 1007 00:55:54,719 --> 00:55:57,560 Speaker 1: of female faces depended on current mood. So to summarize 1008 00:55:57,600 --> 00:56:01,040 Speaker 1: both cases, a male face that an individual perceives 1009 00:56:01,080 --> 00:56:03,799 Speaker 1: needs to be happier-looking compared to a female face 1010 00:56:03,840 --> 00:56:07,600 Speaker 1: to elicit an interpretation of even just neutral emotion.
So 1011 00:56:07,760 --> 00:56:10,919 Speaker 1: just male faces in general are interpreted as having more 1012 00:56:10,960 --> 00:56:14,960 Speaker 1: negative emotion in them. Yes. And then, meanwhile, the happier 1013 00:56:15,040 --> 00:56:18,640 Speaker 1: a given male observer is, the more inclined they are 1014 00:56:18,719 --> 00:56:22,879 Speaker 1: to see a female's face as happy, which 1015 00:56:22,920 --> 00:56:24,680 Speaker 1: is kind of... But that comes back again to, 1016 00:56:24,840 --> 00:56:28,320 Speaker 1: like, what is my emotional state? That is then, uh, 1017 00:56:28,400 --> 00:56:31,800 Speaker 1: that is then affecting the emotional state I perceive in 1018 00:56:31,880 --> 00:56:34,960 Speaker 1: other people. And all of this is adding to my 1019 00:56:35,080 --> 00:56:38,560 Speaker 1: perception of what's going on in reality. Oh, this is 1020 00:56:38,560 --> 00:56:41,320 Speaker 1: the classic, like, oh yeah, she thought the joke was funny. 1021 00:56:41,360 --> 00:56:44,840 Speaker 1: I was laughing. Yeah. Now, this is just one study 1022 00:56:44,840 --> 00:56:46,600 Speaker 1: I'm referring to here, and it shouldn't be taken as the 1023 00:56:46,600 --> 00:56:48,800 Speaker 1: gold standard or anything. But it does provide a glimpse 1024 00:56:48,800 --> 00:56:51,800 Speaker 1: at just, again, how complex and unreal our perception 1025 00:56:51,840 --> 00:56:54,799 Speaker 1: of reality is. And I think, you know, it makes 1026 00:56:54,800 --> 00:56:57,880 Speaker 1: sense, because we are such social creatures that 1027 00:56:57,960 --> 00:57:02,120 Speaker 1: the social reality of a human is of tremendous importance. 1028 00:57:02,520 --> 00:57:04,720 Speaker 1: But of course, reading the social reality of a person 1029 00:57:04,840 --> 00:57:08,439 Speaker 1: is rooted in various conscious and subconscious processes. It also 1030 00:57:08,480 --> 00:57:11,719 Speaker 1: depends on theory of mind.
It's highly susceptible to 1031 00:57:11,719 --> 00:57:15,359 Speaker 1: bias based on conditioning, culture, and more. Now, 1032 00:57:15,480 --> 00:57:18,480 Speaker 1: currently, mostly what we've talked about with 1033 00:57:18,560 --> 00:57:21,640 Speaker 1: AI and facial recognition software concerns just the 1034 00:57:21,680 --> 00:57:24,120 Speaker 1: measurements of the face, the appearance of the face, and 1035 00:57:24,160 --> 00:57:28,000 Speaker 1: not so much emotional states. But that's not 1036 00:57:28,040 --> 00:57:31,760 Speaker 1: to say that the programmers of these AIs 1037 00:57:31,840 --> 00:57:34,720 Speaker 1: are not interested in reading that information as well, or 1038 00:57:34,760 --> 00:57:37,720 Speaker 1: at least the marketers, right? Well, no, I mean, 1039 00:57:37,720 --> 00:57:39,880 Speaker 1: I guess both, because, yeah, to do a little more 1040 00:57:39,920 --> 00:57:42,440 Speaker 1: on faces and emotion, I think some of the same 1041 00:57:42,520 --> 00:57:46,600 Speaker 1: problems with human perception of emotion in other people's faces 1042 00:57:46,640 --> 00:57:50,840 Speaker 1: are translated now to technology, except made even more 1043 00:57:50,840 --> 00:57:55,800 Speaker 1: blunt and inaccurate. So many technology companies in recent years, 1044 00:57:55,800 --> 00:58:01,000 Speaker 1: including IBM, Amazon, Google, Microsoft, etcetera, have all been advertising 1045 00:58:01,040 --> 00:58:04,800 Speaker 1: AI that can read human emotions by inferring them from 1046 00:58:04,840 --> 00:58:08,360 Speaker 1: facial expressions.
And there are some cases where even companies 1047 00:58:08,400 --> 00:58:11,920 Speaker 1: that have shied away from doing facial recognition, as in, like, 1048 00:58:12,000 --> 00:58:15,520 Speaker 1: you know, this is Jeff's face, have still said it's 1049 00:58:15,560 --> 00:58:18,560 Speaker 1: okay to try to just look at a face anonymously 1050 00:58:18,720 --> 00:58:21,440 Speaker 1: and judge what its emotional state is. And this is 1051 00:58:21,480 --> 00:58:25,360 Speaker 1: being advertised as useful for evaluating candidates in a job interview, 1052 00:58:25,520 --> 00:58:29,600 Speaker 1: or analyzing emotional states of customers in a retail environment, 1053 00:58:29,680 --> 00:58:33,280 Speaker 1: you know, you want happy customers, or assessing potential threats 1054 00:58:33,320 --> 00:58:36,520 Speaker 1: from people trying to conceal anger, all kinds of stuff. 1055 00:58:37,000 --> 00:58:38,880 Speaker 1: I even saw one that was trying to sell it as 1056 00:58:39,000 --> 00:58:41,880 Speaker 1: a driving safety feature, you know, I'm 1057 00:58:41,960 --> 00:58:46,400 Speaker 1: detecting, like, road rage on the face. So just one example: 1058 00:58:46,760 --> 00:58:49,320 Speaker 1: an August twenty nineteen piece I was reading in Wired 1059 00:58:49,680 --> 00:58:54,480 Speaker 1: discussing Amazon's image analysis software known as Rekognition, with a 1060 00:58:54,640 --> 00:58:58,560 Speaker 1: K. Uh, yeah, just the spelling of that is terrible. 1061 00:58:59,800 --> 00:59:02,360 Speaker 1: But so, at the time, this was claiming to 1062 00:59:02,360 --> 00:59:11,040 Speaker 1: be able to assess emotions in faces, including happiness, sadness, anger, surprise, disgust, calmness, confusion, 1063 00:59:11,480 --> 00:59:13,560 Speaker 1: and then the newest one they had just added to 1064 00:59:13,600 --> 00:59:17,120 Speaker 1: the list when this article came out was fear.
Okay, 1065 00:59:17,440 --> 00:59:20,040 Speaker 1: well, that's a big one. Was that last? I don't know. 1066 00:59:20,480 --> 00:59:22,920 Speaker 1: That's the one they brought online last. That makes me 1067 00:59:22,960 --> 00:59:28,000 Speaker 1: think of the end of Starship Troopers. It's afraid. Uh, so, 1068 00:59:28,440 --> 00:59:31,240 Speaker 1: what does the scientific research tell us about how well 1069 00:59:31,320 --> 00:59:36,120 Speaker 1: these algorithms should be expected to do in reading emotions? 1070 00:59:36,280 --> 00:59:39,560 Speaker 1: I was looking at a paper by Lisa Feldman Barrett, 1071 00:59:39,640 --> 00:59:43,880 Speaker 1: Ralph Adolphs, Stacy Marsella, Aleix M. Martinez, and Seth D. 1072 00:59:44,040 --> 00:59:47,320 Speaker 1: Pollak in Psychological Science in the Public Interest, published in 1073 00:59:47,320 --> 00:59:52,200 Speaker 1: twenty nineteen, called Emotional Expressions Reconsidered: Challenges to Inferring Emotion 1074 00:59:52,280 --> 00:59:56,160 Speaker 1: from Human Facial Movements. And they looked at, you know, 1075 00:59:56,240 --> 00:59:59,200 Speaker 1: like, a ton of, I think, like, over a thousand studies. 1076 00:59:59,240 --> 01:00:02,720 Speaker 1: It was a huge review. And they conclude that the 1077 01:00:02,760 --> 01:00:06,560 Speaker 1: whole premise on which these algorithms are based is close 1078 01:00:06,600 --> 01:00:11,520 Speaker 1: to worthless, because, shocker, there is a little bit of 1079 01:00:11,560 --> 01:00:15,959 Speaker 1: information about emotional states encoded in human faces, but it's 1080 01:00:15,960 --> 01:00:18,800 Speaker 1: not nearly enough to give you a very accurate picture 1081 01:00:18,800 --> 01:00:22,880 Speaker 1: of internal states. People's faces reflect all kinds of strange, complicated, 1082 01:00:22,960 --> 01:00:26,840 Speaker 1: fleeting emotions back and forth.
They might be faking emotions 1083 01:00:26,960 --> 01:00:30,080 Speaker 1: with their faces. And even when humans read each 1084 01:00:30,120 --> 01:00:32,160 Speaker 1: other's emotions, which, as we were just talking about, you know, 1085 01:00:32,200 --> 01:00:35,360 Speaker 1: they're not always totally good at doing, when humans 1086 01:00:35,400 --> 01:00:38,000 Speaker 1: do it, they incorporate way more than just the face. 1087 01:00:38,040 --> 01:00:41,840 Speaker 1: They incorporate body language, tone, all kinds of things to 1088 01:00:41,920 --> 01:00:44,920 Speaker 1: read emotion. And the AIs are not even 1089 01:00:45,000 --> 01:00:47,560 Speaker 1: that good. They're just going off the face. And the 1090 01:00:47,600 --> 01:00:51,000 Speaker 1: researchers say that, you know, the evidence concludes that looking 1091 01:00:51,040 --> 01:00:55,400 Speaker 1: at the face alone is completely insufficient to get an 1092 01:00:55,440 --> 01:00:58,520 Speaker 1: accurate picture of internal emotional states. And it's kind of 1093 01:00:58,640 --> 01:01:02,160 Speaker 1: dangerous to suggest that you can get an accurate picture 1094 01:01:02,160 --> 01:01:05,960 Speaker 1: of emotional states just with facial analysis. To read a quote:
1095 01:01:06,200 --> 01:01:09,600 Speaker 1: Scientists agree that facial movements convey a range of information 1096 01:01:09,640 --> 01:01:13,600 Speaker 1: and are important for social communication, emotional or otherwise, but our 1097 01:01:13,640 --> 01:01:17,000 Speaker 1: review suggests an urgent need for research that examines how 1098 01:01:17,040 --> 01:01:21,480 Speaker 1: people actually move their faces to express emotions and other 1099 01:01:21,600 --> 01:01:24,960 Speaker 1: social information in the variety of contexts that make up 1100 01:01:25,000 --> 01:01:27,840 Speaker 1: everyday life, as well as a careful study of the 1101 01:01:27,840 --> 01:01:31,960 Speaker 1: mechanisms by which people perceive instances of emotion in one another. 1102 01:01:32,800 --> 01:01:35,400 Speaker 1: Uh, so the way to read their conclusion is, these 1103 01:01:35,440 --> 01:01:40,200 Speaker 1: facial recognition algorithms might be able to predict emotion 1104 01:01:40,480 --> 01:01:44,160 Speaker 1: at a rate slightly better than chance based on faces. 1105 01:01:44,640 --> 01:01:46,360 Speaker 1: You know, so they read your face and they see 1106 01:01:46,360 --> 01:01:48,520 Speaker 1: a smile on it, and they say, this person is happy. 1107 01:01:48,960 --> 01:01:51,640 Speaker 1: And that's a little bit better than guessing your emotional 1108 01:01:51,680 --> 01:01:54,200 Speaker 1: state at random, but not a lot better. Have 1109 01:01:54,360 --> 01:01:58,120 Speaker 1: these programmers never heard The Tracks of My Tears? Do they 1110 01:01:58,200 --> 01:02:01,520 Speaker 1: not know how smiles work? But it does sound like 1111 01:02:01,560 --> 01:02:03,280 Speaker 1: we could get to the point where we could be 1112 01:02:03,360 --> 01:02:07,120 Speaker 1: driving automobiles that tell us to smile more, 1113 01:02:07,880 --> 01:02:10,960 Speaker 1: because, you know, we already have them.
They try 1114 01:02:10,960 --> 01:02:13,680 Speaker 1: to sort of judge what our, like, state of wakefulness 1115 01:02:13,720 --> 01:02:16,200 Speaker 1: is based on our driving performance, you know, where they'll say, 1116 01:02:16,320 --> 01:02:17,720 Speaker 1: do you need a break? And there'll be, like, a 1117 01:02:17,720 --> 01:02:22,280 Speaker 1: coffee cup symbol, a little pop-up on the dash. 1118 01:02:22,520 --> 01:02:25,400 Speaker 1: It's not that difficult to imagine a scenario where 1119 01:02:25,400 --> 01:02:28,120 Speaker 1: one will, you know, pick up on some very 1120 01:02:28,200 --> 01:02:33,000 Speaker 1: broad signs of displeasure and start chiming in with some advice. 1121 01:02:33,160 --> 01:02:34,960 Speaker 1: I don't know why, but just thinking about this 1122 01:02:35,040 --> 01:02:38,240 Speaker 1: is making me mad. I want to say, go download 1123 01:02:38,280 --> 01:02:42,640 Speaker 1: some malware, computer. You don't know me. Get broken. Well, 1124 01:02:42,640 --> 01:02:44,479 Speaker 1: what if it was more subtle than that? What if 1125 01:02:44,600 --> 01:02:48,320 Speaker 1: your car picked up on some very, you know, 1126 01:02:48,400 --> 01:02:51,080 Speaker 1: overt signs of displeasure? What if your car just told 1127 01:02:51,120 --> 01:02:53,680 Speaker 1: you that it loved you? I think I would fall 1128 01:02:53,800 --> 01:02:56,720 Speaker 1: for that. I would, you know, if it was presented appropriately, 1129 01:02:56,720 --> 01:03:00,960 Speaker 1: I would be like, yes, thank you. Finally. Strap your 1130 01:03:01,000 --> 01:03:04,680 Speaker 1: hands across my engines. All right, that's enough, Bruce. 1131 01:03:05,040 --> 01:03:07,960 Speaker 1: Are we ready to wrap up for today?
Yes. Okay, 1132 01:03:08,000 --> 01:03:09,840 Speaker 1: but I think we will be back with at least 1133 01:03:10,040 --> 01:03:11,959 Speaker 1: one more part, right, where we're going to talk about 1134 01:03:11,960 --> 01:03:15,280 Speaker 1: the history of facial recognition technology and a little more 1135 01:03:15,320 --> 01:03:20,440 Speaker 1: about the modern implications, possible regulation schemes, and stuff like that. Absolutely. 1136 01:03:20,960 --> 01:03:23,360 Speaker 1: In the meantime, certainly, we'd love to hear from anyone 1137 01:03:23,360 --> 01:03:26,600 Speaker 1: out there, because we all have faces, we have some 1138 01:03:26,760 --> 01:03:31,120 Speaker 1: experience with facial recognition and varying levels of 1139 01:03:31,120 --> 01:03:33,480 Speaker 1: facial recognition. I know we've heard from listeners who have, 1140 01:03:34,480 --> 01:03:37,439 Speaker 1: you know, varying degrees of difficulty, or I'd 1141 01:03:37,480 --> 01:03:39,240 Speaker 1: love to hear from someone who thinks they might be 1142 01:03:39,280 --> 01:03:42,680 Speaker 1: a super recognizer, or is, like, what, a verified 1143 01:03:42,720 --> 01:03:45,439 Speaker 1: super recognizer. In the meantime, if you want to listen 1144 01:03:45,440 --> 01:03:47,520 Speaker 1: to other episodes of the show, you can find them 1145 01:03:47,520 --> 01:03:49,840 Speaker 1: wherever you get your podcasts. If you go to Stuff 1146 01:03:49,880 --> 01:03:52,160 Speaker 1: to Blow Your Mind dot com, that will shoot you 1147 01:03:52,240 --> 01:03:54,840 Speaker 1: over to the iHeart listing for this show. But 1148 01:03:54,920 --> 01:03:57,440 Speaker 1: wherever you get the show, make sure that you rate, review, 1149 01:03:57,440 --> 01:03:59,439 Speaker 1: and subscribe. Those are the ways you can help us out. 1150 01:03:59,760 --> 01:04:02,480 Speaker 1: And don't forget about Invention. That's our other show.
That 1151 01:04:02,560 --> 01:04:05,920 Speaker 1: is a journey through human techno-history, and right now 1152 01:04:05,960 --> 01:04:09,280 Speaker 1: we're talking about fire technology over there. We're talking about 1153 01:04:09,960 --> 01:04:14,800 Speaker 1: matches, and also just the massive step 1154 01:04:15,120 --> 01:04:18,280 Speaker 1: forward in human technology that enabled us to not only 1155 01:04:18,360 --> 01:04:21,040 Speaker 1: capture fire, but to re-create it. Huge thanks as 1156 01:04:21,080 --> 01:04:24,480 Speaker 1: always to our excellent audio producer, Seth Nicholas Johnson. If 1157 01:04:24,480 --> 01:04:26,320 Speaker 1: you'd like to get in touch with us with feedback 1158 01:04:26,360 --> 01:04:28,560 Speaker 1: on this episode or any other, to suggest a topic 1159 01:04:28,600 --> 01:04:30,520 Speaker 1: for the future, or just to say hi, you can 1160 01:04:30,600 --> 01:04:33,920 Speaker 1: email us at contact at Stuff to Blow Your Mind 1161 01:04:34,120 --> 01:04:43,600 Speaker 1: dot com. Stuff to Blow Your Mind is a production of 1162 01:04:43,640 --> 01:04:46,120 Speaker 1: iHeartRadio's How Stuff Works. For more podcasts from 1163 01:04:46,160 --> 01:04:48,920 Speaker 1: iHeartRadio, visit the iHeartRadio app, Apple Podcasts, 1164 01:04:49,000 --> 01:05:02,240 Speaker 1: or wherever you listen to your favorite shows.