Hey, welcome to Stuff to Blow Your Mind. My name is Robert Lamb and I'm Joe McCormick, and it's Saturday. Let's head into the vault this week. It is part two of our series on the Kuleshov effect. This episode originally aired in January. I hope you enjoy it.

Welcome to Stuff to Blow Your Mind, a production of iHeartRadio.

Hey, welcome to Stuff to Blow Your Mind. My name is Robert Lamb and I'm Joe McCormick, and we're back with part two of our series on the Kuleshov effect. Now, as I explained last time, this was originally going to be one episode. We ended up splitting it into two, so we're doing a little time traveling right now. This is an out-of-sequence introduction, but I guess from here we'll just jump right back into the middle of our conversation from last time.

Let's do it.

Well, anyway, I wanted to talk about a very interesting paper that analyzed the history and meaning of the Kuleshov effect and then also tried to recreate the Mozzhukhin experiment. So this paper was published in Cinema Journal by Stephen Prince and Wayne E.
Hensley, called "The Kuleshov Effect: Recreating the Classic Experiment," published in nineteen ninety-two. I think both of the authors on this paper were at the time professors at Virginia Tech. Stephen Prince is a film scholar who I know has done a lot of work on Akira Kurosawa. And I'm not going to cover the entire paper, but I just want to note some parts of it that struck me as relevant and interesting.

So they start off by telling the story of the Kuleshov effect experiment: the experiment with the actor Mozzhukhin making the neutral face and then being intercut either with soup or with the woman in the coffin, and the audience raving about how expressive and powerful the emotions in the performance were. Now, one thing they do at the beginning is note some differences in the details of the story that arise from different recountings of it, and so they end up casting doubt on whether the accounts of this experiment are, first of all, historically accurate and, second, analytically valid. And so the authors write, quote:
"The goal here is to provide a clearer contextualization of Kuleshov's work, distinguishing between its incontrovertible importance for an understanding of how cinema communicates and certain of its limitations, especially its incautious merging of theoretical claim and observational assertion. As we will see, Kuleshov may have been right, but perhaps for the wrong reasons."

So the top line of this paper is that they try to recreate the Mozzhukhin experiment as it is usually described, and they do not produce the same result. But this doesn't necessarily mean that the broader implications of the Kuleshov effect are wrong theoretically; it might just mean something about the specific claims about a neutral face.

So they start off talking about Kuleshov's belief in the power of montage and his argument that editing is far more important to the meaning generated by a film than the contents of the shots. So they talk about the Mozzhukhin experiment and then the other things we mentioned, creative geography and creative anatomy, and they describe the general takeaway from the Mozzhukhin experiment as follows. Quote:
"Naturalistic, emotive performances by actors were not considered by Kuleshov to be essential to cinema. Because of the demands of montage, actors were to provide minimal, restrained, and fairly unambiguous gestural and facial expressions." As Kuleshov puts it, quote, "The presence of montage necessitated that the shots should be constructed simply, clearly, distinctly. Otherwise the flickering of a rapid montage would not be sufficient for a full scrutiny of its contents." And then the authors go on: reacting partly against the over-emoting found in some silent films, Kuleshov noted that a preoccupation with psychologism rooted in the actor's performance was, quote, "quite useless for the cinema."

So in a lot of ways, it sounds like Kuleshov kind of wanted to take the acting out of acting. He was like, there's too much psychology in acting. What we need instead is just sort of shots of actors doing kind of plain, unambiguous moments that can then be selected by the editor and inserted in a sequence to make meaning.

Yeah, that reminds me of so many other discussions we've had about performance and direction.
I'm always reminded of that final sequence from Aguirre, the Wrath of God, where you have what ends up being a rather balanced and interesting performance by Klaus Kinski. But apparently that's because Werner Herzog just wore him out, made him do take after take until he wasn't doing a frenzied, overly intense, almost overacted performance. He's not raging; he's actually emoting at the level the director wants, and can therefore be used effectively in the edit.

Yeah, though if that story is true, it may have worked in this case, I want to say I do not necessarily endorse directing by exhaustion.

No. Now, that was a special relationship, obviously. But you often see this brought up, and there's this idea of: is this about the actor and the acting performance? Is it about editing? Is it about the director's vision? And you do often see that sort of push and pull, be it Klaus Kinski and Werner Herzog or Jimmy Stewart and Alfred Hitchcock.
You know, the actor has a certain vision about how things need to be, and then the director maybe has another idea, not only about this particular character and this particular performance, but about how it fits into the overall film, how it fits into the final edit. And so you can imagine somebody going into it with this very Kuleshov idea of: just shoot, all we need is neutral actors, we don't really need any of this emotion one way or another. And, I don't know, there are probably some examples of filmmakers who tend to lean in that direction with very neutral performances.

Yeah, you could almost look at that approach as something that might be more common in, say, music videos and such than in narrative films, though you can probably find it in narrative films as well, where the filming part of the filmmaking process is just sort of creating a bunch of building blocks that can later be used in various arrangements to do whatever the director or editor later decides to do with them.
Yeah, it also reminds me how a lot of the films we watch for Weird House Cinema will sometimes feature non-actors or very, very green actors. But the right sort of non-actor can really excel in a scene if utilized correctly, you know? Not the kind of non-actor who is just really outrageous, but one who is just very neutral, almost barely there at all, and if enough of the other stuff is in the right place, it can really work.

Now, I gotta say, though, as this paper ends up describing Kuleshov's theory of film and montage, I don't think I can agree with what it sounds like Kuleshov's vision actually was, because Kuleshov apparently said things like, "The film shot is not a still photograph. The shot is a sign, a letter for montage." So I think he's saying a still photograph can have meaning on its own, but a shot in a movie is more like a letter in a sentence,
something which does not have meaning on its own, but is combined in sequence to make meaning. That clearly has some truth to it, because, as we've said, editing does constitute a major part of the sense-making or meaning-making of a film. But I think that's also pretty overstated. You know, a lot of meaning lies in the editing, but the contents of the shots also stand alone to a greater extent, and matter a lot more, than Kuleshov was giving them credit for here.

Though, again, to be fair, I think it's important for us not to forget that in the nineteen-teens and early nineteen-twenties, film was still fairly young. Editing was still fairly new in cinema, and its powers were still being discovered. You know, like we've talked about, the very earliest films, from the eighteen-nineties and such, were usually not edited at all. They'd just be one continuous shot. And even after editing was introduced, films of the silent era typically did not have as many cuts as the movies we're used to today.
Furthermore, the authors of this paper argue that a theory comparing film to language is actually not super useful, because there are just a lot of ways in which that comparison doesn't work. Film does things language cannot do: you don't have to learn a language to appreciate the meanings of films. You learn some conventions, but you can just watch a movie and make some sense of it even if you're not familiar with the conventions, whereas to understand a language, you have to learn the language. Meanwhile, language does things that film can't do: photographic images used in a film cannot be recombined freely to make endless meanings the way the words of a language can.
There's also an interesting digression in this paper about Kuleshov being influenced by the ideology of industrial efficiency on the model of the American engineer Frederick Taylor, who was a big proponent of finding ways to make production processes in factories more efficient, finding all the places where waste and problems creep in and eliminating those. Taylor's ideas of industrial efficiency were apparently very popular in the Soviet Union at the time, and in a way, the authors say, you could view Kuleshov's emphasis on economy in acting as a type of industrial efficiency technique applied to film theory.

Yeah, and based on what I was reading, it does seem like a lot of his work was grounded in: let's figure out what's working, and then how can we do that? What is the most economic means of making effective film?
Now, ultimately, Prince and Hensley make the case that Kuleshov really was trying to dress up his theoretical convictions about how film works with the imprimatur of empirical science, with this alleged experiment, the Mozzhukhin experiment, and I think I'm pretty convinced by their description of it that way.

I think this is something you've always got to be cautious of, because, obviously, I don't object in principle to exploring or building upon artistic theories with empirical methods. But I would also say, my personal opinion is that a lot of these efforts to inject scientific methods into aesthetics and art and stuff can be confusing and unnecessary. Like, I don't think you have to have an empirical scientific justification for an opinion about where meaning comes from in art or in film. Obviously, I'm a huge believer in empirical science. I just don't think it has to pervade every domain. Aesthetics and art don't necessarily need scientific evidence and theories behind them; those fields just work by different standards.
And I think also, a lot of times, if you try to generate empirical scientific justifications for your beliefs about art or aesthetics or whatever, you're often just gonna end up doing sloppy experiments, or drawing unjustified conclusions even if you do a good one.

Yeah. Like, I'm reminded of the fact that there's obviously such a thing as outsider art and outsider cinema, and examples of outsider art and outsider cinema can be amazing, you know. And on the other side of things, you don't hear as much about, say, outsider architecture or outsider structural engineering, things of this nature. Outsider medicine is probably, you know, best avoided if you can, no matter how it's being dressed up.

Well, I mean, I think empirical methods are good for fields in which you are trying to achieve very clearly specified goals, certain kinds of outcomes, and get them as reliably as possible. And empirical methods are less important in fields where you're just trying to be expressive or creative and see what kind of emergent results come out.
But this turns my mind to, like, A/B testing and focus groups used in film and television. You know, not necessarily a bad idea at all, especially when you're dealing, again, with a very mainstream product that you want to appeal to a wide population of individuals. But, you know, there are plenty of arguments to be made about it as a potentially sloppy experiment, as you say. Perhaps one of the best critiques of all of this is that episode of The Simpsons, "The Itchy & Scratchy & Poochie Show."

One of my favorites. It's just an old, creaky mirror. Sometimes it sounds like it's coughing or talking softly.

Yes. But anyway, to come back to Prince and Hensley's description of methodological problems with the common descriptions of Kuleshov's alleged experiment, the Mozzhukhin experiment with the neutral face and the soup and the coffin and stuff: they list a bunch of questions. They say, quote, "For such a seminal and basically uncontested study, there is virtually no information available about Kuleshov's actual method and procedure."
"Did he, for example, interview the subjects individually or in a group? What did he tell them beforehand about the purpose of the presentation? What, if anything, did he tell them about the nature of film editing or montage? What was the frequency of outlier opinions, e.g., people who did not think Mozzhukhin was saddened by the dead woman? Published accounts suggest the responses were uniform. Was this so?" Unfortunately, we do not know the answers to any of these questions.

So, given these limitations, they attempt to recreate, and replicate as best they can, the conditions of the original experiment to see if they get the same result. So what they did was put together a videotape. They held auditions for actors to produce a close-up shot of a face that was just totally neutral and expressionless, and they had to go through a couple of rounds, because in the first round the actor's neutral face was not perceived as neutral enough by the control group. But so they got a neutral face on video, and they did the same thing.
They intercut it with a woman lying in a coffin, a girl playing with a teddy bear, and a bowl of soup on a table, and they tried as best they could to follow Kuleshov's cues about what the best cinematography techniques for making this work would be, so it would be people visible against a darkened black velvet background. Apparently the actors were told that they were just needed to model for an instructional video in which they would be required to do an expressionless or neutral face.

So one difference is that instead of one long sequence intercutting all of them, they did separate sequences for each reaction. So, for example, it might go face, soup, face, fade out; or face, coffin, face, fade out. And each shot was seven seconds long. The separate sequences make sense to me, because you might get a different reaction with some pairings than you would with others. So viewers each saw one sequence selected at random, and they were told that the experimenters needed help evaluating an acting performance.
And then the viewers were supposed to select from a list of emotions that they thought were being portrayed by the actor. Options included happiness, sadness, anger, fear, surprise, disgust, hunger, no emotion, and other. Apparently the participants were undergrads at a mid-Atlantic university; I'm going to assume, based on the authors' affiliations, that this was probably Virginia Tech. They said that, interestingly, film students were excluded from the experiment, since they might detect the connection to Kuleshov and understand what the experiment was getting at, which could bias the results. And in support of this decision (I mean, it seems like a good choice either way), they wrote about another recent attempt to replicate the Mozzhukhin experiment in France among film students, who allegedly gave answers like the following: quote, "We know that the man does not change his expression, but because of the montage, we think we see him change," or, quote, "We know the Kuleshov effect and it works."
And then Prince and Hensley also had a control condition, where they showed the face to twenty-four film students, this time without any intercutting. They were just showing them the face by itself and asking them what emotion it was showing. For the face that they actually used in the experiment, most said there was no emotion on the face. So this is a very good neutral face.

You know, that reminds me, though, of the use of the neutral face, not in still pictures but in scene sequences where a character, an individual, is staring directly into the camera. I'm thinking certainly of Ron Fricke's Baraka, which features a number of these sequences where you'll just have an individual from one culture or another staring into the camera. Or another example that comes to mind is the film The Mission, where at the very end of the film you have several beats of one of the primary characters staring into the camera with a very neutral expression.
And of course you have the entire film you've just watched to help inform your idea of what's going through that character's head. But still, it's a great use of neutral expression. Like, he doesn't look particularly sad in that case, but you can see sadness in the character, you know.

Well, yeah, that's a good example, but I think it also raises questions about something that's supposed to be sort of outside the standard interpretation of this experiment, which is: well, wait, what are the actual contents of the face? Maybe that does matter. That's going to come up in the authors' interpretation of the results they get.

But so, in the actual experiment they did, they had a hundred and thirty-seven participants, including the control group and the experimental group. In every condition, whether it was soup, coffin, or child, the majority of people said there was no emotion.
323 00:19:05,119 --> 00:19:07,760 Speaker 1: So they saw the face that was supposedly neutral, they 324 00:19:07,760 --> 00:19:10,280 Speaker 1: saw it intercut with whatever it was, the soup or 325 00:19:10,320 --> 00:19:13,320 Speaker 1: the coffin, and they said, nope, there is no emotion 326 00:19:13,400 --> 00:19:16,800 Speaker 1: on this face. In the soup condition, sixty eight percent 327 00:19:16,880 --> 00:19:21,880 Speaker 1: selected no emotion. In both the child and the coffin condition, 328 00:19:22,000 --> 00:19:25,520 Speaker 1: sixty one percent said no emotion. And so comparing that 329 00:19:25,560 --> 00:19:29,480 Speaker 1: to the control group, in the control eight percent said 330 00:19:29,480 --> 00:19:31,760 Speaker 1: there was no emotion, and that dropped down to sixty 331 00:19:31,840 --> 00:19:35,880 Speaker 1: eight in the soup and sixty one in the child 332 00:19:35,960 --> 00:19:38,520 Speaker 1: and the coffin. So you could say this is a 333 00:19:38,600 --> 00:19:42,000 Speaker 1: small increase in perceived emotion, though the authors note that 334 00:19:42,080 --> 00:19:44,080 Speaker 1: for the size of the group they tested, it actually 335 00:19:44,080 --> 00:19:47,200 Speaker 1: doesn't reach statistical significance, so it might just be a 336 00:19:47,280 --> 00:19:51,160 Speaker 1: random fluke. Furthermore, in the cases where the viewers picked 337 00:19:51,160 --> 00:19:55,280 Speaker 1: an emotion, it was usually not the expected emotion, so 338 00:19:55,320 --> 00:19:58,119 Speaker 1: it was not happiness for the child and so forth. 
339 00:19:58,560 --> 00:20:02,080 Speaker 1: So either way, this experiment finds something somewhere between no 340 00:20:02,280 --> 00:20:06,080 Speaker 1: effect and a small effect on perceived emotion, which is a 341 00:20:06,160 --> 00:20:08,760 Speaker 1: very far cry either way from Kuleshov's reports about the 342 00:20:08,760 --> 00:20:13,440 Speaker 1: audience's unanimous raving about the actor's subtle emotional performances. 343 00:20:14,200 --> 00:20:16,080 Speaker 1: And so the authors say here that, you know, unless 344 00:20:16,160 --> 00:20:19,119 Speaker 1: contrary evidence emerges, it seems fair to say that 345 00:20:19,200 --> 00:20:24,320 Speaker 1: quote, the Kuleshov effect as reported no longer exists, even 346 00:20:24,400 --> 00:20:27,040 Speaker 1: if the effect did play a role at one time. 347 00:20:27,520 --> 00:20:30,320 Speaker 1: Though emphasis there should be on as reported, because some 348 00:20:30,359 --> 00:20:34,600 Speaker 1: of the broader implications of it probably do still hold true. Now, 349 00:20:34,920 --> 00:20:38,040 Speaker 1: this raises an interesting question. If we assume, for the 350 00:20:38,040 --> 00:20:41,920 Speaker 1: sake of argument, that Kuleshov was basically reporting the results 351 00:20:41,920 --> 00:20:46,480 Speaker 1: of his experiment accurately, or with only slight exaggeration, what 352 00:20:46,600 --> 00:20:49,720 Speaker 1: could account for the difference? Why did Kuleshov get people 353 00:20:49,840 --> 00:20:53,080 Speaker 1: raving about the subtle emotion in the neutral face, but 354 00:20:53,280 --> 00:20:56,159 Speaker 1: that didn't really happen in a modern experiment? The 355 00:20:56,200 --> 00:20:58,320 Speaker 1: authors offer some ideas here, and I think they're all 356 00:20:58,440 --> 00:21:02,880 Speaker 1: pretty plausible and certainly interesting. So one would be 357 00:21:03,440 --> 00:21:08,159 Speaker 1: changes in audience expectation. 
You know, audiences today are accustomed 358 00:21:08,200 --> 00:21:11,840 Speaker 1: to highly effective editing techniques that have been perfected over time, 359 00:21:11,920 --> 00:21:15,360 Speaker 1: such as, like I mentioned earlier, the preservation of eyelines 360 00:21:15,480 --> 00:21:19,760 Speaker 1: to enforce continuity of perspective across shots and reverse shots. Yeah, yeah, 361 00:21:19,800 --> 00:21:21,520 Speaker 1: I think this is a big one. And 362 00:21:21,640 --> 00:21:23,240 Speaker 1: I mean, it comes down to some of the 363 00:21:23,240 --> 00:21:25,920 Speaker 1: basics of what we said earlier about how, at least 364 00:21:25,920 --> 00:21:28,000 Speaker 1: for many of us and certainly for me, trying 365 00:21:28,000 --> 00:21:31,760 Speaker 1: to watch an actual Kuleshov film is very difficult. 366 00:21:31,800 --> 00:21:35,080 Speaker 1: Like, film has evolved so much 367 00:21:35,160 --> 00:21:39,080 Speaker 1: since then, and the effects are subtle in 368 00:21:39,119 --> 00:21:42,480 Speaker 1: a way that, really, a film only has to 369 00:21:42,520 --> 00:21:45,680 Speaker 1: be even halfway competent to just draw you in 370 00:21:45,920 --> 00:21:49,360 Speaker 1: and create the illusion. Right. So the authors 371 00:21:49,440 --> 00:21:51,719 Speaker 1: write, quote, it may be that a modern audience, by 372 00:21:51,800 --> 00:21:55,560 Speaker 1: virtue of increased media exposure relative to Kuleshov's day, 373 00:21:55,680 --> 00:21:59,480 Speaker 1: has become accustomed to a more systematic and complex set 374 00:21:59,520 --> 00:22:02,679 Speaker 1: of associational cues, such as those supplied by the 375 00:22:02,720 --> 00:22:07,000 Speaker 1: continuity system of editing, 
and is correspondingly less likely to 376 00:22:07,040 --> 00:22:10,359 Speaker 1: respond to a montage sequence that employs a blank face 377 00:22:10,400 --> 00:22:14,480 Speaker 1: and minimal, if any, associational cues within shots. So maybe 378 00:22:14,480 --> 00:22:19,320 Speaker 1: the bar for perceiving emotion in films has gone up, 379 00:22:19,400 --> 00:22:21,959 Speaker 1: you know; it's just harder to do now. And at 380 00:22:22,000 --> 00:22:26,520 Speaker 1: the time Kuleshov allegedly did his experiment, maybe it 381 00:22:26,600 --> 00:22:30,399 Speaker 1: was just easier for audiences 382 00:22:30,480 --> 00:22:33,320 Speaker 1: to project that emotion. Now, there could be 383 00:22:33,359 --> 00:22:35,359 Speaker 1: a number of ways to read that. One way 384 00:22:35,359 --> 00:22:38,840 Speaker 1: is thinking about how much exposure modern audiences have to 385 00:22:38,960 --> 00:22:43,000 Speaker 1: modern editing techniques. The other way, I guess, and 386 00:22:43,280 --> 00:22:46,199 Speaker 1: the authors don't really favor this explanation, but they say 387 00:22:46,240 --> 00:22:48,600 Speaker 1: another way of looking at it is naivete on the 388 00:22:49,040 --> 00:22:51,560 Speaker 1: part of the early audiences, some kind of projection 389 00:22:51,600 --> 00:22:55,120 Speaker 1: going on, because maybe early film audiences were just so 390 00:22:55,200 --> 00:22:59,560 Speaker 1: bewildered by moving pictures that they almost, like, hallucinated projections 391 00:22:59,560 --> 00:23:02,560 Speaker 1: of emotion. 
And the authors don't think this is 392 00:23:02,600 --> 00:23:05,800 Speaker 1: a very good explanation, for one thing, because they argue 393 00:23:05,880 --> 00:23:08,400 Speaker 1: that a lot of the stories that are used 394 00:23:08,400 --> 00:23:12,159 Speaker 1: to illustrate the sort of bewilderment of early film audiences, 395 00:23:12,240 --> 00:23:14,880 Speaker 1: like, you know, the semi-mythological things about 396 00:23:14,880 --> 00:23:17,880 Speaker 1: audiences running away from the Lumière train and stuff... 397 00:23:17,880 --> 00:23:20,560 Speaker 1: they say that, I mean, there were sort of 398 00:23:20,600 --> 00:23:24,119 Speaker 1: events of this kind, but they have been mythologized in 399 00:23:24,160 --> 00:23:28,280 Speaker 1: a way that overemphasizes how naive early audiences were, 400 00:23:28,359 --> 00:23:30,639 Speaker 1: and that a lot of these kinds of reactions may 401 00:23:30,640 --> 00:23:33,480 Speaker 1: have just been audiences playing along. They're at the theater 402 00:23:33,640 --> 00:23:36,240 Speaker 1: having a good time, and they're playing along with what 403 00:23:36,320 --> 00:23:39,520 Speaker 1: the suggested reaction was supposed to be. That's true, 404 00:23:39,560 --> 00:23:41,879 Speaker 1: especially when you're dealing with a group of people, 405 00:23:41,960 --> 00:23:45,080 Speaker 1: you know. Watching anything with a group, even 406 00:23:45,080 --> 00:23:48,359 Speaker 1: today with our modern exposure to cinema, you know, 407 00:23:48,480 --> 00:23:51,040 Speaker 1: if one person jumps, everybody can jump, that sort of thing. 408 00:23:51,119 --> 00:23:53,639 Speaker 1: You know, maybe you're more likely to 409 00:23:53,720 --> 00:23:56,320 Speaker 1: laugh or scream if you're watching it with other people. 
410 00:23:56,400 --> 00:23:59,399 Speaker 1: That sort of thing makes me think about William Castle 411 00:23:59,440 --> 00:24:01,240 Speaker 1: and The Tingler, trying to get people screaming in 412 00:24:01,280 --> 00:24:06,600 Speaker 1: the movie theaters. Yeah, yeah, which is infectious. 413 00:24:06,640 --> 00:24:08,800 Speaker 1: As I think I mentioned in that Tingler episode, I 414 00:24:08,840 --> 00:24:11,640 Speaker 1: got to see The Tingler in a theater, and 415 00:24:11,840 --> 00:24:14,639 Speaker 1: people were totally playing into it. Like, it still worked today, 416 00:24:15,040 --> 00:24:18,000 Speaker 1: so good. Okay. A couple of other possible explanations for 417 00:24:18,000 --> 00:24:21,159 Speaker 1: the difference between Kuleshov's report and the 418 00:24:21,200 --> 00:24:25,520 Speaker 1: failed attempt to replicate those findings. Another one is response bias. 419 00:24:25,560 --> 00:24:28,960 Speaker 1: So this seems quite possible to me. Maybe it was 420 00:24:29,000 --> 00:24:33,960 Speaker 1: originally a sloppy experiment. Maybe Kuleshov primed his test subjects 421 00:24:34,000 --> 00:24:37,320 Speaker 1: to react the way they did and they complied. 422 00:24:37,359 --> 00:24:39,400 Speaker 1: You know, this is why double-blind tests are 423 00:24:39,520 --> 00:24:43,880 Speaker 1: very useful. If the person administering the test doesn't know 424 00:24:43,960 --> 00:24:47,399 Speaker 1: what hypothesis is being tested, it's harder for them to 425 00:24:47,440 --> 00:24:50,359 Speaker 1: behave in a way that would bias 426 00:24:50,440 --> 00:24:53,600 Speaker 1: the subjects' responses in favor of it. 
And there is, 427 00:24:53,640 --> 00:24:57,359 Speaker 1: of course, extensive evidence that Kuleshov was already committed to 428 00:24:57,480 --> 00:25:00,399 Speaker 1: his theory about the power of montage before he 429 00:25:00,440 --> 00:25:04,160 Speaker 1: allegedly conducted this experiment. Like, he already had the 430 00:25:04,200 --> 00:25:07,440 Speaker 1: result he was looking for in mind. Yeah. Like, the 431 00:25:07,720 --> 00:25:11,120 Speaker 1: neutral face... I keep thinking of examples now of neutral-face 432 00:25:11,160 --> 00:25:14,080 Speaker 1: or just, you know, low-key 433 00:25:14,160 --> 00:25:17,640 Speaker 1: acting performances, and one that instantly comes to mind 434 00:25:18,119 --> 00:25:21,280 Speaker 1: is the sequence in The Godfather where Al Pacino's character 435 00:25:21,440 --> 00:25:25,520 Speaker 1: is in the restaurant with, what is it, the 436 00:25:25,680 --> 00:25:30,040 Speaker 1: corrupt police officer, Sterling Hayden, and, yeah, the Turk, 437 00:25:30,320 --> 00:25:32,920 Speaker 1: right, that's the other character. 438 00:25:32,960 --> 00:25:36,199 Speaker 1: And of course what's gonna happen is he's 439 00:25:36,200 --> 00:25:37,800 Speaker 1: gonna go to the toilet, he's gonna come back with 440 00:25:37,840 --> 00:25:39,480 Speaker 1: a gun, and then he's going to shoot them both. 441 00:25:39,520 --> 00:25:42,040 Speaker 1: That's the plan. 
And there's that great sequence where you 442 00:25:42,080 --> 00:25:44,720 Speaker 1: see Al Pacino's face, and he has, again, 443 00:25:44,800 --> 00:25:48,159 Speaker 1: a very neutral expression. And I previously just always thought, well, 444 00:25:48,200 --> 00:25:50,480 Speaker 1: he was such a great actor 445 00:25:50,520 --> 00:25:52,640 Speaker 1: at that point in his career, like you can 446 00:25:52,680 --> 00:25:55,280 Speaker 1: just see the wheels turning, you can see all the 447 00:25:55,320 --> 00:25:58,240 Speaker 1: tension going on behind the scenes. But maybe not. Maybe 448 00:25:58,280 --> 00:26:01,320 Speaker 1: he's just thinking about, you know, what he 449 00:26:01,320 --> 00:26:02,960 Speaker 1: needs to pick up at the grocery store later on 450 00:26:03,000 --> 00:26:05,400 Speaker 1: in the day, and it's just all about everything else 451 00:26:05,440 --> 00:26:07,120 Speaker 1: going on in the scene and how it's been put 452 00:26:07,160 --> 00:26:09,480 Speaker 1: together. That could be. There are actually a number of 453 00:26:09,480 --> 00:26:13,760 Speaker 1: shots in The Godfather in particular that are memorable because 454 00:26:13,920 --> 00:26:18,280 Speaker 1: of Al Pacino's expressionless face, like when Carlo Rizzi 455 00:26:18,480 --> 00:26:22,600 Speaker 1: confesses at the end to having killed Sonny, and Michael 456 00:26:22,640 --> 00:26:25,200 Speaker 1: just looks at him with the blank expression. But you 457 00:26:25,280 --> 00:26:27,879 Speaker 1: read a lot into that blank expression. It is a 458 00:26:27,960 --> 00:26:31,800 Speaker 1: murderous blank expression. But there's another way of reading the 459 00:26:31,840 --> 00:26:36,480 Speaker 1: Al Pacino example here, and also of possibly interpreting the 460 00:26:36,480 --> 00:26:41,119 Speaker 1: original Mozzhukhin experiment. I really like this explanation. 
What if 461 00:26:41,160 --> 00:26:45,680 Speaker 1: Kuleshov's montage was loaded with more conventional emotional content than 462 00:26:45,720 --> 00:26:48,680 Speaker 1: he claimed? There could be a million ways this could 463 00:26:48,720 --> 00:26:51,760 Speaker 1: be true. But, for example, what if there was something 464 00:26:51,880 --> 00:26:55,879 Speaker 1: special about the face of Mozzhukhin? What if 465 00:26:55,880 --> 00:26:59,800 Speaker 1: the face that Kuleshov used in this 466 00:27:00,000 --> 00:27:03,800 Speaker 1: supposedly neutral test film was less neutral than we 467 00:27:03,800 --> 00:27:06,840 Speaker 1: would be led to believe? The authors of this nineteen 468 00:27:06,880 --> 00:27:09,680 Speaker 1: ninety-two paper note, quote, there is a difference between an 469 00:27:09,720 --> 00:27:14,639 Speaker 1: expressionless face and an ambiguous expression. And they relate an 470 00:27:14,680 --> 00:27:17,320 Speaker 1: experience from their own experiment. They talked about how the 471 00:27:17,400 --> 00:27:19,919 Speaker 1: very first tape they created, of somebody trying to do 472 00:27:19,960 --> 00:27:23,159 Speaker 1: a neutral face, had to be rejected and replaced with 473 00:27:23,200 --> 00:27:26,320 Speaker 1: a different actor, because it failed to be rated as 474 00:27:26,400 --> 00:27:29,280 Speaker 1: neutral in the control condition. So that was the control: 475 00:27:29,320 --> 00:27:33,000 Speaker 1: when there were no shots juxtaposed, the control group thought 476 00:27:33,040 --> 00:27:36,200 Speaker 1: they perceived a range of emotions in the first neutral 477 00:27:36,280 --> 00:27:38,560 Speaker 1: face they looked at, and then the authors got a 478 00:27:38,600 --> 00:27:41,960 Speaker 1: different tape, different actor, and it succeeded at being perceived 479 00:27:42,000 --> 00:27:45,240 Speaker 1: as more neutral in the control condition. 
This is great 480 00:27:45,280 --> 00:27:47,800 Speaker 1: to point out. Yeah, the difference between a neutral face 481 00:27:47,800 --> 00:27:51,560 Speaker 1: and an ambiguous face. Because obviously this is one of 482 00:27:51,560 --> 00:27:54,879 Speaker 1: the arguments for why the Mona Lisa by Leonardo da 483 00:27:54,920 --> 00:27:59,120 Speaker 1: Vinci is such an admired piece of art: 484 00:27:59,560 --> 00:28:03,600 Speaker 1: because you can't easily read what the Mona Lisa 485 00:28:03,800 --> 00:28:07,080 Speaker 1: is thinking or feeling, because she has 486 00:28:07,119 --> 00:28:10,119 Speaker 1: this ambiguous countenance. Right, and the difference would be that 487 00:28:10,160 --> 00:28:13,280 Speaker 1: there is a difference between ambiguous and neutral. Neutral 488 00:28:13,600 --> 00:28:15,720 Speaker 1: is something we look at and we say, I 489 00:28:15,800 --> 00:28:19,760 Speaker 1: don't see any emotion on that face. Ambiguous is, you 490 00:28:19,800 --> 00:28:22,320 Speaker 1: see emotion, but it's not clear what it is. It 491 00:28:22,400 --> 00:28:26,520 Speaker 1: maybe suggests something that could go in different directions. Oh, 492 00:28:26,560 --> 00:28:29,879 Speaker 1: but then the authors come back to talking about this 493 00:28:30,000 --> 00:28:33,800 Speaker 1: more ambiguous, more emotional face that they got the first 494 00:28:33,800 --> 00:28:36,320 Speaker 1: time they tried to record a tape. They said, quote, 495 00:28:36,320 --> 00:28:39,160 Speaker 1: when other viewers were shown this face in sequence, many 496 00:28:39,200 --> 00:28:42,680 Speaker 1: attributed a wide range of emotions to the actor, some 497 00:28:42,800 --> 00:28:46,520 Speaker 1: consistent with the Kuleshov effect, others not. 
The sequence with 498 00:28:46,560 --> 00:28:54,320 Speaker 1: the soup, for example, elicited interpretations of apathy, disgust, contemplation, detachment, dislike, indifference, 499 00:28:54,600 --> 00:28:57,920 Speaker 1: lack of interest, as well as an occasional attribution of hunger. 500 00:28:58,600 --> 00:29:02,000 Speaker 1: The ambiguous expression seemed to offer a stronger interpretive 501 00:29:02,040 --> 00:29:05,520 Speaker 1: cue for the viewer than did the expressionless face. If 502 00:29:05,560 --> 00:29:09,000 Speaker 1: Kuleshovian montage may not be capable of making 503 00:29:09,040 --> 00:29:12,600 Speaker 1: an expressionless face emotive, it may very well do so with 504 00:29:12,680 --> 00:29:16,880 Speaker 1: an ambiguous expression, since the objects like soup, coffin, or 505 00:29:17,000 --> 00:29:22,080 Speaker 1: child provide a context for resolving the ambiguity. And I 506 00:29:22,160 --> 00:29:26,479 Speaker 1: think this interpretation seems very likely to me, because again, 507 00:29:26,960 --> 00:29:31,560 Speaker 1: the allegation is that Mozzhukhin was a famed actor, and 508 00:29:31,760 --> 00:29:35,360 Speaker 1: so naturally, you can imagine a famed actor's face 509 00:29:35,480 --> 00:29:39,160 Speaker 1: has something special about it, kind of brimming with 510 00:29:39,320 --> 00:29:43,320 Speaker 1: the implication of emotion, even when they're being relatively 511 00:29:43,360 --> 00:29:46,120 Speaker 1: subtle and not, you know, offering a big smile or 512 00:29:46,160 --> 00:29:49,400 Speaker 1: frown or whatever. Right, right. This may well have 513 00:29:49,440 --> 00:29:53,239 Speaker 1: been the sort of performer that was highly aware of 514 00:29:53,280 --> 00:29:55,000 Speaker 1: what their face is doing. 
That is, you know, who 515 00:29:55,160 --> 00:29:57,880 Speaker 1: has practiced in front of the mirror, who knows what 516 00:29:57,920 --> 00:30:02,000 Speaker 1: they're projecting, and therefore, to a certain extent, 517 00:30:02,120 --> 00:30:04,640 Speaker 1: might be incapable of a neutral face, at least 518 00:30:05,360 --> 00:30:08,240 Speaker 1: when told to pull some sort of face. Right. 519 00:30:08,280 --> 00:30:11,120 Speaker 1: So if there's something to this interpretation, I would say 520 00:30:11,160 --> 00:30:13,719 Speaker 1: that the Kuleshov effect, even in the specific 521 00:30:13,800 --> 00:30:17,200 Speaker 1: case of interpreting neutral faces based on 522 00:30:17,320 --> 00:30:22,160 Speaker 1: the editing context, is absolutely tapping into something real, 523 00:30:22,400 --> 00:30:26,120 Speaker 1: but there might be, like, thresholds or limits. Like, there 524 00:30:26,240 --> 00:30:28,600 Speaker 1: is some truth to it, but it can't overcome some 525 00:30:28,760 --> 00:30:34,520 Speaker 1: truly, deeply, blandly neutral faces. You know, like, some ambiguous 526 00:30:34,520 --> 00:30:38,680 Speaker 1: faces just offer more hooks on which to hang emotional 527 00:30:38,760 --> 00:30:43,520 Speaker 1: values created by the context. Yeah. Yeah, I also wonder 528 00:30:44,120 --> 00:30:47,160 Speaker 1: what would happen if you took exceptional 529 00:30:47,200 --> 00:30:49,200 Speaker 1: faces and you threw them in, you know, and not 530 00:30:49,240 --> 00:30:53,840 Speaker 1: necessarily even exceptionally dashing faces, but just exceptionally evocative faces, 531 00:30:53,920 --> 00:30:55,760 Speaker 1: like, I don't know, like a Peter Lorre. 
You know, 532 00:30:56,320 --> 00:30:58,480 Speaker 1: if you put Peter Lorre in there, even if, 533 00:30:58,520 --> 00:31:00,280 Speaker 1: you know, he's gonna do a neutral, 534 00:31:00,360 --> 00:31:03,400 Speaker 1: ambiguous face, you know, what would happen to the 535 00:31:03,400 --> 00:31:05,680 Speaker 1: experiment? Of course, in that case, you'd also have to 536 00:31:05,840 --> 00:31:08,920 Speaker 1: not know it was Peter Lorre, because then you're 537 00:31:08,960 --> 00:31:11,160 Speaker 1: gonna start typecasting, like, oh, we know what kind 538 00:31:11,200 --> 00:31:14,840 Speaker 1: of guys this actor plays. Yeah, you'd be suspicious; 539 00:31:14,840 --> 00:31:18,920 Speaker 1: you'd be reading negative emotional or suspicious-mind content. What 540 00:31:19,080 --> 00:31:21,080 Speaker 1: is he planning for that soup? He's going to poison 541 00:31:21,120 --> 00:31:24,320 Speaker 1: that soup, isn't he? Right. Anyway, I think the authors 542 00:31:24,360 --> 00:31:26,280 Speaker 1: make the point in the end that the broader 543 00:31:26,320 --> 00:31:28,840 Speaker 1: implication of the Kuleshov myth, that 544 00:31:28,920 --> 00:31:32,760 Speaker 1: individual shots, which may be low on meaning or emotion 545 00:31:32,840 --> 00:31:36,200 Speaker 1: by themselves, can become highly charged with meaning by the 546 00:31:36,240 --> 00:31:39,800 Speaker 1: power of the surrounding editing... this is obviously true, and 547 00:31:39,880 --> 00:31:43,080 Speaker 1: it is largely the basis for the magic of cinema. 548 00:31:43,760 --> 00:31:47,600 Speaker 1: But the specific claim about supposedly neutral faces appears to 549 00:31:47,640 --> 00:31:51,560 Speaker 1: be not true, at least for some audiences or some faces. 
550 00:31:52,200 --> 00:31:55,520 Speaker 1: But this raises really interesting questions, like, what are the 551 00:31:55,560 --> 00:32:00,320 Speaker 1: properties of the maximally Kuleshovian ambiguous face? You know, what 552 00:32:00,320 --> 00:32:02,480 Speaker 1: kind of skills would you want an actor to 553 00:32:02,560 --> 00:32:06,640 Speaker 1: have to be able to produce these, you know, subtle, 554 00:32:06,680 --> 00:32:12,280 Speaker 1: ambiguous expressions that can be sort of driven any which way 555 00:32:12,360 --> 00:32:14,920 Speaker 1: by the surrounding context, by a bowl of soup or 556 00:32:14,960 --> 00:32:18,000 Speaker 1: by a coffin? I guess, you know, I'm just guessing here, 557 00:32:18,000 --> 00:32:20,040 Speaker 1: but at the bare minimum, you need to have some 558 00:32:20,120 --> 00:32:23,560 Speaker 1: sort of, like, spark of attention. Like they're saying, it's 559 00:32:23,600 --> 00:32:27,080 Speaker 1: perhaps not enough to just rely solely on the 560 00:32:27,240 --> 00:32:29,760 Speaker 1: editing to imply that there's a connection between this shot 561 00:32:29,800 --> 00:32:33,160 Speaker 1: and the other; the person's face has to appear to be 562 00:32:33,320 --> 00:32:36,400 Speaker 1: looking with interest at something, you know. Yeah, that's 563 00:32:36,400 --> 00:32:38,760 Speaker 1: a good point. I mean, I think sometimes with these studies, 564 00:32:38,800 --> 00:32:42,280 Speaker 1: like, the face doesn't just look neutral. It looks like 565 00:32:42,360 --> 00:32:45,680 Speaker 1: it's not seeing anything, right? Like, if it's just, like, a 566 00:32:45,760 --> 00:32:49,000 Speaker 1: mug shot and then, like, a plate of spaghetti, 567 00:32:49,080 --> 00:32:50,520 Speaker 1: like, okay, you showed me a mug shot, and you 568 00:32:50,560 --> 00:32:53,719 Speaker 1: showed me some spaghetti. 
Maybe something that's crucial is that, 569 00:32:53,920 --> 00:32:57,280 Speaker 1: even if they're not showing a very clear emotion, 570 00:32:57,360 --> 00:33:00,600 Speaker 1: it looks like they're looking at whatever is 571 00:33:00,640 --> 00:33:11,680 Speaker 1: being shown. So the Prince and Hensley study is very interesting, but it 572 00:33:11,760 --> 00:33:14,640 Speaker 1: was by no means the last study on the Kuleshov 573 00:33:14,640 --> 00:33:17,120 Speaker 1: effect, the last attempt to look at it empirically. 574 00:33:17,680 --> 00:33:20,200 Speaker 1: And actually, since then, some other studies have kind of 575 00:33:20,400 --> 00:33:22,560 Speaker 1: come back on the other side, have found a little 576 00:33:22,600 --> 00:33:27,640 Speaker 1: more support for the original alleged finding. So one example 577 00:33:28,040 --> 00:33:31,680 Speaker 1: is the study by Dean Mobbs et al. 578 00:33:31,760 --> 00:33:35,080 Speaker 1: from two thousand six called The Kuleshov Effect: The 579 00:33:35,120 --> 00:33:38,840 Speaker 1: Influence of Contextual Framing on Emotional Attributions. This was in 580 00:33:38,960 --> 00:33:42,640 Speaker 1: Social Cognitive and Affective Neuroscience, and the test here was 581 00:33:42,680 --> 00:33:45,760 Speaker 1: a little bit different, but they did basically look for 582 00:33:45,880 --> 00:33:49,080 Speaker 1: the same type of effect and did succeed in producing 583 00:33:49,120 --> 00:33:52,760 Speaker 1: it experimentally. So in this case, they didn't use just 584 00:33:53,000 --> 00:33:58,000 Speaker 1: a single supposedly neutral face as the stimulus. 
They used 585 00:33:58,200 --> 00:34:01,680 Speaker 1: neutral faces and then what they call faces displaying subtly 586 00:34:01,960 --> 00:34:06,200 Speaker 1: fearful or happy facial expressions, which, if you want to 587 00:34:06,200 --> 00:34:08,160 Speaker 1: look up the study, you can see the stimuli they 588 00:34:08,239 --> 00:34:11,560 Speaker 1: used. Yeah, they're faces that are 589 00:34:11,680 --> 00:34:14,880 Speaker 1: almost neutral; they've just got the barest little hint of 590 00:34:14,920 --> 00:34:18,120 Speaker 1: a smile or kind of an apprehensive frown. And then 591 00:34:18,120 --> 00:34:20,640 Speaker 1: they put together a task, and they actually 592 00:34:20,640 --> 00:34:23,640 Speaker 1: paired it with neuroimaging in the study. So they'd have 593 00:34:23,719 --> 00:34:27,160 Speaker 1: people undergoing neuroimaging while they gave them the task to 594 00:34:28,000 --> 00:34:30,440 Speaker 1: look at this face and then imagine that the person 595 00:34:30,719 --> 00:34:33,440 Speaker 1: is watching a movie of various kinds. It could be 596 00:34:33,480 --> 00:34:37,000 Speaker 1: a happy movie scene or a scary movie scene. 597 00:34:37,040 --> 00:34:40,080 Speaker 1: And they did find that people were, on average, more 598 00:34:40,160 --> 00:34:44,280 Speaker 1: likely to interpret neutral or only very subtly expressive faces 599 00:34:44,840 --> 00:34:47,480 Speaker 1: more in alignment with the emotion that you would expect 600 00:34:47,560 --> 00:34:50,560 Speaker 1: if they believed the person was watching either a scary 601 00:34:50,680 --> 00:34:52,960 Speaker 1: or a happy movie. And so it's worth noting that 602 00:34:53,000 --> 00:34:55,799 Speaker 1: there is an effect here, but it's not as shockingly 603 00:34:55,920 --> 00:34:59,200 Speaker 1: powerful and unanimous as, like, those original tellings of the 604 00:34:59,400 --> 00:35:04,440 Speaker 1: Kuleshov experiment would suggest. 
Mm hmm. Yeah, this is interesting. 605 00:35:04,480 --> 00:35:06,640 Speaker 1: This is something we'll continue to look at. I also 606 00:35:06,680 --> 00:35:09,920 Speaker 1: like that they were looking at scary and happy movie 607 00:35:09,960 --> 00:35:14,399 Speaker 1: scenes, because it also brings to mind episodes we've done 608 00:35:14,400 --> 00:35:18,719 Speaker 1: in the past on audience reactions to scary movies and 609 00:35:18,800 --> 00:35:22,239 Speaker 1: how oftentimes, like, the reaction you have to 610 00:35:22,440 --> 00:35:26,920 Speaker 1: a pleasant movie or certainly a funny movie, compared to 611 00:35:26,920 --> 00:35:29,279 Speaker 1: that of a scary movie... that they may be 612 00:35:29,400 --> 00:35:32,360 Speaker 1: more alike than one might think. Oh yeah, because a 613 00:35:32,360 --> 00:35:36,719 Speaker 1: lot of times people laugh when something is scary. Yeah, laughing, 614 00:35:36,960 --> 00:35:39,160 Speaker 1: you know, reacting to the way that people around them 615 00:35:39,160 --> 00:35:42,279 Speaker 1: are reacting. And if you are acting frightened during a 616 00:35:42,280 --> 00:35:44,880 Speaker 1: frightening movie, I feel like it's very often a 617 00:35:45,000 --> 00:35:47,400 Speaker 1: kind of excited frightened, you know, that safe kind of, 618 00:35:47,600 --> 00:35:49,799 Speaker 1: like, I am afraid for the characters, but 619 00:35:49,840 --> 00:35:52,960 Speaker 1: I'm not necessarily afraid for myself. You know, I've actually 620 00:35:52,960 --> 00:35:56,520 Speaker 1: wondered before if... so, a lot of my moviegoing 621 00:35:56,640 --> 00:36:00,719 Speaker 1: entertainment pleasure comes from watching B horror movies essentially 622 00:36:00,760 --> 00:36:05,080 Speaker 1: as unintentional comedies and having a good time laughing 623 00:36:05,080 --> 00:36:08,000 Speaker 1: along with them. 
But I wonder if part of that 624 00:36:08,280 --> 00:36:11,400 Speaker 1: grows out of a kind of defense mechanism learned in childhood: 625 00:36:11,480 --> 00:36:15,000 Speaker 1: that I could protect myself from something scary if 626 00:36:15,040 --> 00:36:18,200 Speaker 1: I sort of forced myself to see it instead as 627 00:36:18,200 --> 00:36:21,480 Speaker 1: something funny. Yeah, I don't know. I certainly catch 628 00:36:21,560 --> 00:36:27,480 Speaker 1: myself going, like, "ah," something like that exact sound, 629 00:36:28,120 --> 00:36:31,240 Speaker 1: if it is, say, a slightly goofy monster 630 00:36:31,360 --> 00:36:34,399 Speaker 1: that is suddenly jumping out, as opposed to a more, 631 00:36:35,160 --> 00:36:39,920 Speaker 1: I don't know, effective-looking special effect. There's something 632 00:36:39,960 --> 00:36:42,520 Speaker 1: about... I don't know. It's probably, you know, all of this 633 00:36:42,560 --> 00:36:45,799 Speaker 1: is highly subjective, but for me at least, you know, 634 00:36:45,880 --> 00:36:49,799 Speaker 1: maybe I'm just leaning into the imagination more in those cases. Now, 635 00:36:49,840 --> 00:36:52,759 Speaker 1: just very briefly, I wanted to point out a couple 636 00:36:52,800 --> 00:36:55,120 Speaker 1: more studies I dug up that looked into the Kuleshov 637 00:36:55,120 --> 00:36:57,640 Speaker 1: effect more recently than this one. So there was 638 00:36:57,680 --> 00:37:00,840 Speaker 1: a study in the journal Perception in two thousand 639 00:37:00,880 --> 00:37:04,320 Speaker 1: and sixteen by Daniel Barratt et al. called Does the 640 00:37:04,360 --> 00:37:07,319 Speaker 1: Kuleshov Effect Really Exist? Revisiting a Classic Film 641 00:37:07,360 --> 00:37:12,520 Speaker 1: Experiment on Facial Expressions and Emotional Contexts. 
So they note 642 00:37:12,520 --> 00:37:14,360 Speaker 1: some of the stuff we already discussed, doubts about the 643 00:37:14,360 --> 00:37:17,960 Speaker 1: original experiment, and then the fact that recent attempts to 644 00:37:18,080 --> 00:37:22,080 Speaker 1: reproduce the effect have had conflicting results. So they tried 645 00:37:22,120 --> 00:37:25,080 Speaker 1: it out with a group of thirty six participants who 646 00:37:25,080 --> 00:37:30,239 Speaker 1: were presented with twenty four film sequences of neutral faces 647 00:37:30,280 --> 00:37:34,160 Speaker 1: across six different emotional conditions, so trying to reproduce the 648 00:37:34,200 --> 00:37:37,400 Speaker 1: same effect. And they actually did find a correlation. It 649 00:37:37,440 --> 00:37:39,680 Speaker 1: may not have been huge, but they said, 650 00:37:39,800 --> 00:37:43,360 Speaker 1: quote, for each emotional condition, the participants tended to choose 651 00:37:43,360 --> 00:37:47,960 Speaker 1: the appropriate category more frequently than alternative options, 652 00:37:48,320 --> 00:37:51,200 Speaker 1: while the answers to the valence and arousal questions also 653 00:37:51,239 --> 00:37:54,279 Speaker 1: went in the expected direction. So they did find a 654 00:37:54,320 --> 00:37:57,520 Speaker 1: mild version of the Kuleshov effect in their 655 00:37:57,600 --> 00:38:01,360 Speaker 1: research here. And then there was another one by Baranowski 656 00:38:01,400 --> 00:38:05,640 Speaker 1: and Hecht in Perception in two thousand and seventeen, called 657 00:38:05,680 --> 00:38:10,080 Speaker 1: "The Auditory Kuleshov Effect: Multisensory Integration in Movie Editing." 658 00:38:10,480 --> 00:38:12,480 Speaker 1: The study tried to see if there were any 659 00:38:12,520 --> 00:38:15,120 Speaker 1: Kuleshov-type effects, not for cross-cutting with 660 00:38:15,280 --> 00:38:18,600 Speaker 1: visual images, but for music.
So the question is, does 661 00:38:18,800 --> 00:38:23,920 Speaker 1: music affect what emotions people detect on other people's supposedly 662 00:38:23,960 --> 00:38:27,120 Speaker 1: neutral faces? And according to the authors of this study, 663 00:38:27,480 --> 00:38:30,799 Speaker 1: the answer is yes. They found that sad music did 664 00:38:30,840 --> 00:38:34,200 Speaker 1: in fact make people more likely to rate a supposedly 665 00:38:34,239 --> 00:38:38,640 Speaker 1: neutral face as sad, and vice versa. Well, that 666 00:38:38,640 --> 00:38:41,760 Speaker 1: doesn't surprise me at all. I mean, music, especially 667 00:38:41,880 --> 00:38:46,160 Speaker 1: music in film, is highly manipulative at times. And 668 00:38:46,239 --> 00:38:48,359 Speaker 1: I think we've all seen experiments with this sort 669 00:38:48,360 --> 00:38:53,040 Speaker 1: of, amateur experiments with this online, where you take 670 00:38:53,120 --> 00:38:55,719 Speaker 1: Johnny Cash's cover of Nine Inch Nails' "Hurt" and 671 00:38:55,719 --> 00:38:59,359 Speaker 1: you play it in the background of virtually any 672 00:38:59,440 --> 00:39:02,920 Speaker 1: neutral or ambiguous footage, and you're going to get a 673 00:39:02,960 --> 00:39:07,480 Speaker 1: sense of, like, deep personal anguish and hurt. I'm 674 00:39:07,520 --> 00:39:09,279 Speaker 1: just putting it all together in my mind 675 00:39:09,400 --> 00:39:12,280 Speaker 1: right now. I'm seeing clips from, like, Happy 676 00:39:12,360 --> 00:39:16,239 Speaker 1: Gilmore or something, but with the Johnny Cash. Yeah, 677 00:39:16,600 --> 00:39:19,520 Speaker 1: to see if I still feel.
And then finally, one 678 00:39:19,600 --> 00:39:22,880 Speaker 1: last one. There was a paper by Mullennix et al. 679 00:39:23,040 --> 00:39:26,560 Speaker 1: from twenty nineteen in PLOS ONE that also looked 680 00:39:26,560 --> 00:39:29,120 Speaker 1: at the Kuleshov effect, trying to see if 681 00:39:29,160 --> 00:39:34,160 Speaker 1: it existed for still photographs instead of dynamic film sequences. 682 00:39:34,520 --> 00:39:37,520 Speaker 1: And the authors say yes. They did the 683 00:39:37,600 --> 00:39:40,319 Speaker 1: Kuleshov-type experiment, but just with still photos, and they found 684 00:39:40,400 --> 00:39:42,840 Speaker 1: there was in fact a Kuleshov-type effect for 685 00:39:42,960 --> 00:39:48,160 Speaker 1: still images. Okay, also not surprising to me. Anyway, 686 00:39:48,280 --> 00:39:50,919 Speaker 1: so it looks like more of the recent studies into 687 00:39:51,000 --> 00:39:53,560 Speaker 1: this have found some kind of effect, though I think 688 00:39:53,640 --> 00:39:56,200 Speaker 1: sometimes the effects are, you know, the kinds of things 689 00:39:56,280 --> 00:39:58,799 Speaker 1: you're more likely to normally see in psychology experiments, kind 690 00:39:58,800 --> 00:40:02,800 Speaker 1: of modest effects rather than the overwhelming unanimous effect 691 00:40:02,960 --> 00:40:07,200 Speaker 1: described in the original Mozzhukhin experiment. Now, I'd like 692 00:40:07,239 --> 00:40:10,160 Speaker 1: to take all these points we've been hitting and 693 00:40:10,320 --> 00:40:13,840 Speaker 1: come back around to something that I briefly discussed, and 694 00:40:13,880 --> 00:40:17,560 Speaker 1: that was Leonardo da Vinci's famous sixteenth century painting, the 695 00:40:17,600 --> 00:40:20,200 Speaker 1: Mona Lisa. One of the most intriguing aspects of this 696 00:40:20,280 --> 00:40:25,480 Speaker 1: painting is the ultimate ambiguity of the expression.
You know, 697 00:40:25,520 --> 00:40:28,759 Speaker 1: the Mona Lisa smile especially. It's a 698 00:40:29,239 --> 00:40:31,320 Speaker 1: slight smile. It's kind of 699 00:40:31,320 --> 00:40:34,880 Speaker 1: an ambiguous smile. What is she smiling about, or beginning 700 00:40:34,880 --> 00:40:38,320 Speaker 1: to smile about? You know, there have 701 00:40:38,400 --> 00:40:40,239 Speaker 1: been a number of papers written about this, and I'm 702 00:40:40,239 --> 00:40:42,360 Speaker 1: certainly not going to do them all justice here, but 703 00:40:42,400 --> 00:40:45,279 Speaker 1: I wanted to touch on some findings that I think 704 00:40:45,360 --> 00:40:49,520 Speaker 1: can potentially contribute to this conversation. Now, wait, did this 705 00:40:49,600 --> 00:40:53,840 Speaker 1: originally come up in our making a distinction between neutrality 706 00:40:53,880 --> 00:40:56,880 Speaker 1: and ambiguity? So maybe you're suggesting that 707 00:40:56,920 --> 00:40:59,680 Speaker 1: the Mona Lisa's face might be one of those famous 708 00:40:59,680 --> 00:41:03,640 Speaker 1: faces that is ambiguous but not neutral? Right, it 709 00:41:03,680 --> 00:41:06,600 Speaker 1: doesn't look like a death mask. But also, you know, 710 00:41:06,680 --> 00:41:10,240 Speaker 1: she's not scowling, she doesn't look like Vigo 711 00:41:10,280 --> 00:41:13,600 Speaker 1: the Carpathian. She's not smiling ear to ear. It's a 712 00:41:13,760 --> 00:41:18,640 Speaker 1: very interesting expression, to say the least, that people 713 00:41:18,760 --> 00:41:22,480 Speaker 1: have been discussing and studying for decades and 714 00:41:22,920 --> 00:41:26,160 Speaker 1: for ages. So I'm not going to cover 715 00:41:26,200 --> 00:41:28,360 Speaker 1: all the studies, but there've been plenty. 716 00:41:29,080 --> 00:41:31,760 Speaker 1: But I was looking at one.
This was a theory 717 00:41:31,800 --> 00:41:35,360 Speaker 1: that was put forth by Professor Margaret Livingstone of Harvard 718 00:41:35,440 --> 00:41:42,840 Speaker 1: University. She argues that a lot of what 719 00:41:43,040 --> 00:41:46,320 Speaker 1: fascinates us about this painting is that the smile appears 720 00:41:46,400 --> 00:41:50,200 Speaker 1: differently depending on where you're standing in relation to the painting. 721 00:41:51,000 --> 00:41:53,880 Speaker 1: So if you look at it with your foveal, or 722 00:41:53,880 --> 00:41:58,240 Speaker 1: direct, vision, then arguably there's not really a smile 723 00:41:58,320 --> 00:42:01,200 Speaker 1: going on there. But if you view it 724 00:42:01,600 --> 00:42:04,520 Speaker 1: with your peripheral vision, out of the corner of your eye, 725 00:42:04,960 --> 00:42:07,520 Speaker 1: then it seems like there's a pronounced smile. Now, this 726 00:42:07,560 --> 00:42:11,440 Speaker 1: little tidbit doesn't necessarily have a 727 00:42:11,440 --> 00:42:15,040 Speaker 1: lot to reveal about the broader topic we're discussing here, 728 00:42:15,040 --> 00:42:17,360 Speaker 1: but I found it interesting to talk about. And indeed, 729 00:42:17,360 --> 00:42:18,759 Speaker 1: it's one that you can test yourself. You can pull up an 730 00:42:18,760 --> 00:42:21,880 Speaker 1: image of the Mona Lisa on your computer or your phone, 731 00:42:22,000 --> 00:42:25,279 Speaker 1: or if you have a copy hanging in 732 00:42:25,400 --> 00:42:27,320 Speaker 1: your house, you can do it this way as well, 733 00:42:27,600 --> 00:42:30,560 Speaker 1: and you'll find, I think, that you do get this 734 00:42:30,600 --> 00:42:31,920 Speaker 1: effect. If you kind of look at it out of the 735 00:42:31,920 --> 00:42:33,800 Speaker 1: corner of your eye, it seems like there's a pronounced smile. 736 00:42:34,200 --> 00:42:37,680 Speaker 1: Look at her directly, and it's not there.
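[A quick aside, not from the episode: Livingstone's argument is usually framed in terms of peripheral vision being dominated by low spatial frequencies, so you can roughly simulate a "corner of your eye" view of any grayscale image by low-pass filtering it with a Gaussian blur. The sketch below is only an illustration under that assumption; the function name and the sigma value are made up.]

```python
import numpy as np

def low_pass(image, sigma=3.0):
    """Crudely simulate peripheral vision by keeping only the low
    spatial frequencies: a separable Gaussian blur over rows, then
    columns, with edge padding so the borders are not darkened."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x ** 2 / (2 * sigma ** 2))
    kernel /= kernel.sum()  # normalize so overall brightness is preserved
    padded = np.pad(np.asarray(image, dtype=float), radius, mode="edge")
    # blur each row, then each column
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, padded)
    blurred = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, blurred)
    return blurred[radius:-radius, radius:-radius]
```

Run on a grayscale crop of the painting, the blurred output is the low-frequency content that, on Livingstone's account, is what carries the smile when you look away from the mouth.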
737 00:42:38,040 --> 00:42:40,520 Speaker 1: I see exactly what you mean. Another interesting thing is 738 00:42:40,560 --> 00:42:45,239 Speaker 1: that my mental image of the Mona Lisa is smiling 739 00:42:45,400 --> 00:42:48,040 Speaker 1: more than the actual image seems to be when I 740 00:42:48,080 --> 00:42:52,320 Speaker 1: look at it. Yeah, something about the lower resolution copy 741 00:42:52,400 --> 00:42:55,919 Speaker 1: in my brain appears to have accentuated the smile. And 742 00:42:56,200 --> 00:42:59,560 Speaker 1: maybe somehow that's picking up on the kind of subtle 743 00:42:59,640 --> 00:43:02,680 Speaker 1: shading of the contours of her cheeks, which look 744 00:43:02,719 --> 00:43:05,600 Speaker 1: like they could be continuing the lines of her mouth, 745 00:43:05,719 --> 00:43:09,840 Speaker 1: but it's not her mouth. Yeah, so yeah, I 746 00:43:10,280 --> 00:43:11,960 Speaker 1: think that's very much it. And of course 747 00:43:12,000 --> 00:43:13,960 Speaker 1: you can get into deeper discussions of, you know, to 748 00:43:14,000 --> 00:43:17,000 Speaker 1: what extent this is intended and 749 00:43:17,000 --> 00:43:20,920 Speaker 1: what Leonardo da Vinci is trying to do with this. 750 00:43:21,120 --> 00:43:23,719 Speaker 1: Because another aspect of the smile that's frequently brought 751 00:43:23,800 --> 00:43:26,759 Speaker 1: up is that it's not a 752 00:43:26,760 --> 00:43:31,520 Speaker 1: symmetrical smile. And this is often cited as 753 00:43:31,560 --> 00:43:35,320 Speaker 1: one of the key interesting aspects of the Mona Lisa's smile, 754 00:43:35,440 --> 00:43:40,239 Speaker 1: of Mona Lisa's face in general.
Now, the emotional 755 00:43:40,280 --> 00:43:43,239 Speaker 1: impact of her expression has been much debated over the years, 756 00:43:43,239 --> 00:43:45,000 Speaker 1: and here, like a lot of what we 757 00:43:45,040 --> 00:43:48,600 Speaker 1: discussed in part one and in this episode, it's one 758 00:43:48,600 --> 00:43:50,640 Speaker 1: of those areas where you can science it 759 00:43:50,680 --> 00:43:53,560 Speaker 1: all day, but you're still working with subjective art rather 760 00:43:53,600 --> 00:43:56,640 Speaker 1: than objective principles. But there are some papers that I 761 00:43:56,640 --> 00:43:59,640 Speaker 1: think have some revealing information, based generally on, you know, 762 00:43:59,719 --> 00:44:03,520 Speaker 1: small studies asking people to look at the 763 00:44:04,239 --> 00:44:07,600 Speaker 1: painting, or look at portions of the painting, sometimes 764 00:44:07,600 --> 00:44:11,080 Speaker 1: manipulated in a key way, and see what people 765 00:44:11,120 --> 00:44:13,279 Speaker 1: have to say about it. And this is where 766 00:44:13,320 --> 00:44:16,160 Speaker 1: we're getting into something that's more in line 767 00:44:16,719 --> 00:44:19,400 Speaker 1: with the broader topic here. When you look at the 768 00:44:19,600 --> 00:44:24,960 Speaker 1: Mona Lisa, what kind of emotional understanding is passing 769 00:44:25,040 --> 00:44:28,000 Speaker 1: between the painting and yourself? Does it depend on what 770 00:44:28,120 --> 00:44:30,640 Speaker 1: painting is across the room from her on the other wall, 771 00:44:30,880 --> 00:44:33,000 Speaker 1: so, like, what you're perceiving her to be looking at? 772 00:44:33,880 --> 00:44:36,560 Speaker 1: They didn't get into that as much, but I 773 00:44:36,640 --> 00:44:38,279 Speaker 1: couldn't help but think of it.
I kept thinking of 774 00:44:38,320 --> 00:44:43,200 Speaker 1: her looking at soup and so forth. But one paper 775 00:44:43,239 --> 00:44:46,080 Speaker 1: I was looking at was a twenty nineteen paper from 776 00:44:46,120 --> 00:44:50,120 Speaker 1: Marsili et al., published in the journal Cortex, in 777 00:44:50,160 --> 00:44:53,319 Speaker 1: which the researchers asked forty two individuals to rate which 778 00:44:53,360 --> 00:44:55,960 Speaker 1: of the six basic emotions, as well as a neutral 779 00:44:55,960 --> 00:45:02,000 Speaker 1: expression of emotion, was conveyed in chimeric images constructed 780 00:45:02,040 --> 00:45:05,120 Speaker 1: from photos. So chimeric images in this sense are 781 00:45:05,200 --> 00:45:09,840 Speaker 1: formed from opposing halves of a pair of same or 782 00:45:09,880 --> 00:45:13,239 Speaker 1: different faces, usually in, like, laboratory study settings. But 783 00:45:13,280 --> 00:45:15,160 Speaker 1: in this case it would be, like, you know, 784 00:45:15,239 --> 00:45:17,719 Speaker 1: my understanding here is, like, mirroring different parts of the face, 785 00:45:17,760 --> 00:45:20,880 Speaker 1: dealing with the asymmetry. You know, like, what 786 00:45:20,960 --> 00:45:24,839 Speaker 1: if you took side A and just 787 00:45:24,880 --> 00:45:27,520 Speaker 1: cloned it onto side B, that sort of thing. Now, 788 00:45:27,600 --> 00:45:30,800 Speaker 1: the results in this case indicated that happiness is expressed 789 00:45:30,880 --> 00:45:34,440 Speaker 1: only on the left side of Mona Lisa's face, not 790 00:45:34,520 --> 00:45:38,120 Speaker 1: on the right.
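[To make that "clone side A onto side B" idea concrete, here is a minimal sketch of building a chimeric face from an image array. This is not code from the Marsili et al. paper; the function name and the assumption that the face is a numpy array whose columns run left to right across the face are our own.]

```python
import numpy as np

def make_chimeric(face, side="left"):
    """Return a chimeric face: one half of the image mirrored onto
    the other half, so the result is perfectly left-right symmetric."""
    w = face.shape[1]
    half = w // 2
    out = face.copy()
    if side == "left":
        # clone the left half, flipped, onto the rightmost columns
        out[:, w - half:] = face[:, :half][:, ::-1]
    else:
        # clone the right half, flipped, onto the leftmost columns
        out[:, :half] = face[:, w - half:][:, ::-1]
    return out
```

Doing this once with the left half and once with the right yields two fully symmetric faces whose rated emotions can then be compared, which is the spirit of the asymmetry analysis described above.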
And this actually leans into the 791 00:45:38,160 --> 00:45:41,120 Speaker 1: interpretation that the Mona Lisa's smile is not a legitimate 792 00:45:41,160 --> 00:45:43,960 Speaker 1: smile at all, but a fake smile, something that 793 00:45:44,080 --> 00:45:46,640 Speaker 1: is either, you know, a noteworthy subject of the 794 00:45:46,719 --> 00:45:48,719 Speaker 1: art in and of itself, or has a more specific, 795 00:45:48,760 --> 00:45:52,840 Speaker 1: even cryptic purpose in da Vinci's art here. And 796 00:45:53,080 --> 00:45:56,239 Speaker 1: I think that potentially makes it a more interesting piece. It's not 797 00:45:56,239 --> 00:45:57,960 Speaker 1: just a painting of a woman smiling. It's a 798 00:45:57,960 --> 00:46:02,319 Speaker 1: painting of a woman pretending to smile, yeah, faintly. This 799 00:46:02,400 --> 00:46:04,880 Speaker 1: is interesting because I know that's something I've read, and 800 00:46:04,920 --> 00:46:06,960 Speaker 1: I don't know how legitimate this is, but I've 801 00:46:07,000 --> 00:46:11,880 Speaker 1: at least read that facial expression ambiguity is one of 802 00:46:11,920 --> 00:46:16,240 Speaker 1: the features people use to detect fakery of emotions in others. 803 00:46:16,440 --> 00:46:18,759 Speaker 1: So when people look at somebody else and they see 804 00:46:18,800 --> 00:46:22,239 Speaker 1: that their smile is asymmetrical, they're more likely to think 805 00:46:22,239 --> 00:46:25,799 Speaker 1: they're faking it, right, right.
And this is a 806 00:46:25,840 --> 00:46:27,719 Speaker 1: topic we've covered on the show before, because you 807 00:46:27,760 --> 00:46:31,239 Speaker 1: get into that whole topic of micro expressions and 808 00:46:31,320 --> 00:46:36,040 Speaker 1: reading micro expressions, and the idea that 809 00:46:36,239 --> 00:46:39,400 Speaker 1: a fake smile looks one way, but there's a more profound, 810 00:46:39,520 --> 00:46:45,879 Speaker 1: pronounced muscle definition to a legitimate smile. And that, 811 00:46:45,920 --> 00:46:47,600 Speaker 1: I mean, that on its own is something 812 00:46:47,640 --> 00:46:52,680 Speaker 1: we might take into account when considering ambiguous, like semi-happy, 813 00:46:52,880 --> 00:46:58,520 Speaker 1: semi-smiling, ambiguous images and ambiguous faces used in 814 00:46:58,560 --> 00:47:02,040 Speaker 1: one of these experiments. Now, another study I looked at 815 00:47:02,040 --> 00:47:05,760 Speaker 1: here was one from twenty seventeen by Liaci et al., published 816 00:47:05,760 --> 00:47:11,360 Speaker 1: in Scientific Reports. The researchers here manipulated, and this one's actually 817 00:47:11,440 --> 00:47:15,960 Speaker 1: kind of funny, I think, Mona Lisa's mouth curvature, 818 00:47:16,000 --> 00:47:19,800 Speaker 1: and studied how a range of happier and sadder face 819 00:47:19,920 --> 00:47:25,160 Speaker 1: variants influenced perceptions of her emotions. So the actual 820 00:47:25,160 --> 00:47:27,160 Speaker 1: paper gets into a lot of, like, they bust out 821 00:47:27,160 --> 00:47:29,680 Speaker 1: some equations and math on this, but basically they're just 822 00:47:29,840 --> 00:47:32,279 Speaker 1: doing what you're imagining now, like making the smile more 823 00:47:32,320 --> 00:47:36,920 Speaker 1: pronounced or making it less pronounced.
And they were 824 00:47:36,920 --> 00:47:41,799 Speaker 1: able to manipulate perception along a sadness-happiness spectrum, 825 00:47:42,480 --> 00:47:45,840 Speaker 1: but contended ultimately that their data indicates that the natural 826 00:47:45,920 --> 00:47:49,360 Speaker 1: Mona Lisa, at any rate, is always happy. But I 827 00:47:49,360 --> 00:47:53,600 Speaker 1: found this more telling, quote: observers recognize positive facial expressions 828 00:47:53,760 --> 00:47:58,080 Speaker 1: faster than negative expressions. This is not a new finding of theirs, 829 00:47:58,080 --> 00:48:01,799 Speaker 1: but just a reality that they were discussing in 830 00:48:01,920 --> 00:48:06,760 Speaker 1: the paper. So, in other words, faces spiraling down through neutrality, 831 00:48:06,800 --> 00:48:12,120 Speaker 1: ambiguity, and into other emotional states require more contemplation 832 00:48:12,160 --> 00:48:15,400 Speaker 1: and, and I'm making assumptions here, but more nuance. 833 00:48:15,960 --> 00:48:18,600 Speaker 1: So like the face that's smiling ear to 834 00:48:18,680 --> 00:48:21,560 Speaker 1: ear, or is in, you know, the Vigo the 835 00:48:21,560 --> 00:48:24,360 Speaker 1: Carpathian scowl, we don't have to think long and 836 00:48:24,400 --> 00:48:26,920 Speaker 1: hard about that, like, what kind of emotion is this 837 00:48:26,960 --> 00:48:30,000 Speaker 1: person having about the soup? We know that they're 838 00:48:30,000 --> 00:48:32,360 Speaker 1: either ecstatic over the soup or they just hate the 839 00:48:32,400 --> 00:48:35,080 Speaker 1: soup, or something involved with the soup. We don't have 840 00:48:35,120 --> 00:48:37,200 Speaker 1: to think about it much.
But when you 841 00:48:37,280 --> 00:48:40,840 Speaker 1: have that ambiguous smile, or even a slight 842 00:48:41,320 --> 00:48:44,319 Speaker 1: frown, you know, that's when it 843 00:48:44,400 --> 00:48:47,480 Speaker 1: really makes you think, like, what is this person thinking? 844 00:48:47,840 --> 00:48:50,719 Speaker 1: My theory of mind has to maybe engage more 845 00:48:50,760 --> 00:48:52,800 Speaker 1: to try and figure it out. And then ultimately we 846 00:48:52,880 --> 00:48:54,560 Speaker 1: have to remember, I mean, one of the key things 847 00:48:54,600 --> 00:48:58,080 Speaker 1: about people's faces is that the face itself is a 848 00:48:58,080 --> 00:49:02,560 Speaker 1: communication array. So, like, we're trying to get information, potentially, 849 00:49:02,600 --> 00:49:06,480 Speaker 1: about that soup, right? Like, this individual might 850 00:49:06,480 --> 00:49:08,440 Speaker 1: know if that soup is good. I want to know, 851 00:49:08,800 --> 00:49:12,040 Speaker 1: like, what the inside track is on the soup, 852 00:49:12,280 --> 00:49:15,839 Speaker 1: or on other human beings, before I myself decide how 853 00:49:15,880 --> 00:49:17,759 Speaker 1: I feel about it. I know this is sort of 854 00:49:17,800 --> 00:49:19,960 Speaker 1: beside your main point, but it also makes me think 855 00:49:19,960 --> 00:49:23,799 Speaker 1: about the strange biological contingency that one of the main 856 00:49:23,920 --> 00:49:27,280 Speaker 1: features of that communication array is also the hole that soup 857 00:49:27,360 --> 00:49:31,160 Speaker 1: goes in. It's true. Do you ever think about how 858 00:49:31,239 --> 00:49:33,239 Speaker 1: weird that is? It didn't have to be that way, 859 00:49:33,280 --> 00:49:36,040 Speaker 1: but we just, we cram in 860 00:49:36,120 --> 00:49:39,879 Speaker 1: nutrition and speak through the same orifice. It's weird.
It's true, 861 00:49:39,920 --> 00:49:42,000 Speaker 1: it's weird. But you know, it's always a reminder that 862 00:49:42,040 --> 00:49:43,719 Speaker 1: we shouldn't try and do both at the same time. 863 00:49:44,960 --> 00:49:47,000 Speaker 1: But to bring it back to Kuleshov, I do think 864 00:49:47,040 --> 00:49:50,400 Speaker 1: this drives home a little bit of the susceptibility of 865 00:49:50,520 --> 00:49:54,839 Speaker 1: ambiguous faces. You know, that if the face 866 00:49:54,920 --> 00:49:56,880 Speaker 1: is ambiguous, we have to think more about it, we 867 00:49:56,960 --> 00:50:00,319 Speaker 1: have to think more about the context. But you know, 868 00:50:00,560 --> 00:50:05,879 Speaker 1: what is the relationship between shot A and shot B, right? 869 00:50:05,920 --> 00:50:08,080 Speaker 1: I mean, that would go along with what Mobbs et 870 00:50:08,200 --> 00:50:10,319 Speaker 1: al. said in their background section, which is that, you know, 871 00:50:10,440 --> 00:50:14,520 Speaker 1: the broad finding of behavioral research is that people rely most 872 00:50:14,600 --> 00:50:17,279 Speaker 1: on context to interpret the faces of others when the 873 00:50:17,320 --> 00:50:20,359 Speaker 1: clarity of the facial expression is low, so that could 874 00:50:20,360 --> 00:50:23,200 Speaker 1: be ambiguity or other things, maybe, or maybe just, like, 875 00:50:23,239 --> 00:50:25,640 Speaker 1: it's hard to see, and when the clarity of the 876 00:50:25,680 --> 00:50:28,920 Speaker 1: context is high. So when there's information in the context 877 00:50:29,000 --> 00:50:32,360 Speaker 1: and less information in the face, you reach for the context. 878 00:50:37,040 --> 00:50:40,600 Speaker 1: Thank you. Well, anyway.
I guess this all brings us 879 00:50:40,600 --> 00:50:43,360 Speaker 1: back to one of the questions posed by the Prince 880 00:50:43,360 --> 00:50:47,120 Speaker 1: and Hensley paper, which is, you know, I wonder if 881 00:50:47,600 --> 00:50:53,160 Speaker 1: certain actors are just more likely 882 00:50:53,239 --> 00:50:56,439 Speaker 1: to give rise to this effect than others are. And 883 00:50:56,560 --> 00:50:59,640 Speaker 1: that's again drawing on that observation that there's actually a 884 00:50:59,640 --> 00:51:02,640 Speaker 1: difference between a neutral face and an ambiguous face. 885 00:51:03,280 --> 00:51:07,680 Speaker 1: I was trying to think of examples of actors whose, 886 00:51:08,280 --> 00:51:12,080 Speaker 1: what you might call, blank or neutral faces might tend 887 00:51:12,239 --> 00:51:17,080 Speaker 1: more toward expressive ambiguity rather than true neutrality. So even 888 00:51:17,080 --> 00:51:20,120 Speaker 1: when their face is supposedly at rest, you could look 889 00:51:20,160 --> 00:51:23,640 Speaker 1: at it and it would seem valid to attribute 890 00:51:23,680 --> 00:51:27,240 Speaker 1: a wide range of intense emotions to them. The best 891 00:51:27,280 --> 00:51:29,399 Speaker 1: example I could think of, and I didn't pick him 892 00:51:29,400 --> 00:51:31,200 Speaker 1: just because I love him as an actor, though I do. 893 00:51:31,520 --> 00:51:34,000 Speaker 1: The best example I could think of was Toshiro Mifune, 894 00:51:34,239 --> 00:51:37,320 Speaker 1: who you might know from Akira Kurosawa movies. 895 00:51:37,360 --> 00:51:39,520 Speaker 1: You know, he's the star of Yojimbo and movies 896 00:51:39,560 --> 00:51:43,040 Speaker 1: like that.
I would say he is somebody who, even 897 00:51:43,120 --> 00:51:46,160 Speaker 1: when he's doing something very stoic with his face, even 898 00:51:46,200 --> 00:51:49,239 Speaker 1: when his face appears to be at rest, you could 899 00:51:49,239 --> 00:51:53,520 Speaker 1: easily imagine that it is expressing a range of diametrically 900 00:51:53,560 --> 00:51:56,200 Speaker 1: opposing emotions. And Rob, I pasted in a picture 901 00:51:56,239 --> 00:51:58,680 Speaker 1: for you to look at here. That's just a portrait 902 00:51:58,719 --> 00:52:00,839 Speaker 1: of him. I don't think this is even from a film. 903 00:52:00,880 --> 00:52:03,480 Speaker 1: I think this might just be like a studio portrait still, 904 00:52:04,360 --> 00:52:06,520 Speaker 1: because this is one where I've seen, you know, like, 905 00:52:06,560 --> 00:52:09,680 Speaker 1: that he's done autographs on and stuff. To my eye, 906 00:52:09,719 --> 00:52:13,360 Speaker 1: in this portrait, he could be happy, he could be sad, 907 00:52:13,560 --> 00:52:17,040 Speaker 1: he could be affectionate, he could be hungry, he could 908 00:52:17,040 --> 00:52:21,080 Speaker 1: be angry. All seem totally plausible with the expression on 909 00:52:21,160 --> 00:52:24,279 Speaker 1: his face. And I guess this seems to correspond with 910 00:52:24,320 --> 00:52:27,759 Speaker 1: the fact that I'd say he's an actor known simultaneously 911 00:52:28,280 --> 00:52:32,280 Speaker 1: for having a highly emotionally expressive face and for often 912 00:52:32,320 --> 00:52:36,760 Speaker 1: playing kind of stoic characters. Yeah, yeah. You think about, 913 00:52:36,800 --> 00:52:40,040 Speaker 1: especially, some of the samurai type characters that he played; 914 00:52:40,080 --> 00:52:43,320 Speaker 1: there tends to be an intense stoicism to those characters.
915 00:52:43,440 --> 00:52:45,520 Speaker 1: Though at the same time, I mean, you think of 916 00:52:45,600 --> 00:52:50,080 Speaker 1: his Macbeth character, or the equivalent of Macbeth, 917 00:52:50,080 --> 00:52:53,120 Speaker 1: character-wise, in Throne of Blood. You know, certainly, 918 00:52:53,160 --> 00:52:56,240 Speaker 1: there's plenty of wide eyed crazy shots in that film, 919 00:52:56,360 --> 00:52:58,120 Speaker 1: especially towards the end. But yeah, a 920 00:52:58,120 --> 00:53:01,560 Speaker 1: lot of the characters he plays have a certain sternness, 921 00:53:01,600 --> 00:53:06,480 Speaker 1: a certain stoic quality, that ultimately has 922 00:53:06,520 --> 00:53:09,960 Speaker 1: an intense ambiguity to it. And it makes me think 923 00:53:10,000 --> 00:53:13,319 Speaker 1: about a difference. You know, sometimes you read psychological 924 00:53:13,360 --> 00:53:17,920 Speaker 1: studies that are measuring emotions in some context, and they 925 00:53:17,960 --> 00:53:22,520 Speaker 1: measure emotions in terms of both valence and intensity, where 926 00:53:22,960 --> 00:53:25,799 Speaker 1: valence means what the emotion is, so it could be, 927 00:53:25,840 --> 00:53:30,000 Speaker 1: like, positive emotion or negative emotion, and intensity is how 928 00:53:30,120 --> 00:53:34,040 Speaker 1: strongly it is felt. Thinking about this makes me wonder 929 00:53:34,200 --> 00:53:38,680 Speaker 1: if maybe there are some people whose emotional expression naturally 930 00:53:38,800 --> 00:53:43,080 Speaker 1: tends to be high in intensity, even when the valence 931 00:53:43,200 --> 00:53:47,920 Speaker 1: is unknown or unclear, if that makes any sense.
Yeah, yeah. 932 00:53:48,080 --> 00:53:50,240 Speaker 1: So I wonder if that's especially the kind of person 933 00:53:50,280 --> 00:53:53,360 Speaker 1: where you use a picture of that kind of actor 934 00:53:53,920 --> 00:53:56,040 Speaker 1: trying to do a neutral face, and then you do 935 00:53:56,040 --> 00:53:59,640 Speaker 1: a Kuleshov-type experiment, and people would be like, yes, 936 00:53:59,800 --> 00:54:02,040 Speaker 1: you know, you show them looking at the coffin, they're 937 00:54:02,160 --> 00:54:04,279 Speaker 1: very sad. You show them looking at the soup, they 938 00:54:04,280 --> 00:54:08,399 Speaker 1: are ravenous. Whereas there are other actors whose faces 939 00:54:08,520 --> 00:54:13,040 Speaker 1: just more successfully convey a blank neutrality, where people 940 00:54:13,080 --> 00:54:14,960 Speaker 1: see it and they say, I don't think this person 941 00:54:15,000 --> 00:54:18,879 Speaker 1: is feeling anything. Yeah, yeah, I think it's a good 942 00:54:18,880 --> 00:54:20,759 Speaker 1: point. And to try and sort of prove it out 943 00:54:20,760 --> 00:54:23,759 Speaker 1: for our own purposes, you posted this picture of a 944 00:54:23,840 --> 00:54:26,560 Speaker 1: man in our notes, and I posted a picture of 945 00:54:26,600 --> 00:54:29,160 Speaker 1: soup next to him. And indeed, if I look at 946 00:54:29,160 --> 00:54:30,680 Speaker 1: the two and I sort of go back and forth, 947 00:54:30,960 --> 00:54:32,920 Speaker 1: yeah, I can lean into different 948 00:54:32,960 --> 00:54:36,560 Speaker 1: interpretations. Like, is he angry that the soup 949 00:54:36,600 --> 00:54:38,880 Speaker 1: has been served? Maybe it was served too early, or 950 00:54:38,920 --> 00:54:41,200 Speaker 1: it's, you know, it's clearly cold, or he just had 951 00:54:41,200 --> 00:54:43,759 Speaker 1: the soup yesterday and therefore he is 952 00:54:43,800 --> 00:54:46,839 Speaker 1: irate.
But he also could be like, yes, now 953 00:54:46,960 --> 00:54:51,719 Speaker 1: it's time to really get into this soup. Yeah, yeah, 954 00:54:51,880 --> 00:54:54,640 Speaker 1: or various other interpretations, you know. Weirdly, some of 955 00:54:54,640 --> 00:54:57,360 Speaker 1: the other actors I know who fit into this mold 956 00:54:57,840 --> 00:55:00,279 Speaker 1: are not just film actors. I mean, a lot of them 957 00:55:00,320 --> 00:55:04,680 Speaker 1: are film actors, but especially people who have done, like, modeling, 958 00:55:04,800 --> 00:55:08,960 Speaker 1: like fashion modeling or art modeling. Like, Grace Jones comes 959 00:55:09,000 --> 00:55:11,560 Speaker 1: to mind as somebody who could have a facial 960 00:55:11,560 --> 00:55:17,560 Speaker 1: expression that is ambiguous in valence but high in intensity. Yeah, 961 00:55:17,680 --> 00:55:21,120 Speaker 1: I definitely can see that with Grace Jones. 962 00:55:21,640 --> 00:55:23,359 Speaker 1: I was trying to think of good 963 00:55:23,360 --> 00:55:26,360 Speaker 1: examples of this, and, like, my mind turned to 964 00:55:26,520 --> 00:55:28,600 Speaker 1: some actors who certainly, you know, have kind of like 965 00:55:28,640 --> 00:55:32,359 Speaker 1: a smoldering stare, or are, you know, good at 966 00:55:32,360 --> 00:55:34,600 Speaker 1: the stoic type characters, especially the sort of Joe 967 00:55:34,640 --> 00:55:37,439 Speaker 1: Cool characters, you know, as I think of them, where, 968 00:55:37,480 --> 00:55:40,520 Speaker 1: you know, it's like he's playing some cool, cool dude, 969 00:55:40,680 --> 00:55:42,960 Speaker 1: like a detective or something, and he's, you know, 970 00:55:43,000 --> 00:55:46,759 Speaker 1: acting pretty much unfazed by everything around him.
But 971 00:55:47,080 --> 00:55:50,480 Speaker 1: I think the better example I ended up turning to 972 00:55:50,760 --> 00:55:53,920 Speaker 1: is Harry Dean Stanton, who often played very, you know, 973 00:55:54,040 --> 00:55:57,359 Speaker 1: very sort of emotionally muted characters, I would say, though 974 00:55:57,400 --> 00:56:00,000 Speaker 1: not Joe Cool characters, you know, not a character 975 00:56:00,080 --> 00:56:02,520 Speaker 1: that's so far above it all that he feels 976 00:56:02,520 --> 00:56:05,520 Speaker 1: completely at ease. Oh, I think Harry Dean's potentially another 977 00:56:05,560 --> 00:56:10,000 Speaker 1: great example. Yeah. Yeah. And another, like, actually kind of 978 00:56:10,040 --> 00:56:14,279 Speaker 1: like a suite of answers that came to mind were 979 00:56:14,400 --> 00:56:18,080 Speaker 1: from the, uh, the Alien film franchise. The various 980 00:56:18,120 --> 00:56:24,040 Speaker 1: actors that you had playing androids. Um, specifically thinking of 981 00:56:24,080 --> 00:56:29,160 Speaker 1: Ian Holm, um, Lance Henriksen, and Michael Fassbender, all three 982 00:56:29,520 --> 00:56:34,240 Speaker 1: very talented actors. Um, but in all cases 983 00:56:34,280 --> 00:56:39,319 Speaker 1: they're supposed to be playing this artificial human type of 984 00:56:39,360 --> 00:56:43,680 Speaker 1: being that has no emotions but has an intent, 985 00:56:44,320 --> 00:56:46,719 Speaker 1: and, depending on which film you're landing on, on 986 00:56:46,760 --> 00:56:50,400 Speaker 1: which particular incarnation of the android, that intent may be 987 00:56:50,719 --> 00:56:55,279 Speaker 1: um, benevolent, or might lean more neutral, or might 988 00:56:55,320 --> 00:57:00,160 Speaker 1: be malicious. Um.
And yeah, I don't know if I'd 989 00:57:00,200 --> 00:57:02,880 Speaker 1: go there with Ian Holm, actually, because Ian Holm seems 990 00:57:03,000 --> 00:57:07,720 Speaker 1: unusually capable of projecting absolute blank neutrality, where you don't 991 00:57:07,760 --> 00:57:11,480 Speaker 1: get that ambiguity that spins off in all the directions. 992 00:57:11,520 --> 00:57:13,440 Speaker 1: Like, I think he would be great 993 00:57:13,520 --> 00:57:16,920 Speaker 1: to have people, like, absolutely fail to reproduce the Kuleshov 994 00:57:17,000 --> 00:57:20,479 Speaker 1: results, have him doing blank face. But other ones 995 00:57:20,800 --> 00:57:23,520 Speaker 1: you're saying, I agree. Yeah, so I don't know. 996 00:57:24,000 --> 00:57:26,280 Speaker 1: Like, I was just thinking back on those films, and 997 00:57:26,600 --> 00:57:29,280 Speaker 1: even though these are the characters that are not supposed 998 00:57:29,320 --> 00:57:32,840 Speaker 1: to have emotional states, in some cases, I feel like 999 00:57:33,000 --> 00:57:35,400 Speaker 1: I have a better handle on their emotional states versus 1000 00:57:35,520 --> 00:57:39,320 Speaker 1: other human characters in those pictures. Yeah, but I have 1001 00:57:39,360 --> 00:57:42,040 Speaker 1: to admit I did not paste all of their photos 1002 00:57:42,080 --> 00:57:44,520 Speaker 1: into our document and put them opposite soup, so I 1003 00:57:44,520 --> 00:57:47,160 Speaker 1: haven't tested it myself. Oh, you did put Fassbender 1004 00:57:47,200 --> 00:57:50,800 Speaker 1: next to soup, though, and I gotta say, he looks hungry. Yeah, yeah, yeah, 1005 00:57:50,800 --> 00:57:53,880 Speaker 1: he looks, he does look like he is, um, he's 1006 00:57:53,920 --> 00:57:56,520 Speaker 1: about to dine on some soup.
Can't you just imagine 1007 00:57:56,760 --> 00:57:59,360 Speaker 1: a scene of him sensually teaching his twin how to 1008 00:57:59,440 --> 00:58:03,360 Speaker 1: peel a butternut squash? Yeah, that would be good, 1009 00:58:03,400 --> 00:58:06,800 Speaker 1: feeding each other soup with wooden spoons. Yeah. Well, anyway, 1010 00:58:06,840 --> 00:58:09,439 Speaker 1: all this is just to say, and to be fair, 1011 00:58:09,520 --> 00:58:12,880 Speaker 1: maybe some studies have done this and I didn't realize it, 1012 00:58:12,880 --> 00:58:15,560 Speaker 1: but it seems like maybe one good move to try 1013 00:58:15,600 --> 00:58:21,000 Speaker 1: to avoid the, uh, the inter-actor effects of 1014 00:58:21,200 --> 00:58:24,920 Speaker 1: the stimulus you use in Kuleshov type experiments is 1015 00:58:24,960 --> 00:58:27,760 Speaker 1: to just, like, get a whole lot of pictures of 1016 00:58:27,800 --> 00:58:31,240 Speaker 1: neutral faces and then serve them up at random, and 1017 00:58:31,320 --> 00:58:33,840 Speaker 1: so you can get kind of the neutral face photo 1018 00:58:34,080 --> 00:58:37,560 Speaker 1: averaged out over a big population, instead of having it 1019 00:58:37,640 --> 00:58:42,200 Speaker 1: fluctuate based on, like, how truly neutral your supposedly neutral 1020 00:58:42,240 --> 00:58:46,160 Speaker 1: face looks. I'd be delighted to hear from listeners out 1021 00:58:46,160 --> 00:58:49,600 Speaker 1: there what their thoughts are and their specific examples, uh, 1022 00:58:49,760 --> 00:58:54,480 Speaker 1: from cinema and from, you know, the faces of various actors. 1023 00:58:54,840 --> 00:58:56,400 Speaker 1: You know, I wanted to come back to something 1024 00:58:56,480 --> 00:58:59,040 Speaker 1: which I thought is kind of interesting about this. Uh.
1025 00:58:59,320 --> 00:59:01,320 Speaker 1: Even if you accepted that the Kuleshov 1026 00:59:01,320 --> 00:59:06,840 Speaker 1: effect is rather modest or only applies sometimes, it is 1027 00:59:06,840 --> 00:59:11,440 Speaker 1: still pretty interesting that it indicates how flexible the human 1028 00:59:11,520 --> 00:59:17,480 Speaker 1: brain is at constructing artificial scenarios and still applying, like, 1029 00:59:17,640 --> 00:59:20,360 Speaker 1: human logic to them. That, like, you know, you're not 1030 00:59:20,800 --> 00:59:23,720 Speaker 1: observing a real scenario in life where you're trying to 1031 00:59:23,720 --> 00:59:26,240 Speaker 1: guess if somebody is hungry. You're looking at a photo, 1032 00:59:26,680 --> 00:59:29,680 Speaker 1: or you're looking at an image on a screen, 1033 00:59:30,000 --> 00:59:32,040 Speaker 1: and then it's being intercut with, you know, a 1034 00:59:32,040 --> 00:59:34,240 Speaker 1: coffin that they might be sad at, or just 1035 00:59:34,320 --> 00:59:37,440 Speaker 1: a picture of soup or something, and we start 1036 00:59:37,480 --> 00:59:40,400 Speaker 1: applying the same logic we apply to real life to 1037 00:59:40,520 --> 00:59:45,120 Speaker 1: these obviously artificial stimuli. Yeah. Yeah. And I think it's 1038 00:59:45,120 --> 00:59:48,919 Speaker 1: a great reminder of just how film works, and 1039 00:59:48,920 --> 00:59:51,920 Speaker 1: other mediums as well, but especially film, how, you 1040 00:59:51,960 --> 00:59:55,360 Speaker 1: know, they still require a viewer.
And if there's 1041 00:59:55,400 --> 00:59:59,160 Speaker 1: not a viewer, uh, there's not a moviegoer, 1042 00:59:59,440 --> 01:00:02,480 Speaker 1: there's no film experience, and therefore there's no film, and 1043 01:00:02,560 --> 01:00:04,800 Speaker 1: so no matter how polished the thing on the 1044 01:00:04,840 --> 01:00:09,200 Speaker 1: screen is, there's something that takes place not only between 1045 01:00:09,600 --> 01:00:12,160 Speaker 1: the film and the viewer, but inside the viewer's mind. 1046 01:00:12,320 --> 01:00:14,560 Speaker 1: That's critical, and a lot of times we 1047 01:00:14,600 --> 01:00:18,480 Speaker 1: don't notice how many gaps we're filling in as film viewers. Like, yeah, 1048 01:00:18,600 --> 01:00:21,640 Speaker 1: you don't realize how much work you're doing, and it's 1049 01:00:21,680 --> 01:00:24,320 Speaker 1: work that is apparently pretty easy to do. It's just 1050 01:00:24,400 --> 01:00:27,600 Speaker 1: something we tend to do pretty much automatically while 1051 01:00:27,640 --> 01:00:30,880 Speaker 1: we're watching movies: fill in those gaps of logic, 1052 01:00:31,080 --> 01:00:35,360 Speaker 1: make connections between one image and another, make assumptions about 1053 01:00:35,400 --> 01:00:37,800 Speaker 1: what's going on in an actor's head when they're portrayed 1054 01:00:37,800 --> 01:00:41,120 Speaker 1: on screen based on the context or the music, you know, 1055 01:00:41,200 --> 01:00:44,480 Speaker 1: what was shown just before or after.
But it's one of 1056 01:00:44,520 --> 01:00:46,560 Speaker 1: those things where it gets pretty weird when you start 1057 01:00:46,560 --> 01:00:49,440 Speaker 1: to notice all of those, like, assumptions you're having to 1058 01:00:49,520 --> 01:00:52,000 Speaker 1: make and mental work you're having to do for a 1059 01:00:52,080 --> 01:00:54,680 Speaker 1: movie to make sense, which in reality is a flickering 1060 01:00:54,720 --> 01:00:58,640 Speaker 1: succession of moving images, which, you know, sometimes, if you 1061 01:00:58,680 --> 01:01:01,120 Speaker 1: were to be very literal about them, are 1062 01:01:01,160 --> 01:01:03,880 Speaker 1: totally unconnected. Like, you see, like, a staircase that's from 1063 01:01:03,960 --> 01:01:06,440 Speaker 1: one state and then a house that's from another, and 1064 01:01:06,440 --> 01:01:08,920 Speaker 1: then somebody's coming in through a front door, and you 1065 01:01:09,000 --> 01:01:10,880 Speaker 1: just connect it all: this is all in the 1066 01:01:10,920 --> 01:01:15,000 Speaker 1: same place, a person's just moving through their daily routine. Yeah. 1067 01:01:15,560 --> 01:01:18,560 Speaker 1: We often think of viewing films and watching TV 1068 01:01:18,680 --> 01:01:20,840 Speaker 1: programs as being kind of a shut-your-brain-off 1069 01:01:20,920 --> 01:01:24,000 Speaker 1: kind of a situation, at least with certain types of 1070 01:01:24,000 --> 01:01:26,920 Speaker 1: film and TV. And, you know, we think that, okay, 1071 01:01:27,120 --> 01:01:29,160 Speaker 1: if it's a highly crafted, 1072 01:01:29,200 --> 01:01:31,720 Speaker 1: mainstream product, we're not gonna have to 1073 01:01:31,800 --> 01:01:34,520 Speaker 1: do much thinking. It's gonna hold our hand the whole way.
1074 01:01:34,560 --> 01:01:37,080 Speaker 1: But yeah, even in the case of your 1075 01:01:37,120 --> 01:01:41,040 Speaker 1: sort of, you know, by-the-numbers summer blockbuster, 1076 01:01:41,080 --> 01:01:44,160 Speaker 1: you know, very much repeating a plot you've seen before, 1077 01:01:44,280 --> 01:01:46,840 Speaker 1: with the sort of characters you've seen before, your brain 1078 01:01:46,920 --> 01:01:49,680 Speaker 1: is still filling in these little gaps, like you say. 1079 01:01:49,880 --> 01:01:52,200 Speaker 1: But on the same hand, I think one thing 1080 01:01:52,240 --> 01:01:54,120 Speaker 1: we can drive home based on what we've been discussing 1081 01:01:54,120 --> 01:01:57,000 Speaker 1: here is that the opposite, uh, in a way, 1082 01:01:57,120 --> 01:01:59,280 Speaker 1: is true: if you're dealing with a film 1083 01:01:59,320 --> 01:02:01,920 Speaker 1: that, say, is, uh, you know, of a 1084 01:02:01,960 --> 01:02:04,280 Speaker 1: genre you're not that familiar with, or a time period 1085 01:02:04,320 --> 01:02:08,480 Speaker 1: of filmmaking you're not familiar with. Um, perhaps it's, 1086 01:02:08,560 --> 01:02:10,840 Speaker 1: you know, more of an art film, or 1087 01:02:10,880 --> 01:02:13,440 Speaker 1: it's, you know, foreign language, et cetera. A lot of it 1088 01:02:13,520 --> 01:02:17,640 Speaker 1: is still going to come down to human or humanoid 1089 01:02:17,920 --> 01:02:21,480 Speaker 1: entities interacting with things and each other, and then our 1090 01:02:21,520 --> 01:02:25,320 Speaker 1: brain is going to make presumptions about their mental state 1091 01:02:25,480 --> 01:02:28,600 Speaker 1: and their emotional state. Oh yeah, yeah, you infer 1092 01:02:28,800 --> 01:02:31,760 Speaker 1: drama even when the thing you're looking at is almost 1093 01:02:31,760 --> 01:02:35,360 Speaker 1: actively resisting it, and that goes beyond movies.
In fact, 1094 01:02:35,440 --> 01:02:39,400 Speaker 1: I mean, what is drama? Drama is somebody wanting something 1095 01:02:39,520 --> 01:02:41,760 Speaker 1: or trying to get something and then coming up against 1096 01:02:41,800 --> 01:02:45,640 Speaker 1: resistance in some way. Uh, people infer those kinds of 1097 01:02:45,720 --> 01:02:49,680 Speaker 1: dramas on, like, balls rolling around on the table. There 1098 01:02:49,720 --> 01:02:52,040 Speaker 1: are literally studies of that. You know, people will say, like, 1099 01:02:52,280 --> 01:02:55,520 Speaker 1: the ball wanted to go down in this hole, but, 1100 01:02:55,600 --> 01:02:57,960 Speaker 1: you know, it couldn't get there because something was 1101 01:02:58,040 --> 01:03:01,560 Speaker 1: preventing it. All right, we're gonna go ahead and close 1102 01:03:01,600 --> 01:03:03,160 Speaker 1: it out there, but we would love to hear from 1103 01:03:03,200 --> 01:03:06,040 Speaker 1: everybody if you have particular thoughts on the Kuleshov effect, 1104 01:03:06,680 --> 01:03:10,800 Speaker 1: various examples and studies we've discussed in these episodes, uh, 1105 01:03:10,840 --> 01:03:15,880 Speaker 1: some of the various examples from film and acting that 1106 01:03:15,920 --> 01:03:18,960 Speaker 1: we have alluded to. Perhaps you have some better examples 1107 01:03:19,720 --> 01:03:22,320 Speaker 1: that you would like to bring to our attention. Just 1108 01:03:22,440 --> 01:03:25,040 Speaker 1: write in and let us know. In the meantime, if 1109 01:03:25,040 --> 01:03:26,920 Speaker 1: you would like to check out other episodes of Stuff 1110 01:03:26,960 --> 01:03:29,120 Speaker 1: to Blow Your Mind, check it out in the Stuff 1111 01:03:29,160 --> 01:03:31,800 Speaker 1: to Blow Your Mind podcast feed. You'll find that wherever 1112 01:03:31,840 --> 01:03:34,600 Speaker 1: you get your podcasts. We have core episodes on Tuesday 1113 01:03:34,680 --> 01:03:38,640 Speaker 1: and Thursday.
We have a listener mail on Monday, a short 1114 01:03:38,680 --> 01:03:41,480 Speaker 1: form Artifact episode on Wednesday, and on Friday we do 1115 01:03:41,560 --> 01:03:44,000 Speaker 1: Weird House Cinema. That's our time to set aside most 1116 01:03:44,400 --> 01:03:48,800 Speaker 1: serious matters and just discuss a weird film. Um, if 1117 01:03:48,800 --> 01:03:50,320 Speaker 1: you want a quick way to get to our podcast, 1118 01:03:50,360 --> 01:03:51,760 Speaker 1: you can just go to Stuff to Blow Your Mind 1119 01:03:51,760 --> 01:03:54,040 Speaker 1: dot com. That should still redirect you over to the 1120 01:03:54,160 --> 01:03:57,960 Speaker 1: iHeart listing for our page. Huge thanks as always 1121 01:03:57,960 --> 01:04:01,400 Speaker 1: to our excellent audio producer Seth Nicholas Johnson. If 1122 01:04:01,400 --> 01:04:03,040 Speaker 1: you would like to get in touch with us with 1123 01:04:03,160 --> 01:04:05,680 Speaker 1: feedback on this episode or any other, to suggest a 1124 01:04:05,720 --> 01:04:07,800 Speaker 1: topic for the future, or just to say hello, you 1125 01:04:07,840 --> 01:04:10,520 Speaker 1: can email us at contact at stuff to blow your 1126 01:04:10,560 --> 01:04:20,920 Speaker 1: mind dot com. Stuff to Blow Your Mind is a production 1127 01:04:20,960 --> 01:04:23,720 Speaker 1: of iHeartRadio. For more podcasts from iHeartRadio, 1128 01:04:23,920 --> 01:04:26,600 Speaker 1: visit the iHeartRadio app, Apple Podcasts, or wherever 1129 01:04:26,640 --> 01:04:37,120 Speaker 1: you listen to your favorite shows.