Speaker 1: Welcome to TechStuff, a production of iHeartRadio's How Stuff Works. Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeartRadio and I love all things tech, and it's time for another classic episode of TechStuff. This episode originally published on December 19, 2012. It is titled "How Motion Capture Works," and I love mocap when it's done well. When mocap is done well, you can get some phenomenal performances translated into different types of media, including video games and CGI films. It is an amazing tool, and Chris Pollette and I break down how it works. So let's listen in on this classic episode. So today we thought we'd talk a bit about a type of performance that is relatively new as far as performance goes, something that I guess falls into our movie-making category, but it's also something that's been used in things like video games and other forms of media as well: motion capture. Yeah, you'll even see it in sports. They've been talking about this for a while now.
And if you've ever seen the making of a video or a game, or even in sports rehabilitation, in medicine, where the people are wearing dots, little white dots all over their clothing and sometimes their faces and hands, that's probably what they were doing. Either that or they just really like stickers. Yeah, I mean, who doesn't? I remember being very competitive in elementary school in order to get a sticker. And also, this is a tangent but a true story: I got a gold star sticker just last month from Tracy, the head of our site. So anyway, yeah, motion capture. Actually, there are a lot of different terms that you can use in this realm. Motion capture, or mocap, is probably the one I hear the most frequently, but also things like performance animation, performance capture, digital puppetry, real-time animation, and motion scanning, which is really more of a proprietary thing. But the concept is pretty much the same across the board. The idea is to capture the physical representation of something and then convert it into a virtual format.
So usually it's something that's in motion, but it's not always that way. Since, you know, we're talking about motion capture, that makes sense, but you're trying to translate something that is moving through real space into a digital format. And there are different ways to do this. I mean, you could do it the really hard way, which is where you study something and then you try to recreate it, either by hand or digitally, you know, by programming movements into an animated figure. But this is an idea that kind of takes that step out, where you are directly porting the movements something is making within physical space into virtual space. Yeah, there was an early technique, and of course this is all an attempt to get as real as you can with animation. One of the earlier techniques that was sort of a predecessor to this is called rotoscoping. Ralph Bakshi's Lord of the Rings had a lot of rotoscoping in it.
Well, what happens in that case is that a real human being goes through the motions and acts through the parts that you're going to see in the animation, and they shoot that on film. Yes, and then the animators basically are looking at that and drawing more or less on top of it. They see a projection of it, and they draw the animation over it to capture the way that person's body looks. And this was famous, you know, the Disney studios were famous for this. They were studying models and then they would do the rotoscoping technique to try to make their characters look more realistic. Yeah, and there are some artists, like I said, like Bakshi, who famously would leave the film image as part of the animation, so that you had this weird effect where the thing you were looking at was part quote-unquote real image and part animated image, which was an artistic choice, definitely something that was not meant to necessarily fool you into thinking, oh, well, that animated character is moving very realistically.
It was done on purpose. But that's what I always think of when I think rotoscope. I just think of the different Bakshi films, but in particular I think of his Lord of the Rings adaptation, which, as I recall, ended halfway through The Two Towers. So anyway, that's just bringing back memories. But yeah, that was sort of a precursor to motion capture. Motion capture itself: there are many different ways of achieving this. For example, it's not used very frequently now, but there were mechanical systems where you had sensors that would be attached to specific joints that would relay movement. And usually it was kind of like an actor would wear a physical metallic skeleton-type device that would have the sensors attached to the various joints, and as the actor moved, the sensors would register the changes in motion in this metallic skeleton, and that would be relayed, usually through cables, to a computer system that would take the measurements from the sensors and translate them into movements for the virtual character.
It's very limiting, this particular system. There was another one that was a little more versatile, which used electromagnets. And in this case you're talking about sensors that would be attached by really thick cables that again would go to a computer, and there'd be a magnetic field, and by moving through this magnetic field, the sensors would pick up alterations. You know, moving through a magnetic field, you would get little electrical changes. We've talked a lot about electricity and magnetism in general: moving through a fluctuating magnetic field can induce electricity in a conductor, or putting electricity through a conductor can induce a magnetic field. So anyway, by moving these sensors through the magnetic field, it would create these electrical fluctuations that would then be measured and translated into movement. And again, this was a fairly effective way of picking up movements. It actually didn't use as many points of contact as the optical systems that we mostly think about. That was the kind that Chris was referring to earlier, with all the dots on the person.
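The induction principle being described here boils down to Faraday's law. Here's a quick numerical sketch in Python, where the coil turns, flux change, and timing are made-up numbers purely for illustration, not figures from any real mocap rig:

```python
# Faraday's law sketch: average EMF = -N * (change in flux) / (change in time).
# Made-up numbers, purely to illustrate the induction the hosts mention.

def induced_emf(turns, flux_change_wb, dt_s):
    """Average voltage induced in a coil by a change in magnetic flux."""
    return -turns * flux_change_wb / dt_s

# A hypothetical 100-turn sensor coil whose flux changes by 0.002 Wb
# over 0.1 s as the actor moves through the field:
print(induced_emf(100, 0.002, 0.1))  # -2.0 volts
```

The faster the sensor moves through the field, the bigger the voltage swing, which is what the system measures and maps back to motion.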
Those systems tend to have lots and lots and lots of points of reference. The electromagnet ones didn't tend to have as many points of reference, because the software side of it (you know, we do have a hardware and a software side to this), the software side would assume that the joints these sensors were attached to behaved the way they normally would in humans, and that they don't have complete freedom of movement. Most of us are not double-jointed in every joint, so we have a limitation on how far we can move in certain directions with these various joints. So taking that into account, you didn't have to have sensors all over the body. You would just have them in a few places, which was good, considering that there were these thick cables attached to the sensors. And then once you were done moving, all that data would be captured within the system and could then be rendered into animation. Although this was also a way that you could do real-time animation, or digital puppetry.
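That joint-constraint assumption is easy to picture in code. Here's a hypothetical Python sketch (the joint names and angle ranges are illustrative values I'm assuming, not taken from any actual mocap package): the software keeps a table of anatomical ranges and clamps each reported angle into its range, which is why a few sensors plus constraints could pose a whole skeleton.

```python
# Hypothetical sketch: clamp reported joint angles to human ranges,
# the kind of constraint a magnetic mocap system's software assumed.

# Approximate anatomical limits in degrees (illustrative values only).
JOINT_LIMITS = {
    "elbow_flexion": (0.0, 150.0),   # elbows don't bend backward
    "knee_flexion": (0.0, 140.0),
    "neck_pitch": (-60.0, 70.0),
}

def constrain_pose(raw_angles):
    """Clamp each sensor-reported angle into its anatomical range."""
    constrained = {}
    for joint, angle in raw_angles.items():
        lo, hi = JOINT_LIMITS[joint]
        constrained[joint] = max(lo, min(hi, angle))
    return constrained

# A noisy reading that claims the elbow bent backward by 20 degrees:
pose = constrain_pose({"elbow_flexion": -20.0, "knee_flexion": 90.0})
print(pose["elbow_flexion"])  # 0.0 -- the impossible reading is clamped
```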
It's not that different from controlling a video game character with a controller. It's sort of the same principle, except in this case, instead of being something you hold in your hands, the video game controller is something you were actually wearing. And I've seen plenty of instances of this. If you've ever seen Turtle Talk with Crush over at Disney, that's what they use. They use digital puppetry, and it's awesome, by the way. I love that. Well, it would also mean that you would need to be aware of where those cables were going, and it would also affect the way that you would move. You wouldn't move as naturally wearing something like that as if you were, you know, unencumbered by it, which I think would sort of lend itself to an upgrade, which is, I think, why they were so keen on optical systems. Well, that's very true. It's also that it did limit what you could do. It would limit your movement.
I mean, when you've got these big cables attached to you, obviously you can't just move freely within a space. So it did put some limitations on you. There are limitations to the optical systems too, but we'll get into that. The other problem was that the sampling rate for the magnetic systems was not as high as it is for optical systems. And by sampling rate, what I mean is that the entire system as a whole is taking little measurements from the sensors, you know, of the orientation of those sensors within the space, and it does that several times every second. But the sample rate of the magnetic motion capture systems was much lower than what it would be if you were to use an optical system. So you're not getting data as frequently. I mean, still several times a second, but it's not as precise as the optical system. So not only were you limited in the kind of movements you could make because you had these major cables attached to you, but also you couldn't get really minute, precise measurements on every kind of movement.
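To make the sampling-rate point concrete, here's a rough Python sketch with made-up rates (the 30 Hz and 240 Hz figures are illustrative assumptions, not numbers from the episode): the same quarter-second burst of motion yields far fewer measurements on the slower system.

```python
# Rough illustration of why sampling rate matters for fast motion:
# a pitcher's wrist snap lasting 0.25 s, sampled at two made-up rates.

def num_samples(duration_s, rate_hz):
    """How many measurements the system records during the motion."""
    return int(duration_s * rate_hz)

snap = 0.25                       # seconds of fast motion
low = num_samples(snap, 30)       # e.g. a slower magnetic-style system
high = num_samples(snap, 240)     # e.g. a faster optical-style system

print(low, high)  # 7 60
```

With only a handful of samples, the subtle in-between motion is simply never recorded, so there's nothing for the animation to reconstruct.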
So it wasn't good for things like sports. You know, something like throwing a pitch in baseball, there are a lot of movements, little tiny motions, that are involved in that. I mean, anyone who's watched slow-motion footage of a professional baseball pitcher throwing a pitch can see that there are some incredibly subtle movements involved in that. And it takes place over a very short period of time. I mean, it's a very fast thing to measure. Using the magnetic motion capture system, you would probably, one, slow the person down, because they have all these cables attached to them, and two, not get enough data to give an accurate representation of what had happened in the virtual format. So if you were to, say, create a baseball video game, the pitcher would not necessarily behave properly if all you did was directly port the data you got from the motion capture into the game. Yeah, another drawback of the mechanical systems like that, too:
It's the kind of system that not only is cumbersome and inaccurate, but it has to be calibrated fairly frequently. And, you know, there's some work that you can do with this, but the optical systems that they began to introduce generally became an upgrade. There is one big advantage that the mechanical systems do have, though, and that is that lighting will not necessarily interfere with the different points of motion that are captured by the mechanical system. And that can be an issue with the optical systems. You know, that's why the people, the actors who are having their motions captured by the system, will be wearing those bright dots, so that the computer can pick up on them. And at the beginning, in these early systems, there were only so many points, action points, that they could capture. They were very limited in what they could do at first, but still, you know, somewhat of an upgrade over the mechanical. Yeah.
It also limited what you could have in the background, obviously, because you could not have anything that was going to be of a similar shade. You know, usually you're talking about a reflective white substance used as the points of articulation, the little white stickers like what you were saying, Chris. You couldn't have anything like that in the background because it would confuse the optical system. So that's why a lot of these motion capture scenes are shot against a blue screen or green screen. It's so that the background does not in any way interfere with the motion capture. So if you've ever seen behind-the-scenes footage of The Lord of the Rings movies, that's a great example, with Andy Serkis as Gollum, or Sméagol if you prefer. He's wearing, you know, a tight, like skin-tight, suit with these little white circles all over it. Those are the points that the cameras track to create the performance of Gollum slash Sméagol.
So the performance is something that's being created not only by the actor but also the animators, because, we should also point out, the motion capture stuff rarely is motion capture completely. There's rarely a moment where you don't have an animator step in and tweak it somehow. Like, you don't normally have someone create a physical performance and have that physical performance, completely without any tinkering, represented in the final product. I mean, it can happen, there are instances of it, but more frequently it's something where the motion capture performance goes to the animator, who can then tweak things if the performance is not exactly what it needs to be, which is kind of nice. You don't necessarily have that luxury with flesh-and-blood actors. I'm gonna stop motion capture in mid-motion right now so that we can take a quick break to thank our sponsor. Well, especially with the earlier systems, especially the electromagnetic systems, those were really noisy. Not literally noisy, but digital noise. They weren't really highly accurate.
The optical systems are far cleaner and give a more accurate representation. But, you know, that sort of falls in the realm of artistic license, I would think, where they need to go in and make subtle adjustments to make it look the way they want it to look. Yeah. I should also point out, now you just reminded me of something else, another drawback to the electromagnetic systems, which was you couldn't have anything metal on the set, because it would interfere with that magnetic field and give incorrect readings to the system. So your virtual character would not move in the same way as the physical one, because there would be some interference in that sense. So your set couldn't have anything metal in it, and the props shouldn't have anything metal in them, so that limited you as well. Each system has its own limitations.
Getting back to the optical one, one of the other things you have to remember is that in order to really capture a physical object moving through 3D space and replicate that in virtual space, you need multiple cameras in that system. Because a single camera, assuming that's a regular video or film camera, something that does not have 3D capability, pointing that at an object is creating a two-dimensional image of something that's moving in three dimensions. The camera can't necessarily tell where movements are happening within the depth of that image, right? So if someone's moving in such a way where, let's say, they're moving their head so it would be bobbing closer to the camera, unless the size of the markers is such that something that subtle could be picked up by the camera system, you would lose that information. So what you need are multiple cameras on the same object, so that you can compare the data from the multiple angles to tell how this object is really moving through this three-dimensional space.
So it's kind of like the idea of having parallax with two eyes. You know, our eyes are offset, so by looking at an object, we can tell how far away it is, in part because of parallax. We also have other visual cues that tell us how far away something is, things like how tall it is in relation to where we are, or how tall it is in relation to other objects that are within our frame of vision. But parallax is very important, and it's the same sort of thing with these optical systems: you would have multiple cameras set up to try and capture the information that's going on in the frame, so that you could tell exactly how it's moving through that three-dimensional space. Yeah, it seems like, in order to capture the correct perspective, you need that additional information, even though you may not necessarily see it. It helps the animator do that. And the optical system also allows you to work with more than one actor, which was not really an option with some of the earlier systems.
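The parallax idea can be sketched with the classic stereo-depth relation. This assumes a heavily simplified, idealized setup (two parallel cameras with a known focal length in pixels, and all the numbers below are made up); real optical mocap rigs triangulate from many cameras at arbitrary angles, but the underlying principle is the same:

```python
# Minimal sketch of depth from parallax, assuming an idealized pair of
# parallel, side-by-side cameras. A marker appears at slightly different
# horizontal positions in each image; that shift (the disparity) gives
# depth: depth = focal_length * baseline / disparity.

def depth_from_disparity(focal_px, baseline_m, x_left_px, x_right_px):
    """Depth in meters of a marker seen by two side-by-side cameras."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("marker must shift between the two views")
    return focal_px * baseline_m / disparity

# Toy numbers: 800 px focal length, cameras 0.5 m apart, marker seen
# at x=500 in the left image and x=400 in the right image.
print(depth_from_disparity(800, 0.5, 500, 400))  # 4.0 (meters)
```

Note that as the disparity shrinks toward zero, the computed depth blows up, which is exactly the head-bobbing-toward-one-camera information a single camera loses.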
So, in other words, although it requires more equipment, you know, simply out of necessity, the optical system is really affording the animators an opportunity to use a greater amount of information, both from the different points of data they're getting from a single actor and from multiple actors on the set simultaneously, which enables them to create more complex work. Right. And this also gives us a good example of how the optical motion capture systems are a passive system, because you have these sensors you're wearing that are not, well, not even sensors, they're reflective markers that you're wearing. They aren't connected to any sort of electronic components at all, versus the active systems, like the electromagnetic one, where you are generating data by moving through a magnetic field and you have these big cables attached to you. With the optical motion capture systems, another thing that's kind of interesting, I think, is that on a lot of them, at least the early ones, the cameras would have infrared LEDs, so, emitters,
really, that were emitting infrared light that's outside 320 00:19:51,720 --> 00:19:55,919 Speaker 1: our visible spectrum. We cannot see infrared light. But 321 00:19:56,520 --> 00:19:59,480 Speaker 1: by putting an infrared filter on the camera, you could 322 00:19:59,640 --> 00:20:03,320 Speaker 1: have the camera pick up reflections of infrared light. And 323 00:20:03,400 --> 00:20:07,360 Speaker 1: that was a way of helping to identify the markers 324 00:20:07,400 --> 00:20:10,199 Speaker 1: that you had put on the actor. The 325 00:20:10,280 --> 00:20:14,400 Speaker 1: markers would be reflective specifically so that the infrared light 326 00:20:14,440 --> 00:20:17,280 Speaker 1: would reflect back toward the camera and give the most 327 00:20:17,320 --> 00:20:21,119 Speaker 1: accurate rendering of what's going on at any given moment 328 00:20:21,240 --> 00:20:25,800 Speaker 1: within a scene. So, yeah, it's another way of 329 00:20:25,840 --> 00:20:29,760 Speaker 1: making sure that the data being captured is as precise 330 00:20:29,800 --> 00:20:31,480 Speaker 1: as possible. I mean, that is, of course, the goal: 331 00:20:31,560 --> 00:20:36,160 Speaker 1: to try and recreate the physical movements as truthfully 332 00:20:36,200 --> 00:20:41,200 Speaker 1: as you possibly can, given all the limitations involved. Yeah, 333 00:20:41,240 --> 00:20:44,359 Speaker 1: and if you're looking for a real-life, easy-to-find 334 00:20:44,520 --> 00:20:48,800 Speaker 1: example of this, you would look no farther 335 00:20:48,920 --> 00:20:53,880 Speaker 1: than your local video game store.
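The marker-tracking idea just described, a camera picking out near-saturated reflective dots in an IR-filtered frame, can be sketched as a toy threshold-and-centroid pass. Real optical mocap pipelines are far more sophisticated; this only illustrates the concept, with invented brightness values.

```python
# Sketch of reducing an IR-filtered camera frame to marker positions:
# the retroreflective dots show up as near-saturated pixels, so a
# brightness threshold plus one centroid per connected bright blob
# is enough for this toy example.

def find_markers(frame, threshold=200):
    """frame: 2D list of 0-255 brightness values. Returns a list of
    (row, col) centroids of connected bright regions (4-connectivity)."""
    rows, cols = len(frame), len(frame[0])
    seen = [[False] * cols for _ in range(rows)]
    markers = []
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] >= threshold and not seen[r][c]:
                # flood-fill one blob, collecting its pixel coordinates
                stack, blob = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    blob.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < rows and 0 <= nx < cols \
                           and frame[ny][nx] >= threshold and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                cy = sum(p[0] for p in blob) / len(blob)
                cx = sum(p[1] for p in blob) / len(blob)
                markers.append((cy, cx))
    return markers
```

Feeding each camera's marker centroids into the stereo geometry is what lets the system place every dot in 3D space frame by frame.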
Um, because the Xbox 336 00:20:53,960 --> 00:20:59,840 Speaker 1: Kinect uses very much that exact form of technology. 337 00:21:00,119 --> 00:21:03,280 Speaker 1: It's using an infrared emitter, and it has cameras 338 00:21:03,280 --> 00:21:06,000 Speaker 1: that it uses to pick up 339 00:21:06,240 --> 00:21:08,800 Speaker 1: the information that is coming back from what 340 00:21:08,960 --> 00:21:11,520 Speaker 1: is being reflected around the room. And anybody who 341 00:21:11,520 --> 00:21:14,880 Speaker 1: has one is also aware that lighting is very much 342 00:21:14,880 --> 00:21:17,800 Speaker 1: an issue. The way that the room is lit 343 00:21:17,920 --> 00:21:20,920 Speaker 1: affects the information that the Kinect is able to relay 344 00:21:21,000 --> 00:21:24,000 Speaker 1: to the Xbox. Now, while it is sophisticated, 345 00:21:24,080 --> 00:21:26,199 Speaker 1: it's not as sophisticated as the kind of equipment that 346 00:21:26,240 --> 00:21:28,919 Speaker 1: they might use in making a movie or making a 347 00:21:29,080 --> 00:21:32,800 Speaker 1: video game. But it is very, very similar technology, and 348 00:21:32,800 --> 00:21:35,760 Speaker 1: in some ways I would argue that it's more sophisticated 349 00:21:35,760 --> 00:21:38,840 Speaker 1: than some of those early systems, simply because it 350 00:21:38,960 --> 00:21:42,600 Speaker 1: is able to capture a lot of information, whereas, 351 00:21:42,680 --> 00:21:45,359 Speaker 1: you know, the very early optical systems were only using 352 00:21:45,560 --> 00:21:49,240 Speaker 1: a handful of data points. Right. So it's 353 00:21:50,200 --> 00:21:53,200 Speaker 1: a pretty neat device, you know, not only used 354 00:21:53,200 --> 00:21:55,240 Speaker 1: for gaming.
Now the hacker community has fallen in love 355 00:21:55,280 --> 00:21:57,800 Speaker 1: with it too, because it can do so much and 356 00:21:57,800 --> 00:21:59,879 Speaker 1: can be used for so many things and is, you know, 357 00:22:00,000 --> 00:22:03,119 Speaker 1: fairly inexpensive. Yeah. The cool thing about the Kinect is 358 00:22:03,160 --> 00:22:06,040 Speaker 1: that, obviously, if you've 359 00:22:06,080 --> 00:22:09,000 Speaker 1: ever played an Xbox with a Kinect, you know you 360 00:22:09,040 --> 00:22:12,040 Speaker 1: don't have to go out and buy a snug bodysuit 361 00:22:12,080 --> 00:22:14,879 Speaker 1: covered in reflective markers in order to play. I mean, 362 00:22:14,920 --> 00:22:18,119 Speaker 1: it doesn't hurt, you know, if you 363 00:22:18,119 --> 00:22:20,240 Speaker 1: can pull that look off. There are very few of 364 00:22:20,280 --> 00:22:23,800 Speaker 1: us who can. I count myself among them. But you 365 00:22:23,840 --> 00:22:25,800 Speaker 1: don't have to do that, because what it's doing is 366 00:22:25,800 --> 00:22:30,880 Speaker 1: it's actually projecting essentially a grid in infrared light, 367 00:22:31,240 --> 00:22:33,680 Speaker 1: so you can't see the grid, but it's being projected 368 00:22:33,960 --> 00:22:37,720 Speaker 1: into the room, and then when you move within 369 00:22:37,760 --> 00:22:40,320 Speaker 1: the space, you are deforming that grid. You know, the 370 00:22:40,359 --> 00:22:45,120 Speaker 1: camera that's picking up the reflections of that infrared light 371 00:22:45,760 --> 00:22:48,920 Speaker 1: can detect when the grid's being deformed by a physical 372 00:22:49,000 --> 00:22:52,680 Speaker 1: object interrupting the grid. So as you move, you interrupt 373 00:22:52,680 --> 00:22:54,480 Speaker 1: different parts of the grid, and it can start to 374 00:22:54,640 --> 00:23:00,840 Speaker 1: interpret those as motions and commands.
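The deformed-grid idea described here can be cartooned as comparing a reference projection against what the camera actually sees, cell by cell. This is only an illustration of the concept as the hosts describe it, with invented pixel values; the real Kinect uses a more elaborate structured-light approach.

```python
# Toy version of the "deformed grid" idea: compare the projected
# reference pattern with the observed frame and flag the grid cells
# where the pattern no longer matches, i.e., where a body is
# interrupting the light. Deliberately simplified.

def changed_cells(reference, observed, cell=2, tolerance=10):
    """Both images are 2D lists of equal size (dimensions divisible by
    `cell`). Returns the set of (cell_row, cell_col) whose average
    brightness shifted by more than `tolerance`."""
    flagged = set()
    for r0 in range(0, len(reference), cell):
        for c0 in range(0, len(reference[0]), cell):
            ref_px = [reference[r][c] for r in range(r0, r0 + cell)
                                      for c in range(c0, c0 + cell)]
            obs_px = [observed[r][c] for r in range(r0, r0 + cell)
                                     for c in range(c0, c0 + cell)]
            if abs(sum(ref_px) - sum(obs_px)) / len(ref_px) > tolerance:
                flagged.add((r0 // cell, c0 // cell))
    return flagged
```

Tracking which cells change frame over frame is, very roughly, how a silhouette of the player can be followed and turned into gestures.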
It's 375 00:23:00,920 --> 00:23:03,600 Speaker 1: not as precise as what we're talking about with the 376 00:23:03,640 --> 00:23:06,959 Speaker 1: optical systems that are used in movies and video games, 377 00:23:07,040 --> 00:23:11,000 Speaker 1: to create them, that is, not to play them. 378 00:23:11,000 --> 00:23:13,960 Speaker 1: It's not as precise as those. But it also has 379 00:23:14,080 --> 00:23:17,280 Speaker 1: other elements that help balance it out. Like, it has 380 00:23:18,000 --> 00:23:23,920 Speaker 1: regular optical cameras paired with software that 381 00:23:24,280 --> 00:23:28,400 Speaker 1: aids it in recognizing things, like facial recognition software, which 382 00:23:28,440 --> 00:23:32,760 Speaker 1: does not necessarily rely upon that infrared grid. It relies 383 00:23:32,800 --> 00:23:37,440 Speaker 1: more on the traditional camera functions, but has the software 384 00:23:37,520 --> 00:23:42,040 Speaker 1: included that lets the programs within recognize who is 385 00:23:42,080 --> 00:23:46,640 Speaker 1: standing in front of it. So that combination increases the precision, 386 00:23:46,640 --> 00:23:48,920 Speaker 1: which of course is very important whenever you're playing a game. 387 00:23:48,960 --> 00:23:51,560 Speaker 1: I mean, anyone who's played any sort of game where 388 00:23:51,600 --> 00:23:55,000 Speaker 1: you're using a faulty controller, or it's just a system 389 00:23:55,080 --> 00:23:59,120 Speaker 1: that hasn't been fully finished yet, it's 390 00:23:59,119 --> 00:24:01,679 Speaker 1: just in the prototype stage or whatever, you may have noticed 391 00:24:01,720 --> 00:24:05,720 Speaker 1: that it could be very frustrating to try and control 392 00:24:05,880 --> 00:24:10,240 Speaker 1: something where the actual controller is not as responsive as 393 00:24:10,280 --> 00:24:15,200 Speaker 1: you would hope.
It's not a fun experience. But anyway, 394 00:24:15,280 --> 00:24:20,920 Speaker 1: that is kind of related to this whole motion capture technology. 395 00:24:22,200 --> 00:24:24,000 Speaker 1: I'm sorry, you look like you have 396 00:24:24,040 --> 00:24:26,400 Speaker 1: something to say. Well, no, I was going 397 00:24:26,480 --> 00:24:29,680 Speaker 1: to say that, you know, other 398 00:24:29,720 --> 00:24:33,440 Speaker 1: than my earlier statement about sports, we've 399 00:24:33,440 --> 00:24:36,680 Speaker 1: been talking about it in an entertainment context, 400 00:24:37,200 --> 00:24:42,080 Speaker 1: about the ability to capture motion to make characters 401 00:24:42,119 --> 00:24:46,640 Speaker 1: more realistic. And that is exactly what they 402 00:24:46,640 --> 00:24:49,120 Speaker 1: want to do when they are using this in sports medicine. 403 00:24:49,240 --> 00:24:54,200 Speaker 1: Jonathan alluded earlier to the difficulty in capturing 404 00:24:54,240 --> 00:24:58,719 Speaker 1: all the little subtle motions that go into 405 00:24:58,760 --> 00:25:03,000 Speaker 1: a Major League Baseball player's pitching. And you know, 406 00:25:03,040 --> 00:25:06,600 Speaker 1: when somebody gets hurt, sometimes they go 407 00:25:06,680 --> 00:25:12,640 Speaker 1: through extensive surgery. The Tommy John procedure is famous. 408 00:25:12,680 --> 00:25:16,040 Speaker 1: You know, they do a ligament transplant to help 409 00:25:16,119 --> 00:25:20,000 Speaker 1: rebuild a pitcher's elbow, and that can really throw off 410 00:25:20,119 --> 00:25:23,080 Speaker 1: the mechanics of a pitcher's motion.
So they use this 411 00:25:23,160 --> 00:25:28,480 Speaker 1: motion capture technology to really get an idea of how 412 00:25:28,640 --> 00:25:32,040 Speaker 1: that person is throwing, going about the mechanics of 413 00:25:32,080 --> 00:25:35,040 Speaker 1: their typical game play. And that's exactly the same 414 00:25:35,119 --> 00:25:36,960 Speaker 1: kind of thing that they're doing when they create these 415 00:25:37,040 --> 00:25:41,760 Speaker 1: very realistic sports games. But you know, in this case, 416 00:25:41,760 --> 00:25:44,480 Speaker 1: they're using it for sports medicine, to see if 417 00:25:44,520 --> 00:25:48,240 Speaker 1: they can go back and recreate some of the motions 418 00:25:48,240 --> 00:25:52,200 Speaker 1: that made them so successful before they were injured. Now, ironically, 419 00:25:52,400 --> 00:25:59,360 Speaker 1: for entertainment purposes, especially video, you can 420 00:25:59,480 --> 00:26:04,200 Speaker 1: get too realistic. The Japanese professor Masahiro Mori 421 00:26:04,400 --> 00:26:08,600 Speaker 1: is famous for his uncanny valley, which is 422 00:26:08,720 --> 00:26:13,560 Speaker 1: a robotics term used for a robot that 423 00:26:13,600 --> 00:26:16,399 Speaker 1: looks so much, and moves so much, like a 424 00:26:16,520 --> 00:26:20,399 Speaker 1: human that it creeps us out. It looks a 425 00:26:20,480 --> 00:26:23,359 Speaker 1: little too realistic. And I can think of, we're actually 426 00:26:23,359 --> 00:26:26,639 Speaker 1: recording this in December of two thousand and twelve, and
one of the 427 00:26:26,760 --> 00:26:28,680 Speaker 1: movies that comes on about this time of year is 428 00:26:28,680 --> 00:26:35,080 Speaker 1: The Polar Express, which is loved 429 00:26:35,119 --> 00:26:38,720 Speaker 1: and reviled both for its story and for 430 00:26:38,760 --> 00:26:41,080 Speaker 1: the way that they used motion capture, because the characters 431 00:26:41,080 --> 00:26:43,960 Speaker 1: in it are so realistic, they're downright creepy. Yeah. It's 432 00:26:44,000 --> 00:26:47,520 Speaker 1: one of those things where they are almost, but 433 00:26:47,800 --> 00:26:52,480 Speaker 1: not quite, able to pass for a real person, so 434 00:26:52,520 --> 00:26:58,280 Speaker 1: that there's just enough off about them to be unsettling. Now, 435 00:26:58,840 --> 00:27:01,160 Speaker 1: this does bring up something else that's kind of interesting. 436 00:27:01,600 --> 00:27:04,040 Speaker 1: We have an article on HowStuffWorks dot com 437 00:27:04,080 --> 00:27:08,520 Speaker 1: about MotionScan technology, which is, as I said earlier, 438 00:27:08,560 --> 00:27:14,320 Speaker 1: a proprietary technology. It's more specific than just motion capture. 439 00:27:14,480 --> 00:27:19,920 Speaker 1: It's specifically meant to capture facial motion activity. So when an 440 00:27:19,920 --> 00:27:23,919 Speaker 1: actor is speaking, when they're delivering lines, the way that 441 00:27:24,040 --> 00:27:30,720 Speaker 1: they furrow their brow or move their eyes or smile, 442 00:27:31,040 --> 00:27:34,520 Speaker 1: or give a facial tic, anything like that, this 443 00:27:35,160 --> 00:27:37,960 Speaker 1: system is designed to pick that up so that it 444 00:27:38,080 --> 00:27:41,320 Speaker 1: can be recreated virtually in a game. And it was 445 00:27:41,440 --> 00:27:45,160 Speaker 1: used to great effect, in my opinion, in L.A. Noire.
446 00:27:46,720 --> 00:27:49,480 Speaker 1: L.A. Noire was a video game that came out in 447 00:27:49,480 --> 00:27:52,680 Speaker 1: two thousand eleven, and it was a game in which 448 00:27:52,680 --> 00:27:55,560 Speaker 1: you played a, well, you played a couple of different characters, 449 00:27:55,560 --> 00:27:57,880 Speaker 1: but the one you played for most of the game, 450 00:27:58,160 --> 00:28:02,960 Speaker 1: spoiler alert, was a police detective, and you're 451 00:28:03,040 --> 00:28:07,000 Speaker 1: kind of rising through the ranks in L.A. 452 00:28:07,440 --> 00:28:11,720 Speaker 1: during the middle of the twentieth century, 453 00:28:12,200 --> 00:28:19,360 Speaker 1: and it's notable in that you're 454 00:28:19,600 --> 00:28:23,680 Speaker 1: spending most of the game looking at people's reactions. 455 00:28:24,080 --> 00:28:26,359 Speaker 1: You know, the idea behind L.A. Noire was that it was 456 00:28:26,440 --> 00:28:30,240 Speaker 1: a new type of video game where you would interrogate 457 00:28:30,440 --> 00:28:34,919 Speaker 1: characters throughout your investigations, and as you interrogate them, you 458 00:28:35,000 --> 00:28:38,920 Speaker 1: had to watch the characters' facial reactions to kind of 459 00:28:38,960 --> 00:28:41,320 Speaker 1: get an idea of whether the character was trying to 460 00:28:41,360 --> 00:28:44,760 Speaker 1: be evasive or if they were telling the truth. And 461 00:28:44,800 --> 00:28:47,400 Speaker 1: you would do things like watch their eyes, and 462 00:28:47,440 --> 00:28:50,080 Speaker 1: if they weren't able to maintain eye contact, that was 463 00:28:50,160 --> 00:28:53,520 Speaker 1: an indication that perhaps they were being less than truthful.
464 00:28:53,680 --> 00:28:56,680 Speaker 1: Or if they would, you know, twitch their mouth or 465 00:28:56,680 --> 00:29:00,720 Speaker 1: clench their jaw, these would be little hints that 466 00:29:00,880 --> 00:29:04,920 Speaker 1: perhaps there's more going on than what they're letting on. 467 00:29:05,400 --> 00:29:10,120 Speaker 1: And obviously, if your gameplay depends upon trying to determine 468 00:29:10,160 --> 00:29:13,560 Speaker 1: whether or not a virtual character is telling the truth, 469 00:29:13,600 --> 00:29:17,680 Speaker 1: you have to be able to represent those facial expressions 470 00:29:17,800 --> 00:29:22,080 Speaker 1: as closely to reality as possible, or else the game 471 00:29:22,200 --> 00:29:26,520 Speaker 1: does not work. So they used this MotionScan technology, 472 00:29:26,560 --> 00:29:29,360 Speaker 1: and the way that they did this was that they 473 00:29:29,560 --> 00:29:35,280 Speaker 1: had a very brightly lit studio that had lights trained 474 00:29:35,360 --> 00:29:38,120 Speaker 1: on an actor from just about every angle, and the 475 00:29:38,160 --> 00:29:41,160 Speaker 1: purpose of that was to try and eliminate shadows, because 476 00:29:41,200 --> 00:29:43,000 Speaker 1: any sort of shadows you would have there would of 477 00:29:43,040 --> 00:29:46,800 Speaker 1: course affect the actual capture. It was really all about 478 00:29:46,840 --> 00:29:51,640 Speaker 1: the light. And they used thirty-two high-definition cameras. 479 00:29:52,560 --> 00:29:54,920 Speaker 1: So think about that, thirty-two high-definition cameras just 480 00:29:55,080 --> 00:29:59,120 Speaker 1: to capture an actor's facial performance. Like, that's it.
There, 481 00:29:58,960 --> 00:30:02,320 Speaker 1: there's no other move meant the actor is seated at 482 00:30:02,320 --> 00:30:05,320 Speaker 1: the time and um and had to remain as still 483 00:30:05,360 --> 00:30:09,920 Speaker 1: as possible and just do all the acting with their face, 484 00:30:10,320 --> 00:30:13,080 Speaker 1: which for anyone out there who's done any sort of acting, 485 00:30:13,560 --> 00:30:19,600 Speaker 1: you know, that's incredibly challenging because actors are trained to 486 00:30:19,760 --> 00:30:23,320 Speaker 1: use their whole body when they are performance making a performance. 487 00:30:23,360 --> 00:30:27,360 Speaker 1: They're trained to to really think about movement. I mean, 488 00:30:27,400 --> 00:30:29,440 Speaker 1: if you're if you're really serious about acting, you've probably 489 00:30:29,440 --> 00:30:32,479 Speaker 1: taken movement classes. And to suddenly have all of that 490 00:30:32,560 --> 00:30:35,800 Speaker 1: taken away and all of your acting is restricted to 491 00:30:35,920 --> 00:30:39,880 Speaker 1: just your face, it's pretty that's pretty dramatic. It's tough 492 00:30:39,960 --> 00:30:42,440 Speaker 1: to do, but anyway, that's what the actors had to do. 493 00:30:42,480 --> 00:30:45,880 Speaker 1: They had to sit down and and restrict their acting 494 00:30:45,920 --> 00:30:50,240 Speaker 1: to just their facial expressions without it going like over 495 00:30:50,320 --> 00:30:53,240 Speaker 1: the top crazy, because that would be just as distracting 496 00:30:53,280 --> 00:30:57,640 Speaker 1: as not enough performance at all. And these thirty two 497 00:30:57,720 --> 00:31:01,800 Speaker 1: cameras were paired up, so six pairs of cameras. 
There was, 498 00:31:01,920 --> 00:31:04,160 Speaker 1: technically, a thirty-third camera as well, that 499 00:31:04,200 --> 00:31:07,880 Speaker 1: the director used to watch the scene and give directions 500 00:31:07,920 --> 00:31:12,400 Speaker 1: to the actors. But these pairs of cameras 501 00:31:12,920 --> 00:31:15,200 Speaker 1: were trained on all these different angles of the face 502 00:31:15,320 --> 00:31:19,640 Speaker 1: in order to capture that performance, so that in 503 00:31:19,720 --> 00:31:24,120 Speaker 1: the virtual world they could recreate it accurately, which to 504 00:31:24,240 --> 00:31:28,200 Speaker 1: me is phenomenal. And apparently the way the system works 505 00:31:28,280 --> 00:31:33,120 Speaker 1: is you get that virtual version of the person's face 506 00:31:33,600 --> 00:31:38,520 Speaker 1: and head almost instantly, which is kind of creepy but 507 00:31:38,600 --> 00:31:42,360 Speaker 1: also awesome. Chris Pollette and I have a little bit 508 00:31:42,400 --> 00:31:44,560 Speaker 1: more to say about how motion capture works, but first 509 00:31:44,640 --> 00:31:55,600 Speaker 1: let's take another quick break. It's funny, too, that they 510 00:31:55,720 --> 00:31:59,880 Speaker 1: used that many cameras in the creation of a video game, because 511 00:32:00,600 --> 00:32:04,880 Speaker 1: elsewhere that article notes that Andy Serkis, 512 00:32:04,960 --> 00:32:12,920 Speaker 1: who was playing Gollum, only had cameras 513 00:32:12,960 --> 00:32:16,240 Speaker 1: on him, but in doing so, they were able 514 00:32:16,280 --> 00:32:21,560 Speaker 1: to identify roughly, you know, ten thousand different 515 00:32:21,600 --> 00:32:24,840 Speaker 1: kinds of facial movements 516 00:32:24,880 --> 00:32:28,280 Speaker 1: that they could use in animating the character on screen.
517 00:32:28,440 --> 00:32:33,560 Speaker 1: So clearly, you know, this is a very, very 518 00:32:33,680 --> 00:32:38,080 Speaker 1: high-tech and painstaking procedure, but in doing 519 00:32:38,160 --> 00:32:41,160 Speaker 1: so they can create very, very realistic movements. Yeah, 520 00:32:41,160 --> 00:32:44,720 Speaker 1: there's a lot of number crunching involved, and frankly, the 521 00:32:44,720 --> 00:32:48,920 Speaker 1: part that takes place after you've captured the data 522 00:32:50,040 --> 00:32:52,880 Speaker 1: can be dramatically different from one case to the next. 523 00:32:53,240 --> 00:32:56,920 Speaker 1: In some cases, you may have already created an 524 00:32:56,960 --> 00:33:01,480 Speaker 1: animated figure pretty much from start to finish. You might 525 00:33:01,480 --> 00:33:05,600 Speaker 1: not have completely put textures on it or something, 526 00:33:05,640 --> 00:33:10,120 Speaker 1: but you might have essentially the way the character is 527 00:33:10,120 --> 00:33:14,160 Speaker 1: going to look in the finished product, and then 528 00:33:14,320 --> 00:33:16,720 Speaker 1: you just map it to the movements that you've captured, 529 00:33:17,000 --> 00:33:19,320 Speaker 1: and there it goes.
And in other cases 530 00:33:19,360 --> 00:33:22,240 Speaker 1: you might see that what they do is they capture 531 00:33:22,840 --> 00:33:25,200 Speaker 1: the motions, and then you essentially have what looks like 532 00:33:25,720 --> 00:33:30,200 Speaker 1: a very primitive stick-figure skeleton that moves in the 533 00:33:30,200 --> 00:33:33,080 Speaker 1: way that the actor moved, but there's no definition, there's 534 00:33:33,120 --> 00:33:36,680 Speaker 1: no character there yet. And you may have animators who 535 00:33:36,720 --> 00:33:39,760 Speaker 1: build the character somewhat based upon the way the actor 536 00:33:39,840 --> 00:33:43,480 Speaker 1: moved through the space, so that perhaps the character's design 537 00:33:43,560 --> 00:33:47,120 Speaker 1: is not finalized until you've captured that performance, and 538 00:33:47,200 --> 00:33:50,120 Speaker 1: the performance helps guide the design of the character. It 539 00:33:50,240 --> 00:33:54,960 Speaker 1: all depends on the specific technology that's being used and 540 00:33:55,280 --> 00:33:59,040 Speaker 1: the preference of the crew that's designing whatever it 541 00:33:59,120 --> 00:34:01,480 Speaker 1: is that they're making, whether it's a video game or a movie, 542 00:34:01,520 --> 00:34:05,640 Speaker 1: TV show, commercial, whatever it happens to be. In 543 00:34:05,680 --> 00:34:08,360 Speaker 1: the case of digital puppetry, obviously you would already have 544 00:34:08,600 --> 00:34:13,040 Speaker 1: the full character realized, so that just by using 545 00:34:13,080 --> 00:34:17,719 Speaker 1: whatever control mechanism happens to be there, you would be 546 00:34:17,800 --> 00:34:21,360 Speaker 1: able to make the puppet move in real time. Otherwise 547 00:34:21,360 --> 00:34:24,640 Speaker 1: it's not really puppetry.
And again, that's sort of 548 00:34:24,680 --> 00:34:27,000 Speaker 1: like, if you've been to that Turtle Talk 549 00:34:27,040 --> 00:34:31,640 Speaker 1: thing I talked about, at Disney World or Disneyland, 550 00:34:31,760 --> 00:34:34,720 Speaker 1: and I'm sure there are other similar ones. I think the Monsters, Inc. 551 00:34:35,200 --> 00:34:38,880 Speaker 1: Laugh Floor has a similar setup, where you've got a 552 00:34:38,880 --> 00:34:41,799 Speaker 1: digital character on a screen that can react in real 553 00:34:41,920 --> 00:34:45,200 Speaker 1: time to things that are happening within the physical environment. 554 00:34:45,280 --> 00:34:48,960 Speaker 1: So they interact with the audience, like they'll specifically single 555 00:34:49,000 --> 00:34:52,799 Speaker 1: people out and chat with people in the audience. And 556 00:34:52,800 --> 00:34:54,920 Speaker 1: to kids, this is amazing. I mean, it's a 557 00:34:54,960 --> 00:34:59,279 Speaker 1: cartoon character acting in real time. It's a real person. Now, 558 00:34:59,360 --> 00:35:01,799 Speaker 1: to adults, it's fascinating, because they're like, how the heck 559 00:35:01,840 --> 00:35:06,720 Speaker 1: did that happen? But yeah, it's all based 560 00:35:06,719 --> 00:35:10,399 Speaker 1: on the same sort of technology. And it's 561 00:35:10,440 --> 00:35:14,560 Speaker 1: really interesting to me to see how the field is 562 00:35:14,600 --> 00:35:17,640 Speaker 1: evolving over time, because things like the Kinect show that 563 00:35:18,200 --> 00:35:21,400 Speaker 1: we are adapting the same sort of technology in different ways. 564 00:35:21,400 --> 00:35:24,399 Speaker 1: We're using different implementations to essentially do the same thing, 565 00:35:25,080 --> 00:35:28,280 Speaker 1: and perhaps we will get to a point where 566 00:35:29,000 --> 00:35:33,640 Speaker 1: we won't have to worry about all the sensors so much.
567 00:35:34,320 --> 00:35:38,880 Speaker 1: You can maybe have an actor who's not completely coated 568 00:35:38,880 --> 00:35:42,319 Speaker 1: in stickers perform, and you could capture all that 569 00:35:42,440 --> 00:35:45,680 Speaker 1: data without having to worry about, you know, tracking these 570 00:35:45,719 --> 00:35:48,400 Speaker 1: little dots. That might be something that we see in 571 00:35:48,400 --> 00:35:50,000 Speaker 1: the future. I mean, MotionScan is kind of 572 00:35:50,000 --> 00:35:54,480 Speaker 1: like that, because before MotionScan, with that facial acting 573 00:35:55,000 --> 00:36:00,319 Speaker 1: technology, whenever I saw anyone who was having 574 00:36:00,320 --> 00:36:03,560 Speaker 1: their face tracked for a performance, they always were wearing 575 00:36:04,200 --> 00:36:08,120 Speaker 1: those tiny little white stickers all over their face to track. 576 00:36:08,160 --> 00:36:09,719 Speaker 1: I mean, we've got a lot of muscles in our face. 577 00:36:09,719 --> 00:36:12,040 Speaker 1: There's something like nineteen muscles or something that you have 578 00:36:12,080 --> 00:36:15,239 Speaker 1: to track, so you would have all these little 579 00:36:15,280 --> 00:36:17,120 Speaker 1: dots on your face to track those motions. Well, with 580 00:36:17,200 --> 00:36:20,920 Speaker 1: MotionScan you don't need those anymore. So maybe we'll 581 00:36:20,920 --> 00:36:22,840 Speaker 1: see something like that. Of course, that would really 582 00:36:23,280 --> 00:36:27,400 Speaker 1: depend upon the lighting, which could be an issue if you're shooting 583 00:36:27,480 --> 00:36:30,400 Speaker 1: a virtual character that's next to real characters, like in 584 00:36:30,440 --> 00:36:34,160 Speaker 1: The Lord of the Rings. "Real" being relative, I guess. You know, 585 00:36:34,239 --> 00:36:36,840 Speaker 1: your mileage may vary. I mean, they're hobbits.
But anyway, 586 00:36:37,160 --> 00:36:40,239 Speaker 1: when you're next to real people, clearly you can't mess 587 00:36:40,280 --> 00:36:42,640 Speaker 1: with the lighting too much, or it'll just make the 588 00:36:42,640 --> 00:36:48,200 Speaker 1: whole scene look strange. Speaking of strange, while you 589 00:36:48,320 --> 00:36:54,520 Speaker 1: might think that the techniques used in motion capture, 590 00:36:54,560 --> 00:36:57,239 Speaker 1: you know, bringing film into it, are adding a 591 00:36:57,239 --> 00:37:01,839 Speaker 1: lot of advancement to film, basically, some 592 00:37:01,880 --> 00:37:06,200 Speaker 1: people sort of regard it as cheating. Yeah. I did 593 00:37:06,239 --> 00:37:11,360 Speaker 1: some research that indicated that although some other types 594 00:37:11,400 --> 00:37:15,960 Speaker 1: of animation are considered, you know, considered more artful, 595 00:37:16,160 --> 00:37:21,000 Speaker 1: motion capture is sort of not. Not everyone, but some people say, well, 596 00:37:21,040 --> 00:37:23,359 Speaker 1: you know, it's not Oscar-worthy, because you were 597 00:37:23,440 --> 00:37:29,000 Speaker 1: using these computer-aided animation techniques that really 598 00:37:29,680 --> 00:37:32,759 Speaker 1: are simulating human motion, and it's just not real. 599 00:37:33,239 --> 00:37:36,320 Speaker 1: And the argument that I've seen used against that is, well, 600 00:37:36,520 --> 00:37:41,720 Speaker 1: you consider rotoscoping okay; why don't you consider motion capture, 601 00:37:41,719 --> 00:37:45,640 Speaker 1: which is a kind of descendant of that technology? Why 602 00:37:45,680 --> 00:37:48,520 Speaker 1: isn't that okay to, you know, to consider 603 00:37:48,600 --> 00:37:53,720 Speaker 1: for quality and for awards?
But 604 00:37:53,760 --> 00:37:56,360 Speaker 1: apparently it's sort of a hot topic among 605 00:37:57,000 --> 00:38:01,759 Speaker 1: moviemakers. Yeah, I can see why an animator, 606 00:38:01,760 --> 00:38:05,719 Speaker 1: a traditional animator or even a computer animator, I mean, 607 00:38:05,760 --> 00:38:09,600 Speaker 1: that's closer and closer to becoming traditional already, but either 608 00:38:09,680 --> 00:38:12,799 Speaker 1: a hand-drawn animator or a computer animator, someone who goes 609 00:38:12,880 --> 00:38:16,040 Speaker 1: through the trouble of animating these things and doing a 610 00:38:16,080 --> 00:38:19,960 Speaker 1: lot of this work, "by hand" seems like it's 611 00:38:19,960 --> 00:38:22,960 Speaker 1: the wrong term, but personally going through and creating 612 00:38:22,960 --> 00:38:27,560 Speaker 1: these performances, I can see where they might feel that way. 613 00:38:27,600 --> 00:38:29,520 Speaker 1: I have a completely different perspective on it. Of course, 614 00:38:29,520 --> 00:38:31,920 Speaker 1: I'm not an animator, so that's part of it. But 615 00:38:32,040 --> 00:38:34,520 Speaker 1: I think of it as creating a performance. And in 616 00:38:34,600 --> 00:38:36,400 Speaker 1: the sense of creating a performance, I think it's a 617 00:38:36,440 --> 00:38:42,839 Speaker 1: completely legitimate tool, because you're still relying on an actor 618 00:38:42,960 --> 00:38:48,200 Speaker 1: to create a performance that people will relate 619 00:38:48,239 --> 00:38:52,080 Speaker 1: to. Whether it's a character that you're supposed to love 620 00:38:52,239 --> 00:38:57,120 Speaker 1: or hate or fear, that all is dependent upon the 621 00:38:57,160 --> 00:39:01,960 Speaker 1: animator and the actor and several other people working to 622 00:39:02,120 --> 00:39:06,880 Speaker 1: create this performance.
And I don't see anything 623 00:39:07,200 --> 00:39:10,799 Speaker 1: wrong with that. That, to me, is a completely legitimate 624 00:39:10,840 --> 00:39:16,560 Speaker 1: form of creating the art of entertainment. So, I mean, 625 00:39:16,600 --> 00:39:20,000 Speaker 1: I do understand, from an artistic perspective, where some people 626 00:39:20,040 --> 00:39:21,759 Speaker 1: could have a problem with it. But if you 627 00:39:21,840 --> 00:39:24,640 Speaker 1: take a bigger-picture look, not just at what 628 00:39:24,680 --> 00:39:27,840 Speaker 1: technique you're using but at the end goal of creating, 629 00:39:28,760 --> 00:39:30,680 Speaker 1: whether you want to call it art or not, 630 00:39:30,880 --> 00:39:35,120 Speaker 1: creating something that has an impact on the viewer, or 631 00:39:35,200 --> 00:39:38,040 Speaker 1: the player in the case of a video game, I think 632 00:39:38,120 --> 00:39:42,120 Speaker 1: that's more important. But then again, like I said, 633 00:39:42,120 --> 00:39:44,439 Speaker 1: I'm not an animator, so I don't have that kind 634 00:39:44,440 --> 00:39:47,480 Speaker 1: of emotional attachment. You know, I'm not vested in it 635 00:39:47,520 --> 00:39:50,560 Speaker 1: in that way. So I'd be curious to hear 636 00:39:50,600 --> 00:39:54,080 Speaker 1: what our listeners think about motion capture. 637 00:39:54,440 --> 00:39:58,399 Speaker 1: Is it cheating? Or is it, as Red 638 00:39:58,520 --> 00:40:02,719 Speaker 1: vs. Blue would have you say, a legitimate strategy? What 639 00:40:02,719 --> 00:40:04,440 Speaker 1: do you think? What do you consider motion 640 00:40:04,440 --> 00:40:08,319 Speaker 1: capture? You should let us know. Yeah, I do 641 00:40:08,760 --> 00:40:15,279 Speaker 1: see where it might make a traditional animator concerned, 642 00:40:15,400 --> 00:40:19,239 Speaker 1: but
I don't really think it diminishes 643 00:40:19,400 --> 00:40:24,120 Speaker 1: their artistic value to a work, whatever 644 00:40:24,160 --> 00:40:26,560 Speaker 1: it may be that they are working on. And 645 00:40:26,600 --> 00:40:29,600 Speaker 1: there are certain times, I'm sure, where you would 646 00:40:29,680 --> 00:40:33,799 Speaker 1: argue that using these techniques is completely inappropriate to what 647 00:40:34,200 --> 00:40:37,440 Speaker 1: they might do. But yeah, I mean, it's 648 00:40:37,480 --> 00:40:41,840 Speaker 1: always a concern when you start saying, well, the 649 00:40:41,840 --> 00:40:43,960 Speaker 1: machine can do it, and we don't really need people 650 00:40:44,000 --> 00:40:47,359 Speaker 1: to do it, so get out. Yeah, I don't think 651 00:40:47,360 --> 00:40:51,759 Speaker 1: that's ever gonna be fully the case, because 652 00:40:51,760 --> 00:40:54,560 Speaker 1: you're going to have certain characters within movies that are 653 00:40:55,360 --> 00:40:59,960 Speaker 1: going to be so different from the way humans are built, 654 00:41:00,920 --> 00:41:03,919 Speaker 1: so to speak, that motion capture would 655 00:41:03,920 --> 00:41:06,839 Speaker 1: not be practical. For example, let's say that the 656 00:41:06,960 --> 00:41:12,399 Speaker 1: character that you're creating has really super long arms, and 657 00:41:12,440 --> 00:41:14,760 Speaker 1: you know, you've got an actor who's pretty lanky, 658 00:41:14,760 --> 00:41:18,440 Speaker 1: but their arms are not as long as the character's arms. 
659 00:41:18,520 --> 00:41:20,640 Speaker 1: If you were to just do a direct translation of the 660 00:41:20,680 --> 00:41:24,759 Speaker 1: actor's movements into the animation, it might not look right, 661 00:41:24,840 --> 00:41:30,120 Speaker 1: because the character has different dimensions, their body is built 662 00:41:30,200 --> 00:41:34,000 Speaker 1: differently than the actor's, and so without tweaking it, without 663 00:41:34,040 --> 00:41:36,440 Speaker 1: having an animator go in there and adjust this and 664 00:41:36,480 --> 00:41:39,600 Speaker 1: make it look correct compared to what, you know, 665 00:41:39,640 --> 00:41:42,440 Speaker 1: the vision is for the movie, it doesn't come 666 00:41:42,440 --> 00:41:47,040 Speaker 1: out correctly, it doesn't look right. So I think there's 667 00:41:47,719 --> 00:41:51,520 Speaker 1: very little risk of motion capture ever taking that away completely. 668 00:41:51,560 --> 00:41:56,880 Speaker 1: Plus there is something to, you know, creating a performance 669 00:41:56,920 --> 00:42:01,560 Speaker 1: through traditional animation that, you know, does feel different 670 00:42:01,560 --> 00:42:04,520 Speaker 1: from motion capture, but that's not a bad thing. 671 00:42:04,880 --> 00:42:08,640 Speaker 1: It just depends upon the vision of the director and 672 00:42:08,840 --> 00:42:12,600 Speaker 1: what the tone of the piece needs to be. And 673 00:42:12,760 --> 00:42:17,279 Speaker 1: that wraps up another classic episode of tech Stuff. Hope 674 00:42:17,320 --> 00:42:20,279 Speaker 1: you guys enjoyed it. If you have any suggestions for 675 00:42:20,400 --> 00:42:23,360 Speaker 1: future episodes of tech Stuff, feel free to get in 676 00:42:23,400 --> 00:42:25,439 Speaker 1: touch with me. 
You can send an email to tech 677 00:42:25,520 --> 00:42:28,200 Speaker 1: Stuff at how stuff works dot com, or you can 678 00:42:28,280 --> 00:42:30,920 Speaker 1: drop me a line on Facebook or Twitter. The handle at 679 00:42:31,000 --> 00:42:34,520 Speaker 1: both of those is tech Stuff HSW. You can pop 680 00:42:34,560 --> 00:42:37,319 Speaker 1: on over to our website, that's tech Stuff Podcast dot com. 681 00:42:37,320 --> 00:42:39,520 Speaker 1: You're gonna find a link to our archive, where we 682 00:42:39,560 --> 00:42:44,239 Speaker 1: have every episode we've ever published, right there, searchable, so 683 00:42:44,239 --> 00:42:46,799 Speaker 1: you can go check that out, and you can also 684 00:42:46,920 --> 00:42:49,640 Speaker 1: find a link to our online store, where every purchase 685 00:42:49,680 --> 00:42:52,080 Speaker 1: you make goes to help the show. We greatly appreciate it, 686 00:42:52,440 --> 00:42:59,360 Speaker 1: and I'll talk to you again really soon. Tech Stuff is 687 00:42:59,360 --> 00:43:01,840 Speaker 1: a production of I Heart Radio's How Stuff Works. For 688 00:43:02,000 --> 00:43:04,960 Speaker 1: more podcasts from I Heart Radio, visit the I Heart 689 00:43:05,040 --> 00:43:08,200 Speaker 1: Radio app, Apple Podcasts, or wherever you listen to your 690 00:43:08,239 --> 00:43:08,960 Speaker 1: favorite shows.