1 00:00:05,000 --> 00:00:08,720 Speaker 1: Can a person who is blind come to see through 2 00:00:08,800 --> 00:00:13,080 Speaker 1: her tongue? Can a baby be born without ears? What 3 00:00:13,119 --> 00:00:15,680 Speaker 1: is it like to have the smell of a dog? 4 00:00:15,840 --> 00:00:17,680 Speaker 1: And what does any of this have to do with 5 00:00:17,920 --> 00:00:25,080 Speaker 1: airplane pilots or Westworld or Potato Head? Welcome to Inner 6 00:00:25,160 --> 00:00:29,520 Speaker 1: Cosmos with me, David Eagleman. I'm a neuroscientist and an author, 7 00:00:30,000 --> 00:00:32,480 Speaker 1: and my fascination for a very long time has been 8 00:00:32,640 --> 00:00:37,599 Speaker 1: how brains perceive reality, because the strange part is that 9 00:00:37,600 --> 00:00:40,199 Speaker 1: we're not seeing most of the action that's going on 10 00:00:40,400 --> 00:00:43,279 Speaker 1: out there. So today we're going to dive into that 11 00:00:43,440 --> 00:00:47,600 Speaker 1: and we're going to see how we might expand our perception. 12 00:00:50,560 --> 00:00:54,080 Speaker 1: We're built out of really small stuff like DNA, and 13 00:00:54,160 --> 00:00:58,320 Speaker 1: we're embedded in a very large cosmos, and we're not 14 00:00:58,480 --> 00:01:03,480 Speaker 1: particularly good at perceiving reality at either of those scales. 15 00:01:03,880 --> 00:01:06,360 Speaker 1: And that's because we've evolved to deal with reality at 16 00:01:06,360 --> 00:01:09,360 Speaker 1: its very thin slice in between, at the level of 17 00:01:10,120 --> 00:01:12,840 Speaker 1: rivers and apples and rabbits and stuff like that. But 18 00:01:13,160 --> 00:01:17,240 Speaker 1: even here, at our level of perception, we're not seeing 19 00:01:17,800 --> 00:01:21,240 Speaker 1: most of the action that's going on.
So take, for example, 20 00:01:21,319 --> 00:01:24,240 Speaker 1: the colors of our world. So picture the reds and 21 00:01:24,319 --> 00:01:28,000 Speaker 1: blues and greens and purples. These are light waves that 22 00:01:28,200 --> 00:01:31,800 Speaker 1: bounce off objects and hit these specialized receptors at the 23 00:01:31,800 --> 00:01:35,039 Speaker 1: back of our eyes, and then we perceive these colors, 24 00:01:35,560 --> 00:01:38,400 Speaker 1: but we're not seeing all the light waves that are 25 00:01:38,440 --> 00:01:42,200 Speaker 1: out there. In fact, what we see is less than 26 00:01:42,240 --> 00:01:46,160 Speaker 1: a ten trillionth of the light waves out there. So 27 00:01:46,280 --> 00:01:49,160 Speaker 1: if you look at what's called the electromagnetic spectrum, you 28 00:01:49,280 --> 00:01:55,160 Speaker 1: have radio waves and microwaves and X rays and gamma rays. 29 00:01:55,280 --> 00:01:58,640 Speaker 1: All these are light. They're just different frequencies. These are 30 00:01:58,840 --> 00:02:02,640 Speaker 1: passing through your body right now, and you're completely unaware 31 00:02:02,680 --> 00:02:06,240 Speaker 1: of them because your biology doesn't come with the right 32 00:02:06,320 --> 00:02:10,960 Speaker 1: receptors to pick those up. They are light, but they're 33 00:02:10,960 --> 00:02:15,480 Speaker 1: not visible light. There are thousands of cell phone conversations 34 00:02:15,520 --> 00:02:20,160 Speaker 1: passing through you right now, and you're completely blind to them. Now, 35 00:02:20,200 --> 00:02:25,280 Speaker 1: it's not that these other wavelengths of light are inherently unseeable. 36 00:02:25,840 --> 00:02:30,560 Speaker 1: Snakes include some infrared light in their reality, and honeybees 37 00:02:30,600 --> 00:02:34,400 Speaker 1: include ultraviolet light in their view of the world.
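That ten-trillionth figure can be sanity-checked with a quick back-of-envelope calculation. Here is a minimal sketch in Python; the spectrum bounds are assumptions chosen for illustration (where "radio" and "gamma" cut off is somewhat arbitrary), so only the order of magnitude is meaningful:

```python
# Back-of-envelope check on the "less than a ten trillionth" claim.
# All four bounds below are illustrative assumptions, not exact figures
# from the episode.
visible_low = 4.0e14    # Hz, red end of visible light
visible_high = 7.9e14   # Hz, violet end of visible light
spectrum_low = 1.0e3    # Hz, long radio waves (assumed lower bound)
spectrum_high = 3.0e27  # Hz, high-energy gamma rays (assumed upper bound)

fraction = (visible_high - visible_low) / (spectrum_high - spectrum_low)
print(f"visible fraction of the spectrum: {fraction:.1e}")
```

Pushing the assumed gamma-ray bound higher shrinks the fraction further, which is why quoted versions of this figure vary by a few orders of magnitude.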
And 38 00:02:34,440 --> 00:02:37,760 Speaker 1: of course we build machines in the dashboards of our 39 00:02:37,800 --> 00:02:41,079 Speaker 1: cars to pick up on signals in the radio frequency range, 40 00:02:41,440 --> 00:02:43,960 Speaker 1: and we build machines in hospitals to pick up on 41 00:02:44,040 --> 00:02:47,400 Speaker 1: the X ray range and so on. But you can't 42 00:02:47,440 --> 00:02:50,360 Speaker 1: sense any of these things by yourself, at least not yet, 43 00:02:50,720 --> 00:02:55,480 Speaker 1: because you don't come equipped with the proper sensors. Now, 44 00:02:55,520 --> 00:02:59,320 Speaker 1: what this means is that our experience of reality is 45 00:03:00,000 --> 00:03:03,760 Speaker 1: constrained by our biology, and that goes against this 46 00:03:03,880 --> 00:03:07,000 Speaker 1: common sense notion that our eyes and our ears and 47 00:03:07,040 --> 00:03:10,880 Speaker 1: our fingertips are just picking up on the objective reality 48 00:03:10,919 --> 00:03:14,639 Speaker 1: out there. Instead, what this means is that our brains 49 00:03:14,680 --> 00:03:19,880 Speaker 1: are sampling just a little bit of the world. Now, 50 00:03:19,919 --> 00:03:24,480 Speaker 1: across the animal kingdom, different animals pick up on different 51 00:03:24,720 --> 00:03:29,480 Speaker 1: parts of reality. So take the tick. It's blind and 52 00:03:29,560 --> 00:03:33,399 Speaker 1: deaf, and in its little world, the important signals are 53 00:03:33,840 --> 00:03:38,240 Speaker 1: temperature and body odor, butyric acid, and that's all it 54 00:03:38,320 --> 00:03:41,480 Speaker 1: picks up on, and that's how it constructs its reality.
55 00:03:42,080 --> 00:03:46,640 Speaker 1: For a fish called the black ghost knife fish, its 56 00:03:46,640 --> 00:03:51,560 Speaker 1: sensory world is all about electrical fields and the perturbations 57 00:03:51,600 --> 00:03:54,600 Speaker 1: of those fields when it's passing a rock or another creature, 58 00:03:55,040 --> 00:03:59,640 Speaker 1: and that's all it's picking up on. For the echolocating bat, 59 00:04:00,160 --> 00:04:03,600 Speaker 1: its reality is constructed out of air compression waves that 60 00:04:03,680 --> 00:04:06,520 Speaker 1: bounce off objects and come back to it. So for 61 00:04:06,600 --> 00:04:10,880 Speaker 1: these different animals, that's the slice of their ecosystem that 62 00:04:10,880 --> 00:04:13,720 Speaker 1: they can pick up on, and that's all they're seeing. 63 00:04:13,720 --> 00:04:16,280 Speaker 1: And we have a word for this in science. This 64 00:04:16,320 --> 00:04:19,760 Speaker 1: is called the umwelt, which is the German word for 65 00:04:19,839 --> 00:04:24,359 Speaker 1: the surrounding world. Now, every animal is very limited in 66 00:04:24,400 --> 00:04:27,520 Speaker 1: the umwelt that it can pick up on, but presumably 67 00:04:27,600 --> 00:04:32,080 Speaker 1: every animal assumes that its umwelt is the entire objective 68 00:04:32,160 --> 00:04:35,000 Speaker 1: reality that's out there, because why would you ever stop 69 00:04:35,040 --> 00:04:39,599 Speaker 1: to imagine that there's something beyond what you can sense? Instead, 70 00:04:39,680 --> 00:04:43,440 Speaker 1: we all accept reality as it is presented to us. 71 00:04:43,920 --> 00:04:48,080 Speaker 1: So let's do a consciousness raiser on this. Imagine that 72 00:04:48,279 --> 00:04:53,160 Speaker 1: you are your family dog, and your whole world is 73 00:04:53,160 --> 00:04:56,400 Speaker 1: about smelling.
So you've got this long snout that has 74 00:04:56,440 --> 00:04:59,360 Speaker 1: two hundred million scent receptors in it, and you have 75 00:04:59,480 --> 00:05:04,160 Speaker 1: wet nostrils that attract and trap scent molecules. And your 76 00:05:04,160 --> 00:05:06,800 Speaker 1: nostrils even have slits so you can take these big 77 00:05:06,839 --> 00:05:09,920 Speaker 1: nosefuls of air. You have floppy ears to kick 78 00:05:10,000 --> 00:05:14,080 Speaker 1: up more scent. Everything is about smell for you. So 79 00:05:14,200 --> 00:05:18,520 Speaker 1: one day you stop in your tracks with a revelation 80 00:05:18,720 --> 00:05:21,839 Speaker 1: and you look at your human owners and you think, 81 00:05:22,279 --> 00:05:25,360 Speaker 1: what is it like to have the pitiful little nose 82 00:05:25,520 --> 00:05:27,720 Speaker 1: of a human? What is it like when they take 83 00:05:27,920 --> 00:05:31,800 Speaker 1: a little feeble noseful of air? How can a 84 00:05:31,880 --> 00:05:34,640 Speaker 1: human not know that there's a cat one hundred yards 85 00:05:34,680 --> 00:05:38,120 Speaker 1: away or that their best friend was on this very 86 00:05:38,200 --> 00:05:41,760 Speaker 1: spot six hours ago? But because we humans have 87 00:05:41,800 --> 00:05:46,440 Speaker 1: never experienced that world of smell, we don't miss it 88 00:05:46,480 --> 00:05:48,839 Speaker 1: and we don't even think about it. Because we are 89 00:05:48,920 --> 00:05:51,920 Speaker 1: firmly settled into our umwelt, we don't feel like there's 90 00:05:51,960 --> 00:05:55,600 Speaker 1: a black hole of smell that we're missing there. We 91 00:05:55,640 --> 00:05:58,640 Speaker 1: think we've got the whole world. But the question is 92 00:05:58,760 --> 00:06:02,320 Speaker 1: do we have to be stuck in the umwelt into 93 00:06:02,360 --> 00:06:05,839 Speaker 1: which we were born?
So as a neuroscientist, I've always 94 00:06:05,880 --> 00:06:09,520 Speaker 1: been interested in the way that our technology might allow 95 00:06:09,600 --> 00:06:12,839 Speaker 1: us to expand our umwelt and how that's going to 96 00:06:12,920 --> 00:06:17,560 Speaker 1: change the experience of being human. So we're already quite 97 00:06:17,600 --> 00:06:22,560 Speaker 1: good at marrying our technology to our biology. You may 98 00:06:22,640 --> 00:06:25,599 Speaker 1: know this, but there are hundreds of thousands of people 99 00:06:25,680 --> 00:06:30,400 Speaker 1: walking around with artificial hearing and artificial vision. The way 100 00:06:30,440 --> 00:06:33,599 Speaker 1: this works, for example, with artificial hearing is you have 101 00:06:34,040 --> 00:06:37,680 Speaker 1: a microphone and you digitize the signal and you put 102 00:06:37,720 --> 00:06:42,160 Speaker 1: an electrode strip directly into the inner ear. Or with 103 00:06:42,360 --> 00:06:45,640 Speaker 1: artificial vision, you have what's called a retinal implant, where 104 00:06:45,680 --> 00:06:49,400 Speaker 1: you take a camera and you digitize the signal and 105 00:06:49,440 --> 00:06:53,560 Speaker 1: you plug an electrode grid directly into the back of 106 00:06:53,600 --> 00:06:57,120 Speaker 1: the eye and the optic nerve. Now, as recently as 107 00:06:57,320 --> 00:06:59,719 Speaker 1: twenty five years ago, there were a lot of scientists 108 00:06:59,720 --> 00:07:03,640 Speaker 1: who thought these technologies were never going to work. Why? It's 109 00:07:03,680 --> 00:07:07,279 Speaker 1: because these technologies speak the language of Silicon Valley, 110 00:07:07,400 --> 00:07:10,880 Speaker 1: zeros and ones, and it's not exactly the same dialect 111 00:07:11,000 --> 00:07:16,280 Speaker 1: as our natural biological sense organs. But the fact is 112 00:07:16,520 --> 00:07:19,920 Speaker 1: that these technologies work.
The brain figures out how to 113 00:07:20,040 --> 00:07:23,240 Speaker 1: use the signals just fine. Now, how do we understand that? 114 00:07:24,720 --> 00:07:28,320 Speaker 1: The key to understanding this requires diving one level deeper. 115 00:07:29,040 --> 00:07:33,360 Speaker 1: Your three pounds of brain tissue are not hearing or 116 00:07:33,440 --> 00:07:37,120 Speaker 1: seeing the world around you directly. It's not that your 117 00:07:37,160 --> 00:07:40,360 Speaker 1: eyes are piping in light or your ears are piping 118 00:07:40,520 --> 00:07:45,760 Speaker 1: sound in. Instead, your brain is locked in a crypt 119 00:07:46,000 --> 00:07:51,680 Speaker 1: of silence and darkness inside your skull. All it ever experiences 120 00:07:52,080 --> 00:07:57,320 Speaker 1: are electrochemical signals that stream in along different data cables. 121 00:07:57,360 --> 00:07:59,600 Speaker 1: That's all it has to work with: these little 122 00:07:59,800 --> 00:08:05,080 Speaker 1: electrical spikes and chemical releases. It's just a world of 123 00:08:05,120 --> 00:08:09,240 Speaker 1: spikes running around in darkness inside there, and in ways 124 00:08:09,280 --> 00:08:12,200 Speaker 1: that we're still working to understand, the brain is shockingly 125 00:08:12,320 --> 00:08:16,680 Speaker 1: good at taking these signals running around and extracting patterns. 126 00:08:16,720 --> 00:08:20,680 Speaker 1: To those patterns it assigns meaning, and with that meaning, 127 00:08:20,720 --> 00:08:26,160 Speaker 1: you have subjective experience. So the brain is an organ 128 00:08:26,560 --> 00:08:30,160 Speaker 1: that converts sparks in the dark into a picture show 129 00:08:30,360 --> 00:08:35,000 Speaker 1: of your world. All the hues and aromas and emotions 130 00:08:35,040 --> 00:08:39,040 Speaker 1: and sensations of your life. These are encoded in trillions 131 00:08:39,040 --> 00:08:42,720 Speaker 1: of signals zipping around in the blackness.
So you know 132 00:08:42,760 --> 00:08:45,800 Speaker 1: when you watch a beautiful screensaver on your computer screen, 133 00:08:46,160 --> 00:08:49,120 Speaker 1: that's just built out of zeros and ones and transistors, 134 00:08:49,840 --> 00:08:52,600 Speaker 1: and it's somehow the same thing that's happening with your 135 00:08:52,760 --> 00:08:56,360 Speaker 1: experience of the world. Let's understand this just a little 136 00:08:56,400 --> 00:08:59,640 Speaker 1: bit more. Imagine that you traveled over to an island 137 00:08:59,760 --> 00:09:03,000 Speaker 1: of people who are all born blind, so they all 138 00:09:03,080 --> 00:09:07,400 Speaker 1: read by braille. They feel tiny patterns of inputs on 139 00:09:07,440 --> 00:09:11,000 Speaker 1: their fingertips. So you watch them read a book and 140 00:09:11,040 --> 00:09:14,520 Speaker 1: they're brushing over the small bumps with their fingers and 141 00:09:14,559 --> 00:09:18,400 Speaker 1: you watch them laugh and cry at the book they're reading, 142 00:09:18,440 --> 00:09:21,960 Speaker 1: and you might wonder how can they fit all that 143 00:09:22,400 --> 00:09:25,760 Speaker 1: emotion into the tip of their finger. So you explain 144 00:09:25,800 --> 00:09:28,760 Speaker 1: to them that when you read a novel, you aim 145 00:09:28,880 --> 00:09:34,000 Speaker 1: these spheres on your face towards visual patterns of lines 146 00:09:34,040 --> 00:09:37,200 Speaker 1: and curves on a page, and each of your eyes 147 00:09:37,280 --> 00:09:41,720 Speaker 1: has a lawn of cells that catch photons, and in 148 00:09:41,760 --> 00:09:45,520 Speaker 1: this way you can register the shapes of the symbols. 149 00:09:45,559 --> 00:09:47,960 Speaker 1: And you tell them that you have memorized a set 150 00:09:47,960 --> 00:09:52,160 Speaker 1: of rules by which different shapes on the page represent 151 00:09:52,320 --> 00:09:55,920 Speaker 1: different sounds. 
So for each squiggle that you detect with 152 00:09:56,040 --> 00:09:59,320 Speaker 1: your eyes, you recite a small sound in your head, 153 00:10:00,000 --> 00:10:02,760 Speaker 1: imagining what you would hear if someone were speaking that 154 00:10:02,840 --> 00:10:07,800 Speaker 1: out loud. And so the resulting pattern of neurochemical signaling 155 00:10:08,120 --> 00:10:12,920 Speaker 1: makes you laugh or cry. You couldn't blame the islanders 156 00:10:12,960 --> 00:10:17,360 Speaker 1: for finding your story difficult to understand. How do you 157 00:10:17,400 --> 00:10:21,120 Speaker 1: fit all that emotion into two spheres on your head? Okay, 158 00:10:21,320 --> 00:10:24,679 Speaker 1: So you or they would finally have to allow something, 159 00:10:24,760 --> 00:10:28,920 Speaker 1: which is that the fingertip or the eyeball is just 160 00:10:29,040 --> 00:10:33,360 Speaker 1: the peripheral device that converts information from the outside world 161 00:10:33,800 --> 00:10:37,160 Speaker 1: into spikes in the brain. And then the brain does 162 00:10:37,200 --> 00:10:41,280 Speaker 1: all the hard work of the interpretation. You and the 163 00:10:41,280 --> 00:10:44,760 Speaker 1: Islanders would break bread over the fact that in the end, 164 00:10:44,800 --> 00:10:48,240 Speaker 1: it's all about the trillions of spikes racing around in 165 00:10:48,280 --> 00:10:52,040 Speaker 1: the brain, and that the method of entry simply isn't 166 00:10:52,040 --> 00:10:55,600 Speaker 1: the part that matters, because your brain doesn't know and 167 00:10:55,640 --> 00:10:59,200 Speaker 1: it doesn't care where it gets the data from. Whatever 168 00:10:59,240 --> 00:11:03,200 Speaker 1: information comes in from the outside, it just figures out 169 00:11:03,360 --> 00:11:06,160 Speaker 1: what to do with it. And this is a very 170 00:11:06,280 --> 00:11:11,360 Speaker 1: efficient kind of machine. 
It is essentially a general-purpose 171 00:11:11,440 --> 00:11:15,080 Speaker 1: computing device. It just takes in everything and it figures 172 00:11:15,080 --> 00:11:17,480 Speaker 1: out what it's going to do with it. And in 173 00:11:17,520 --> 00:11:21,200 Speaker 1: my work, I've proposed that this frees up Mother Nature 174 00:11:21,640 --> 00:11:26,720 Speaker 1: to tinker around with different sorts of input channels. So 175 00:11:26,800 --> 00:11:29,520 Speaker 1: I've argued in my talks and books and papers that 176 00:11:29,559 --> 00:11:35,000 Speaker 1: we can send information into the brain via unusual pathways. 177 00:11:35,760 --> 00:11:39,480 Speaker 1: And I call this the PH model of evolution. And 178 00:11:39,480 --> 00:11:41,600 Speaker 1: I don't want to get too technical here, but PH 179 00:11:41,640 --> 00:11:44,760 Speaker 1: stands for Potato Head, and I use this name to 180 00:11:44,840 --> 00:11:49,079 Speaker 1: emphasize that all these sensors that we know and love, 181 00:11:49,360 --> 00:11:53,960 Speaker 1: like our eyes and our ears and our fingertips, these 182 00:11:54,040 --> 00:11:59,160 Speaker 1: are merely peripheral, plug-and-play devices: you stick them 183 00:11:59,160 --> 00:12:01,840 Speaker 1: in and you're good to go, just like with a 184 00:12:01,880 --> 00:12:06,319 Speaker 1: Potato Head. Wherever you attach these devices, the brain figures 185 00:12:06,360 --> 00:12:09,080 Speaker 1: out what to do with the data that comes in. 186 00:12:09,679 --> 00:12:12,559 Speaker 1: And by the way, when you look across the animal kingdom, 187 00:12:12,840 --> 00:12:17,680 Speaker 1: you find lots of interesting peripheral devices. So snakes have 188 00:12:18,080 --> 00:12:22,040 Speaker 1: heat pits with which they detect the infrared light. And 189 00:12:22,120 --> 00:12:26,960 Speaker 1: the black ghost knifefish has electroreceptors up and down 190 00:12:27,000 --> 00:12:29,280 Speaker 1: its body.
That's how it detects the changes in the 191 00:12:29,360 --> 00:12:33,160 Speaker 1: electrical field. And there's an animal called the star-nosed mole, 192 00:12:33,200 --> 00:12:37,320 Speaker 1: which essentially has this nose with twenty two fingers on it, 193 00:12:37,640 --> 00:12:41,120 Speaker 1: and it moves around through its three dimensional tunnel system 194 00:12:41,160 --> 00:12:44,800 Speaker 1: and feels around and constructs a model of its world 195 00:12:44,840 --> 00:12:49,880 Speaker 1: that way. And many birds and cows and insects have 196 00:12:49,960 --> 00:12:54,240 Speaker 1: specializations so that they can feel the magnetic field of 197 00:12:54,320 --> 00:12:57,640 Speaker 1: the planet. This is called magnetoreception, and they navigate 198 00:12:57,720 --> 00:13:02,640 Speaker 1: that way. The idea with the Potato Head model is 199 00:13:02,679 --> 00:13:07,720 Speaker 1: that Mother Nature doesn't have to continually redesign the brain 200 00:13:07,920 --> 00:13:12,800 Speaker 1: every time she introduces some new peripheral device. Instead, with 201 00:13:12,920 --> 00:13:17,439 Speaker 1: the principles of brain operation already established, all she has 202 00:13:17,520 --> 00:13:21,120 Speaker 1: to do is worry about designing new peripheral devices to 203 00:13:21,160 --> 00:13:24,600 Speaker 1: pick up on new information from the world. So in 204 00:13:24,640 --> 00:13:27,480 Speaker 1: the same way you can plug an arbitrary nose or 205 00:13:27,520 --> 00:13:32,280 Speaker 1: eyes or mouth into a Potato Head, likewise nature plugs all 206 00:13:32,360 --> 00:13:36,199 Speaker 1: kinds of instrumentation into the brain for the purpose of 207 00:13:36,280 --> 00:13:54,760 Speaker 1: detecting these energy sources in the outside world.
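The plug-and-play idea described above can be caricatured in a few lines of code: several invented "peripherals" each convert a different physical signal into the same spike-train format, and a generic "brain" function consumes spikes without ever knowing their source. Every name and number here is made up purely for illustration:

```python
# Caricature of the Potato Head model: different peripherals, one
# common spike format, one source-agnostic decoder. All functions and
# constants are invented for illustration.

def eye(photon_count):
    # More photons -> more spikes, capped at 100.
    return [1] * min(photon_count // 10, 100)

def ear(pressure_amplitude):
    # Air compression waves, converted to the same spike format.
    return [1] * min(int(pressure_amplitude * 5), 100)

def heat_pit(infrared_intensity):
    # A snake-style peripheral: new kind of input, same kind of output.
    return [1] * min(int(infrared_intensity * 20), 100)

def brain(spikes):
    # The decoder only ever sees spikes; it cannot tell which
    # peripheral produced them.
    return f"activity level: {len(spikes)}"

for peripheral, signal in [(eye, 300), (ear, 4.2), (heat_pit, 1.5)]:
    print(brain(peripheral(signal)))
```

The point of the sketch is only that swapping peripherals requires no change to `brain`, which is the claim the Potato Head model makes about evolution.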
Now, the 208 00:13:54,800 --> 00:14:00,679 Speaker 1: idea of looking at our peripheral sensors like individual, standalone 209 00:14:00,679 --> 00:14:05,720 Speaker 1: devices might seem bizarre, because, after all, aren't there thousands 210 00:14:05,800 --> 00:14:08,960 Speaker 1: of genes involved with building these devices, and don't these 211 00:14:09,040 --> 00:14:12,319 Speaker 1: genes overlap with other pieces and parts of the body? 212 00:14:12,640 --> 00:14:16,040 Speaker 1: Can we really look at the nose or the eye, 213 00:14:16,120 --> 00:14:18,800 Speaker 1: or the ear or the tongue as a device that 214 00:14:18,920 --> 00:14:22,280 Speaker 1: stands alone? So I started studying this question because I thought, 215 00:14:22,560 --> 00:14:26,440 Speaker 1: if the Potato Head model is correct, wouldn't that suggest 216 00:14:26,520 --> 00:14:30,200 Speaker 1: we might find switches in the genetics that lead to 217 00:14:30,920 --> 00:14:34,360 Speaker 1: the presence or absence of a peripheral device? And as 218 00:14:34,400 --> 00:14:39,400 Speaker 1: it turns out, that's precisely what can happen. So, for example, 219 00:14:39,560 --> 00:14:43,800 Speaker 1: some babies are born completely missing a nose, and they 220 00:14:43,840 --> 00:14:48,240 Speaker 1: also lack the nasal cavity and the whole system for smelling. 221 00:14:48,640 --> 00:14:51,720 Speaker 1: This is called arhinia. Now, these kinds of mutations, 222 00:14:51,720 --> 00:14:55,240 Speaker 1: they seem startling and difficult to fathom. But in our 223 00:14:55,360 --> 00:14:59,360 Speaker 1: plug and play framework, arhinia is predictable. With a 224 00:14:59,440 --> 00:15:03,800 Speaker 1: slight tweak of the genes, the peripheral device simply doesn't 225 00:15:03,800 --> 00:15:07,080 Speaker 1: get built. Or consider other babies who are born normal, 226 00:15:07,160 --> 00:15:11,440 Speaker 1: but they have no eyes.
This is called anophthalmia, and 227 00:15:11,560 --> 00:15:16,320 Speaker 1: others are born without tongues. Some babies are born without ears, 228 00:15:16,400 --> 00:15:21,360 Speaker 1: that's called anotia. Some children are born without any pain receptors, 229 00:15:21,720 --> 00:15:24,600 Speaker 1: and more generally, others are born without any touch receptors. 230 00:15:24,600 --> 00:15:27,680 Speaker 1: This is called anaphia. And so when we look at 231 00:15:27,680 --> 00:15:33,520 Speaker 1: these situations, it becomes clear that our peripheral detectors unpack 232 00:15:33,680 --> 00:15:37,120 Speaker 1: because of specific genetic programs. And if you have a 233 00:15:37,200 --> 00:15:41,400 Speaker 1: minor malfunction in the genes, that can halt the program, 234 00:15:41,720 --> 00:15:45,040 Speaker 1: and then the brain just doesn't get that particular data 235 00:15:45,040 --> 00:15:49,160 Speaker 1: stream of information from the world, whether that's smell molecules, 236 00:15:49,280 --> 00:15:52,400 Speaker 1: or photons or air compression waves or touch or whatever. 237 00:15:53,120 --> 00:15:55,760 Speaker 1: For me, the lesson that comes together here is that 238 00:15:55,880 --> 00:16:00,240 Speaker 1: nature designs ways of extracting information from the world, 239 00:16:00,560 --> 00:16:04,440 Speaker 1: and these unpack with their own little genetic instructions. Now, 240 00:16:04,480 --> 00:16:09,600 Speaker 1: what this implies is that there's nothing really fundamental about 241 00:16:09,600 --> 00:16:12,200 Speaker 1: the devices that you and I come to the table 242 00:16:12,240 --> 00:16:14,440 Speaker 1: with, our eyes and our ears and our nose and 243 00:16:14,480 --> 00:16:18,760 Speaker 1: our fingertips. It's just what we've inherited from a complex 244 00:16:18,920 --> 00:16:23,960 Speaker 1: road of evolution.
But that particular collection of sensors might 245 00:16:24,000 --> 00:16:27,200 Speaker 1: not have to be what we stick with, because the 246 00:16:27,280 --> 00:16:32,640 Speaker 1: brain's ability to decode different kinds of incoming information implies 247 00:16:32,720 --> 00:16:35,400 Speaker 1: the crazy prediction that you might be able to get 248 00:16:35,720 --> 00:16:39,360 Speaker 1: some sensory cable going into the brain to carry a 249 00:16:39,440 --> 00:16:42,920 Speaker 1: different kind of sensory information. For example, what if you 250 00:16:43,000 --> 00:16:47,160 Speaker 1: took a data stream from a video camera and converted 251 00:16:47,200 --> 00:16:51,280 Speaker 1: that into touch on your skin? Would the brain eventually 252 00:16:51,320 --> 00:16:56,320 Speaker 1: be able to interpret the visual world simply by feeling it? 253 00:16:56,680 --> 00:17:02,720 Speaker 1: And this is the stranger-than-fiction world of sensory substitution. 254 00:17:03,880 --> 00:17:08,520 Speaker 1: Sensory substitution refers to the idea of feeding information into 255 00:17:08,600 --> 00:17:12,800 Speaker 1: the brain via unusual sensory channels and the brain just 256 00:17:12,880 --> 00:17:15,520 Speaker 1: figures out what to do with the information. Now, that 257 00:17:15,600 --> 00:17:19,919 Speaker 1: might sound speculative, but the first paper demonstrating this was 258 00:17:19,960 --> 00:17:24,200 Speaker 1: published in the journal Nature in nineteen sixty nine. There 259 00:17:24,240 --> 00:17:28,280 Speaker 1: was a scientist named Paul Bach-y-Rita and he put 260 00:17:28,359 --> 00:17:32,640 Speaker 1: blind people in a modified dental chair and he set 261 00:17:32,720 --> 00:17:35,080 Speaker 1: up a video feed and he would put something in 262 00:17:35,080 --> 00:17:37,879 Speaker 1: front of the camera and then the person would feel 263 00:17:37,920 --> 00:17:42,000 Speaker 1: that poked into their back with a grid of solenoids.
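The camera-to-skin conversion just described can be sketched as a simple downsampling step: average the camera frame into a coarse grid and fire one solenoid per cell whose brightness crosses a threshold. The grid size, threshold, and test image below are invented for illustration, not taken from the dental chair's actual hardware:

```python
# Sketch of a camera-to-tactile-grid conversion. All parameters here
# (grid size, threshold, the 8x8 test frame) are illustrative
# assumptions, not the historical device's specifications.

def frame_to_tactile(frame, grid=(4, 4), threshold=128):
    """Return a grid of booleans: True = solenoid pokes the skin."""
    rows, cols = len(frame), len(frame[0])
    gr, gc = grid
    out = []
    for i in range(gr):
        row = []
        for j in range(gc):
            # Average the pixels that fall inside this grid cell.
            cell = [frame[r][c]
                    for r in range(i * rows // gr, (i + 1) * rows // gr)
                    for c in range(j * cols // gc, (j + 1) * cols // gc)]
            row.append(sum(cell) / len(cell) > threshold)
        out.append(row)
    return out

# A bright vertical bar (like an object's edge) on a dark background:
frame = [[255 if 2 <= c < 4 else 0 for c in range(8)] for r in range(8)]
for row in frame_to_tactile(frame):
    print("".join("#" if on else "." for on in row))
```

The person on the chair would feel the analogue of each `True` cell as a poke, so the bar in the frame becomes a bar of pressure on the back.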
264 00:17:42,200 --> 00:17:44,159 Speaker 1: So if he put a coffee cup in front of 265 00:17:44,160 --> 00:17:47,160 Speaker 1: the camera, they would feel the shape of a coffee 266 00:17:47,200 --> 00:17:49,760 Speaker 1: cup in their back. Or he put a telephone in 267 00:17:49,760 --> 00:17:52,199 Speaker 1: front of the camera, and they would feel a telephone in 268 00:17:52,240 --> 00:17:56,560 Speaker 1: their back. And amazingly, people who were blind got pretty 269 00:17:56,600 --> 00:17:59,800 Speaker 1: good at being able to determine what was in front 270 00:17:59,840 --> 00:18:02,920 Speaker 1: of the camera, just by feeling it in the 271 00:18:02,960 --> 00:18:07,359 Speaker 1: small of their back. So Bach-y-Rita summarized his findings 272 00:18:07,400 --> 00:18:10,720 Speaker 1: by saying, quote, the brain is able to use incoming 273 00:18:10,760 --> 00:18:14,000 Speaker 1: information from the skin as if it were coming from 274 00:18:14,040 --> 00:18:19,120 Speaker 1: the eyes end quote. The subjective experience for the blind 275 00:18:19,240 --> 00:18:21,679 Speaker 1: people who were feeling this in their back was that 276 00:18:22,200 --> 00:18:27,199 Speaker 1: visual objects were located out there instead of on the 277 00:18:27,200 --> 00:18:30,359 Speaker 1: skin of their back. In other words, it was something 278 00:18:30,440 --> 00:18:33,000 Speaker 1: like vision. And think about it this way. When you're 279 00:18:33,040 --> 00:18:35,600 Speaker 1: at the coffee shop and you see your friend waving 280 00:18:35,640 --> 00:18:38,920 Speaker 1: at you across the way, the photons from your friend 281 00:18:39,119 --> 00:18:42,639 Speaker 1: are impinging on your photoreceptors in your eye. But you 282 00:18:42,680 --> 00:18:46,360 Speaker 1: don't perceive that the signal is at your eyes or 283 00:18:46,400 --> 00:18:48,800 Speaker 1: in your brain.
You perceive that your friend is out 284 00:18:48,880 --> 00:18:52,359 Speaker 1: there waving at you from a distance. And so it 285 00:18:52,440 --> 00:18:56,600 Speaker 1: goes with the users of Bach-y-Rita's modified dental chair. 286 00:18:57,200 --> 00:19:02,600 Speaker 1: They were perceiving the object out there. Now, amazingly, while 287 00:19:02,720 --> 00:19:06,160 Speaker 1: Bach-y-Rita's device was the first to hit public consciousness, 288 00:19:06,480 --> 00:19:10,000 Speaker 1: it was not actually the first attempt at sensory substitution. 289 00:19:10,680 --> 00:19:12,960 Speaker 1: On the other side of the world, at the end 290 00:19:13,000 --> 00:19:18,280 Speaker 1: of the eighteen nineties, a Polish ophthalmologist developed a crude 291 00:19:18,359 --> 00:19:22,280 Speaker 1: device for people who were blind. He put a single 292 00:19:22,359 --> 00:19:25,600 Speaker 1: photocell on the forehead of a blind person, and 293 00:19:25,640 --> 00:19:29,280 Speaker 1: the more light that hit it, the louder a sound 294 00:19:29,320 --> 00:19:32,880 Speaker 1: would be in the person's ear, so based on the 295 00:19:33,000 --> 00:19:37,600 Speaker 1: sound's intensity, the blind person could tell where there were 296 00:19:37,680 --> 00:19:40,800 Speaker 1: lights or where there were dark areas. Unfortunately, the whole 297 00:19:40,800 --> 00:19:43,760 Speaker 1: device was very large and heavy, and of course it 298 00:19:43,840 --> 00:19:47,200 Speaker 1: was only one pixel of resolution, so it never got 299 00:19:47,200 --> 00:19:51,120 Speaker 1: any traction. But in nineteen sixty another group in Poland 300 00:19:51,160 --> 00:19:54,440 Speaker 1: picked up the ball and ran with it. They recognized 301 00:19:54,480 --> 00:19:58,159 Speaker 1: that hearing is critical for the blind, so rather than occupy the ears, they turned 302 00:19:58,240 --> 00:20:03,200 Speaker 1: to passing in the light information via touch.
They built 303 00:20:03,240 --> 00:20:06,520 Speaker 1: a helmet that had all these vibratory motors in it, 304 00:20:06,840 --> 00:20:10,959 Speaker 1: and they essentially drew the images on the head, and 305 00:20:11,080 --> 00:20:15,440 Speaker 1: blind participants were able to move around in these specially 306 00:20:15,480 --> 00:20:19,679 Speaker 1: prepared rooms that were painted to enhance the contrast of 307 00:20:19,720 --> 00:20:24,560 Speaker 1: the door frames and the furniture edges. It worked. Unfortunately, 308 00:20:25,040 --> 00:20:28,080 Speaker 1: it was also heavy and would get very hot, and 309 00:20:28,160 --> 00:20:30,840 Speaker 1: so the world had to wait. But the proof of 310 00:20:31,000 --> 00:20:35,280 Speaker 1: principle was starting to emerge. Now, why did these strange 311 00:20:35,280 --> 00:20:39,920 Speaker 1: approaches work? It's because input to the brain, whether that's 312 00:20:39,920 --> 00:20:42,320 Speaker 1: from photons at the eyes, or air compression waves at 313 00:20:42,400 --> 00:20:46,040 Speaker 1: the ears, or pressure on the skin, they're all converted 314 00:20:46,480 --> 00:20:50,520 Speaker 1: into the common currency of electrical signals. So as long 315 00:20:50,560 --> 00:20:56,040 Speaker 1: as the incoming spikes carry information that represents something important 316 00:20:56,040 --> 00:21:00,159 Speaker 1: about the outside world, the brain will learn how to 317 00:21:00,240 --> 00:21:04,280 Speaker 1: interpret it. The vast forests of your brain cells in 318 00:21:04,320 --> 00:21:07,760 Speaker 1: the dark, they don't care about how the spikes get there. 319 00:21:08,080 --> 00:21:10,480 Speaker 1: They just do their work on it. Now, there have 320 00:21:10,560 --> 00:21:15,200 Speaker 1: been all kinds of incarnations of sensory substitution for the blind. 321 00:21:15,720 --> 00:21:20,120 Speaker 1: One, also from the nineteen sixties, is called the sonic glasses.
322 00:21:20,680 --> 00:21:22,560 Speaker 1: It takes a video feed right in front of you 323 00:21:22,920 --> 00:21:26,760 Speaker 1: and turns that into a sound landscape. So as things 324 00:21:27,200 --> 00:21:30,000 Speaker 1: move around and get closer and farther, it sounds like 325 00:21:32,960 --> 00:21:37,879 Speaker 1: a cacophony. But after some time, blind 326 00:21:37,920 --> 00:21:42,000 Speaker 1: people start getting really good at understanding what is in 327 00:21:42,040 --> 00:21:45,680 Speaker 1: front of them just based on what they're hearing through 328 00:21:45,720 --> 00:21:48,919 Speaker 1: their ears. And the best example of this is a 329 00:21:49,000 --> 00:21:51,359 Speaker 1: program that you can download on your cell phone called 330 00:21:51,920 --> 00:21:56,080 Speaker 1: the vOICe. Note that the three middle letters spell oh, 331 00:21:56,359 --> 00:21:59,880 Speaker 1: I see. Anyway, this was developed by an engineer named 332 00:22:00,400 --> 00:22:04,359 Speaker 1: Meijer in the Netherlands, and it started as a bulky project, 333 00:22:04,400 --> 00:22:06,560 Speaker 1: but it can now be downloaded on your phone. You 334 00:22:06,600 --> 00:22:10,680 Speaker 1: point your phone camera at things and the program converts 335 00:22:11,080 --> 00:22:15,960 Speaker 1: what the phone sees into sounds. The app is amazing, 336 00:22:16,080 --> 00:22:18,399 Speaker 1: and you can download this onto your phone and start 337 00:22:18,440 --> 00:22:21,360 Speaker 1: walking around in the world with it and really understand 338 00:22:21,600 --> 00:22:24,960 Speaker 1: what's going on when you convert sight into sound. And 339 00:22:25,000 --> 00:22:28,040 Speaker 1: my colleagues all over the world, like Jamie Ward and 340 00:22:28,080 --> 00:22:32,200 Speaker 1: Amir Amedi, have been running science experiments on these sorts 341 00:22:32,240 --> 00:22:35,800 Speaker 1: of approaches.
And by the way, sensory substitution doesn't 342 00:22:35,840 --> 00:22:39,280 Speaker 1: have to be through the ears. Another version is called 343 00:22:39,760 --> 00:22:43,240 Speaker 1: the BrainPort, and this is a little grid. It's 344 00:22:43,280 --> 00:22:46,960 Speaker 1: called an electrotactile grid. It sits on your tongue 345 00:22:47,280 --> 00:22:50,600 Speaker 1: and gives little shocks. So you have a camera and 346 00:22:50,680 --> 00:22:54,080 Speaker 1: that video feed gets turned into these little shocks on 347 00:22:54,119 --> 00:22:56,880 Speaker 1: your tongue. It feels like pop rocks in your mouth. 348 00:22:57,200 --> 00:22:59,920 Speaker 1: And blind people can get so good at using this 349 00:23:00,080 --> 00:23:03,240 Speaker 1: that they can throw a ball into a basket, or 350 00:23:03,280 --> 00:23:07,040 Speaker 1: they can navigate a complex obstacle course. They can come 351 00:23:07,080 --> 00:23:12,760 Speaker 1: to see through their tongue. Now that sounds completely insane, right? 352 00:23:13,040 --> 00:23:18,119 Speaker 1: But remember, all vision ever is, is these electrical signals 353 00:23:18,160 --> 00:23:21,560 Speaker 1: coursing around in your brain. Your brain doesn't know where 354 00:23:21,560 --> 00:23:25,959 Speaker 1: the signals come from, it just figures out what to 355 00:23:26,040 --> 00:23:29,280 Speaker 1: do with them. So my laboratory set out some years 356 00:23:29,320 --> 00:23:34,320 Speaker 1: ago to solve sensory substitution for people who are deaf, 357 00:23:34,960 --> 00:23:37,120 Speaker 1: and we wanted to make it so that the sound 358 00:23:37,200 --> 00:23:39,600 Speaker 1: from the world gets converted in some way so that 359 00:23:39,680 --> 00:23:43,480 Speaker 1: a deaf person can understand what is being said. So 360 00:23:43,560 --> 00:23:48,000 Speaker 1: with my graduate student, Scott Novich, we built a vest.
361 00:23:48,240 --> 00:23:50,320 Speaker 1: Now this is not a normal vest. This is a 362 00:23:50,440 --> 00:23:53,760 Speaker 1: vest that zips up tight around the torso and it 363 00:23:53,800 --> 00:23:58,280 Speaker 1: has thirty two little motors on it. And these are 364 00:23:58,520 --> 00:24:01,480 Speaker 1: vibratory motors like the buzzer on your cell phone, but 365 00:24:01,680 --> 00:24:05,520 Speaker 1: thirty two of them, and they're distributed pretty evenly around 366 00:24:05,560 --> 00:24:09,000 Speaker 1: your waist and your back, and each motor represents a 367 00:24:09,080 --> 00:24:14,359 Speaker 1: different frequency of sound from low to high. And by 368 00:24:14,440 --> 00:24:17,440 Speaker 1: breaking up sound in this way, we're doing the same 369 00:24:17,480 --> 00:24:20,280 Speaker 1: thing that a part of your 370 00:24:20,280 --> 00:24:25,119 Speaker 1: inner ear called the cochlea does. So we have essentially transferred the 371 00:24:25,200 --> 00:24:29,720 Speaker 1: cochlea to the torso, so it captures sound and turns 372 00:24:29,720 --> 00:24:33,000 Speaker 1: that into these patterns of vibration. So some years ago we 373 00:24:33,040 --> 00:24:36,280 Speaker 1: started to test this in conjunction with the deaf community. 374 00:24:36,720 --> 00:24:39,760 Speaker 1: Our first participant was a guy named Jonathan. He was 375 00:24:39,800 --> 00:24:42,680 Speaker 1: thirty seven years old. He had a master's degree, and 376 00:24:42,720 --> 00:24:46,639 Speaker 1: he had been born profoundly deaf, which means there was 377 00:24:46,680 --> 00:24:49,880 Speaker 1: a part of his umwelt that was unavailable to him.
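[Editor's note: the cochlea-to-torso mapping described here can be sketched in a few lines of code. This is a minimal illustration, not Neosensory's actual signal chain; the function name, band edges, and frame size are assumptions. It takes one frame of audio, measures the energy in thirty-two log-spaced frequency bands, and scales each band's energy into a motor intensity.]

```python
import numpy as np

def sound_to_motors(frame, sample_rate=16000, n_motors=32):
    """Map one frame of audio to 32 motor intensities in [0, 1],
    low frequencies on motor 0 up to high frequencies on motor 31."""
    windowed = frame * np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    # Log-spaced band edges, roughly how the cochlea allocates frequency
    edges = np.logspace(np.log10(100), np.log10(8000), n_motors + 1)
    energies = np.array([
        spectrum[(freqs >= lo) & (freqs < hi)].sum()
        for lo, hi in zip(edges[:-1], edges[1:])
    ])
    peak = energies.max()
    return energies / peak if peak > 0 else energies

# A pure 1 kHz tone should strongly drive one low-middle motor,
# while speech would light up a shifting pattern across many
t = np.arange(512) / 16000
intensities = sound_to_motors(np.sin(2 * np.pi * 1000 * t))
```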
378 00:24:50,359 --> 00:24:53,479 Speaker 1: So we had Jonathan wear the vest and train with 379 00:24:53,520 --> 00:24:56,640 Speaker 1: it for four days, two hours a day, and by 380 00:24:56,680 --> 00:25:00,280 Speaker 1: the fifth day he was pretty good at identifying the 381 00:25:00,320 --> 00:25:03,120 Speaker 1: words that were being said to him. So you say 382 00:25:03,160 --> 00:25:06,960 Speaker 1: the word dog, and Jonathan feels a pattern of vibrations 383 00:25:06,960 --> 00:25:09,480 Speaker 1: all over the vest, and his job is simply to 384 00:25:09,640 --> 00:25:12,879 Speaker 1: write on the dry erase board what he thinks the 385 00:25:12,920 --> 00:25:15,359 Speaker 1: word might have been, and by day five, he could 386 00:25:15,359 --> 00:25:18,280 Speaker 1: get this mostly right. Now, we had trained him on 387 00:25:18,320 --> 00:25:21,399 Speaker 1: a limited number of words, what's called a closed set, 388 00:25:21,880 --> 00:25:24,119 Speaker 1: but when we switched to a new set of words, 389 00:25:24,160 --> 00:25:26,720 Speaker 1: ones he had never heard before, he was able to 390 00:25:26,800 --> 00:25:29,760 Speaker 1: perform well above chance. And he learned more and more 391 00:25:29,840 --> 00:25:32,840 Speaker 1: quickly with every new set. And this suggested he wasn't 392 00:25:32,920 --> 00:25:38,199 Speaker 1: just memorizing some answers. He was actually learning how to 393 00:25:38,560 --> 00:25:42,959 Speaker 1: hear with the vest. He was translating the complicated pattern 394 00:25:43,000 --> 00:25:49,040 Speaker 1: of vibrations into an understanding of what was being said. Now, 395 00:25:49,080 --> 00:25:52,800 Speaker 1: he wasn't doing this consciously, because the patterns are too 396 00:25:52,960 --> 00:25:57,080 Speaker 1: complicated for that, but his brain was unlocking the meaning 397 00:25:57,119 --> 00:26:00,000 Speaker 1: of this. And by the way, this is just like you.
398 00:26:00,000 --> 00:26:04,240 Speaker 1: So listening to this podcast, you're not thinking, oh, Eagleman 399 00:26:04,359 --> 00:26:06,560 Speaker 1: is saying some high frequencies and now some low and 400 00:26:06,640 --> 00:26:10,760 Speaker 1: some medium, so that must be an S sound. Instead, 401 00:26:10,880 --> 00:26:14,560 Speaker 1: you've just practiced hearing your whole life, and eventually you 402 00:26:14,640 --> 00:26:17,399 Speaker 1: become pretty good at using your ears and your brain. 403 00:26:17,760 --> 00:26:19,639 Speaker 1: But when you were born, you didn't know how to 404 00:26:19,720 --> 00:26:23,680 Speaker 1: use your ears, so your brain looked for correlations, things 405 00:26:23,720 --> 00:26:27,120 Speaker 1: that went together. So you would watch your mother's mouth 406 00:26:27,280 --> 00:26:30,879 Speaker 1: moving and you get spikes coming down your auditory nerve, 407 00:26:31,160 --> 00:26:33,800 Speaker 1: and you figure out that those go together. Or as 408 00:26:33,840 --> 00:26:36,560 Speaker 1: a baby, you clap your hands and you get a 409 00:26:36,560 --> 00:26:39,800 Speaker 1: different pattern of spikes coming down your auditory nerve. Or 410 00:26:39,920 --> 00:26:42,560 Speaker 1: you bang on the bars of your crib, or you 411 00:26:42,640 --> 00:26:46,200 Speaker 1: babble with your mouth, and these all correlate with particular 412 00:26:46,320 --> 00:26:51,040 Speaker 1: patterns coming in along this nerve, and eventually these patterns 413 00:26:51,119 --> 00:26:55,720 Speaker 1: become what philosophers call qualia: a private, 414 00:26:55,800 --> 00:27:00,200 Speaker 1: subjective experience of hearing. You don't have to think about 415 00:27:00,240 --> 00:27:03,920 Speaker 1: what all the spikes mean. They just get translated into 416 00:27:04,000 --> 00:27:08,600 Speaker 1: a direct perceptual experience. Okay, so back to the vest.
417 00:27:08,760 --> 00:27:12,399 Speaker 1: So we tested the vest with lots of participants in 418 00:27:12,440 --> 00:27:14,800 Speaker 1: the deaf community, and in fact, we even built a 419 00:27:14,920 --> 00:27:17,800 Speaker 1: miniature vest because it turned out that one of the 420 00:27:17,840 --> 00:27:21,080 Speaker 1: people we were working with had a daughter who was 421 00:27:21,119 --> 00:27:25,520 Speaker 1: born deaf and blind. So we made this little miniature 422 00:27:25,600 --> 00:27:28,040 Speaker 1: vest for her and it picked up on the sounds 423 00:27:28,080 --> 00:27:31,080 Speaker 1: of the world and translated them into patterns of vibration 424 00:27:31,280 --> 00:27:34,280 Speaker 1: on her skin. And so her grandmother took her around 425 00:27:34,320 --> 00:27:37,119 Speaker 1: the lab and touched her feet on things and said, 426 00:27:37,400 --> 00:27:40,400 Speaker 1: this is hard, this is soft, this is going up, 427 00:27:40,520 --> 00:27:43,320 Speaker 1: this is going down, and so on, and this allowed 428 00:27:43,359 --> 00:27:46,879 Speaker 1: the little girl to tap into a larger part of 429 00:27:46,920 --> 00:27:50,800 Speaker 1: her umwelt. Then we made a smaller version of the vest, 430 00:27:50,840 --> 00:27:53,639 Speaker 1: just a chest strap, and we began testing that with 431 00:27:53,680 --> 00:27:57,199 Speaker 1: some other children.
But eventually we were able to shrink 432 00:27:57,240 --> 00:28:01,280 Speaker 1: the whole system down to a wristband, and that opens 433 00:28:01,359 --> 00:28:05,040 Speaker 1: up the technology for a much larger population, and we 434 00:28:05,480 --> 00:28:08,520 Speaker 1: spun this out of the lab as a company called 435 00:28:08,800 --> 00:28:13,520 Speaker 1: Neosensory, and one of our first users was a 436 00:28:13,560 --> 00:28:16,639 Speaker 1: wonderful guy here in the San Francisco Bay area named 437 00:28:16,680 --> 00:28:20,639 Speaker 1: Phil, and we videoed him talking in sign language about 438 00:28:20,680 --> 00:28:23,480 Speaker 1: what the wristband meant to him. So I'm going to 439 00:28:23,520 --> 00:28:26,320 Speaker 1: quote him here as a translation from the sign language he used. 440 00:28:26,600 --> 00:28:30,440 Speaker 1: He signed, quote, it makes me feel a natural connection 441 00:28:30,600 --> 00:28:34,720 Speaker 1: with everyone around me. Sometimes I perceive, wow, I can 442 00:28:34,800 --> 00:28:38,640 Speaker 1: tell what a sound feels like if someone calls my name, 443 00:28:39,080 --> 00:28:41,240 Speaker 1: or if there's some kind of noise nearby, or my 444 00:28:41,440 --> 00:28:45,600 Speaker 1: dog's barking, or even my wife calling me from far away, 445 00:28:46,120 --> 00:28:48,720 Speaker 1: "Philip!" I feel her call my name and I go to her. 446 00:28:49,200 --> 00:28:52,040 Speaker 1: So we tested lots of people who were deaf in 447 00:28:52,080 --> 00:28:54,959 Speaker 1: the Bay area, and people reported things to us like, 448 00:28:55,400 --> 00:28:59,000 Speaker 1: I'm picking up on running water or birds or the 449 00:28:59,040 --> 00:29:01,800 Speaker 1: oven timer.
And when wearing it at work, I had 450 00:29:01,840 --> 00:29:04,320 Speaker 1: a really good experience, like when people were talking in 451 00:29:04,360 --> 00:29:07,200 Speaker 1: the room, I could feel what they were saying and 452 00:29:07,240 --> 00:29:09,960 Speaker 1: it helped me lip read better. And as a quick 453 00:29:10,000 --> 00:29:13,880 Speaker 1: side note, we went to interview lots of people who 454 00:29:13,880 --> 00:29:16,840 Speaker 1: are deaf, and I came to understand that lots of 455 00:29:16,880 --> 00:29:21,040 Speaker 1: deaf people live in nice apartments in one particular location, 456 00:29:21,400 --> 00:29:24,400 Speaker 1: which is right next to the railroad track, because the 457 00:29:24,440 --> 00:29:28,280 Speaker 1: sound of the howling trains passing by doesn't register with 458 00:29:28,320 --> 00:29:32,040 Speaker 1: them and bother them, so they can live comfortably in 459 00:29:32,080 --> 00:29:35,600 Speaker 1: a steeply discounted apartment that's perfectly nice. But people who 460 00:29:35,680 --> 00:29:38,760 Speaker 1: are hearing don't want that apartment. Anyway, back to the story, 461 00:29:39,440 --> 00:29:42,480 Speaker 1: users started telling us that they were picking up on 462 00:29:42,600 --> 00:29:46,840 Speaker 1: things that they didn't even know existed, like that microwaves beeped, 463 00:29:47,040 --> 00:29:50,080 Speaker 1: or that their car blinker made a clicking sound, or 464 00:29:50,080 --> 00:29:53,880 Speaker 1: that if they accidentally left the air blower on at work, 465 00:29:53,960 --> 00:29:56,560 Speaker 1: it was making a noise, or for that matter, 466 00:29:56,680 --> 00:30:00,600 Speaker 1: the loudness of toilets flushing, or that they had left 467 00:30:00,640 --> 00:30:03,479 Speaker 1: the sink running. And they started feeling things like the 468 00:30:03,600 --> 00:30:06,760 Speaker 1: laughter of their children on their skin.
And they were 469 00:30:06,800 --> 00:30:10,640 Speaker 1: able to distinguish which child was talking and which of 470 00:30:10,680 --> 00:30:13,840 Speaker 1: their dogs was barking. And with time, people just get 471 00:30:13,920 --> 00:30:16,880 Speaker 1: better and better at picking up the sounds of the 472 00:30:16,920 --> 00:30:20,680 Speaker 1: world as patterns of vibration on their skin. And with 473 00:30:20,800 --> 00:30:23,320 Speaker 1: one of our users, I asked him, what was it 474 00:30:23,480 --> 00:30:26,200 Speaker 1: like when he hears the dog bark? Does he register, oh, 475 00:30:26,200 --> 00:30:28,440 Speaker 1: there were just vibrations on my wrist, and so now 476 00:30:28,440 --> 00:30:30,320 Speaker 1: I have to translate that; that must have been a 477 00:30:30,360 --> 00:30:34,760 Speaker 1: dog barking? And he said, no, I just hear the 478 00:30:34,920 --> 00:30:39,080 Speaker 1: dog barking out there. Which sounds crazy, right? But remember, 479 00:30:39,160 --> 00:30:41,920 Speaker 1: that's all that's going on with your ears. You hear 480 00:30:42,040 --> 00:30:46,520 Speaker 1: the sound out there even though it's actually happening in 481 00:30:47,160 --> 00:31:05,280 Speaker 1: here in your head. Now, after we were years into 482 00:31:05,280 --> 00:31:07,880 Speaker 1: this project, I began to discover that the idea of 483 00:31:08,040 --> 00:31:11,640 Speaker 1: converting sound to touch is not even new. I found 484 00:31:11,640 --> 00:31:16,040 Speaker 1: a paper from nineteen twenty three. There was a psychologist 485 00:31:16,040 --> 00:31:19,680 Speaker 1: at Northwestern University called Robert Gault, and he heard about 486 00:31:19,680 --> 00:31:23,760 Speaker 1: a deaf and blind ten year old girl who claimed 487 00:31:23,840 --> 00:31:28,120 Speaker 1: to be able to feel sound through her fingertips. 488 00:31:28,200 --> 00:31:31,400 Speaker 1: He was skeptical, and so he ran an experiment.
He 489 00:31:32,040 --> 00:31:34,840 Speaker 1: stopped up her ears and wrapped her head in a 490 00:31:34,840 --> 00:31:38,760 Speaker 1: woolen blanket, and he put her finger against the diaphragm 491 00:31:38,760 --> 00:31:43,160 Speaker 1: of a device which converted his voice signal into vibrations. 492 00:31:43,600 --> 00:31:46,360 Speaker 1: So Gault sat in a closet and spoke to her 493 00:31:46,440 --> 00:31:49,440 Speaker 1: through the device, and so her only chance to understand 494 00:31:49,480 --> 00:31:53,520 Speaker 1: what he was saying was from the vibrations on her fingertip. 495 00:31:53,880 --> 00:31:56,640 Speaker 1: And what he reported is that it worked. She was 496 00:31:56,720 --> 00:32:00,760 Speaker 1: able to tell what he was saying through her fingertips. 497 00:32:01,120 --> 00:32:04,440 Speaker 1: And in the early nineteen thirties, an educator at a 498 00:32:04,440 --> 00:32:08,600 Speaker 1: school in Massachusetts developed a technique for two deaf and 499 00:32:08,680 --> 00:32:12,840 Speaker 1: blind students. Being deaf, they needed a way to read 500 00:32:12,960 --> 00:32:16,200 Speaker 1: the lips of speakers, but they were blind as well, 501 00:32:16,400 --> 00:32:20,000 Speaker 1: so that couldn't work. So the technique consists of placing 502 00:32:20,040 --> 00:32:23,480 Speaker 1: a hand over the face and neck of the person 503 00:32:23,480 --> 00:32:27,360 Speaker 1: who is speaking, so the thumb rests lightly on the 504 00:32:27,400 --> 00:32:30,360 Speaker 1: lips and the fingers fan out to cover the neck 505 00:32:30,400 --> 00:32:32,960 Speaker 1: and cheek, and in this way they can feel the 506 00:32:33,000 --> 00:32:37,040 Speaker 1: lips moving and the vocal cords vibrating, and even the 507 00:32:37,080 --> 00:32:39,640 Speaker 1: air coming out of the nostrils.
And by the way, 508 00:32:39,680 --> 00:32:43,640 Speaker 1: because these two original students were named Tad and Oma, 509 00:32:44,000 --> 00:32:48,400 Speaker 1: this technique is known as the Tadoma technique, and thousands 510 00:32:48,800 --> 00:32:52,040 Speaker 1: of deaf and blind children have been taught this method 511 00:32:52,080 --> 00:32:56,280 Speaker 1: and they can obtain proficiency in understanding language almost to the 512 00:32:56,280 --> 00:32:59,360 Speaker 1: level of those with hearing. So the key thing to 513 00:32:59,440 --> 00:33:02,400 Speaker 1: note for our purposes is that all the information is 514 00:33:02,400 --> 00:33:05,800 Speaker 1: coming in through their sense of touch. And in the 515 00:33:05,880 --> 00:33:11,000 Speaker 1: nineteen seventies the deaf inventor Dimitri Kanevsky came up with 516 00:33:11,080 --> 00:33:16,400 Speaker 1: a two channel vibrotactile device, one channel of which captures the 517 00:33:16,440 --> 00:33:19,320 Speaker 1: low frequencies and the other the high frequencies, and these 518 00:33:19,360 --> 00:33:22,360 Speaker 1: two vibratory motors sit on the wrists. And in the 519 00:33:22,440 --> 00:33:25,240 Speaker 1: nineteen eighties some other people came up with devices like 520 00:33:25,320 --> 00:33:29,800 Speaker 1: this too, which all demonstrated the power of sensory substitution. 521 00:33:30,360 --> 00:33:34,680 Speaker 1: The problem was that all these devices were too large, 522 00:33:34,720 --> 00:33:38,000 Speaker 1: and they typically just had one motor or two motors, 523 00:33:38,600 --> 00:33:41,600 Speaker 1: and they got too hot, and it was not practical 524 00:33:41,640 --> 00:33:44,200 Speaker 1: for people to wear them. It's only now that we're 525 00:33:44,200 --> 00:33:48,400 Speaker 1: able to capitalize on a whole constellation of tech advances 526 00:33:49,040 --> 00:33:52,120 Speaker 1: to run this in a wristband in real time.
And 527 00:33:52,160 --> 00:33:55,120 Speaker 1: so I'm really happy to say that the Neosensory wristband 528 00:33:55,160 --> 00:33:58,720 Speaker 1: is now on wrists all over the world. And 529 00:33:58,760 --> 00:34:01,960 Speaker 1: what's cool is that this technology is a game changer, 530 00:34:02,240 --> 00:34:06,600 Speaker 1: because the only other solution for deafness is a cochlear implant, 531 00:34:06,720 --> 00:34:09,840 Speaker 1: and that's something that requires about one hundred thousand dollars 532 00:34:09,880 --> 00:34:13,040 Speaker 1: and an invasive surgery. But the wristband we can build 533 00:34:13,120 --> 00:34:17,160 Speaker 1: for one hundred times cheaper, and that opens up the 534 00:34:17,200 --> 00:34:20,600 Speaker 1: technology globally, even for the poorest countries in the world. 535 00:34:20,920 --> 00:34:22,799 Speaker 1: And that's one of the reasons we've been able to 536 00:34:22,840 --> 00:34:26,919 Speaker 1: get this into underfunded schools for the deaf all over 537 00:34:26,960 --> 00:34:30,560 Speaker 1: the globe, and we've had many wonderful philanthropists help us 538 00:34:30,600 --> 00:34:34,440 Speaker 1: do that, because this is such a different scale of 539 00:34:34,600 --> 00:34:40,080 Speaker 1: solution that's simple and inexpensive and takes advantage of a 540 00:34:40,320 --> 00:34:44,640 Speaker 1: very strange principle of the brain: sensory substitution. And we've 541 00:34:44,760 --> 00:34:47,920 Speaker 1: just released something else that's having real impact.
It's a 542 00:34:48,000 --> 00:34:50,279 Speaker 1: version of the same idea, but it's not for people 543 00:34:50,320 --> 00:34:54,360 Speaker 1: who are deaf, but instead for people who are having normal 544 00:34:54,840 --> 00:34:59,160 Speaker 1: age related hearing loss, which almost always happens in the 545 00:34:59,320 --> 00:35:02,120 Speaker 1: high frequencies, which is why people who are 546 00:35:02,120 --> 00:35:05,880 Speaker 1: getting older and losing hearing start having a harder time 547 00:35:06,080 --> 00:35:09,160 Speaker 1: understanding women and children, because their voices tend to be 548 00:35:09,200 --> 00:35:12,600 Speaker 1: at a higher frequency. So we developed cutting edge machine 549 00:35:12,640 --> 00:35:15,360 Speaker 1: learning that sits on the wristband and listens in real 550 00:35:15,440 --> 00:35:19,400 Speaker 1: time just for the high frequency parts of speech. So 551 00:35:19,560 --> 00:35:22,520 Speaker 1: for example, it just listens for an S or a 552 00:35:22,680 --> 00:35:25,960 Speaker 1: Z or a B or a K, and the wristband 553 00:35:26,080 --> 00:35:29,279 Speaker 1: signals in different ways each time it hears one of 554 00:35:29,320 --> 00:35:32,399 Speaker 1: those speech sounds. And the key is that when you're 555 00:35:32,440 --> 00:35:36,040 Speaker 1: losing your high frequency hearing, your ears are still doing 556 00:35:36,120 --> 00:35:38,800 Speaker 1: fine at the medium and low frequencies. Those are getting 557 00:35:38,840 --> 00:35:42,399 Speaker 1: to the brain.
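[Editor's note: the "listen just for the high-frequency parts of speech" idea can be illustrated with a much simpler stand-in than the wristband's actual machine learning: flag audio frames whose spectral energy sits mostly above roughly four kilohertz, where sibilants like S and Z concentrate. The split frequency and threshold here are assumptions for illustration only.]

```python
import numpy as np

def looks_sibilant(frame, sample_rate=16000, split_hz=4000, ratio=0.6):
    """Crude stand-in for the wristband's detector: True when most of
    a frame's spectral energy sits above split_hz, as in S or Z."""
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    total = spectrum.sum()
    high = spectrum[freqs >= split_hz].sum()
    return bool(total > 0 and high / total >= ratio)

t = np.arange(512) / 16000
vowel_like = looks_sibilant(np.sin(2 * np.pi * 300 * t))   # low tone
hiss_like = looks_sibilant(np.sin(2 * np.pi * 6000 * t))   # high tone
```

A real detector must separate sibilants from other high-frequency noise, which is where the machine learning earns its keep; this sketch only shows the frequency-splitting principle.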
The wristband is just clarifying what's 558 00:35:42,440 --> 00:35:46,520 Speaker 1: happening at the high frequencies, and your brain learns to 559 00:35:46,800 --> 00:35:50,840 Speaker 1: fuse these signals from your ear and from your skin, 560 00:35:51,320 --> 00:35:54,560 Speaker 1: so it puts together what it heard from the ear 561 00:35:54,600 --> 00:35:56,759 Speaker 1: with what it's getting through the wristband, and after a 562 00:35:56,840 --> 00:36:02,279 Speaker 1: few weeks people develop much clearer hearing. And as 563 00:36:02,320 --> 00:36:05,480 Speaker 1: an interesting side note, people don't always notice that they're 564 00:36:05,520 --> 00:36:08,040 Speaker 1: getting better, but everyone around them does, and if they 565 00:36:08,040 --> 00:36:11,120 Speaker 1: forget to put on the wristband, they get yelled at. 566 00:36:11,600 --> 00:36:14,960 Speaker 1: So that's an example of pushing some information into the 567 00:36:15,000 --> 00:36:18,640 Speaker 1: brain via an unusual channel while most of the information 568 00:36:18,760 --> 00:36:20,920 Speaker 1: is coming in the normal way. And I'll also tell 569 00:36:20,960 --> 00:36:23,040 Speaker 1: you something else amazing that we found, which is that 570 00:36:23,640 --> 00:36:27,960 Speaker 1: the wristband works incredibly well for reducing tinnitus, which is 571 00:36:28,239 --> 00:36:31,640 Speaker 1: ringing in the ears. So a couple of research labs 572 00:36:31,680 --> 00:36:35,640 Speaker 1: had previously shown that tinnitus can be reduced with something 573 00:36:35,719 --> 00:36:39,680 Speaker 1: called bimodal stimulation, which just means that you have 574 00:36:40,160 --> 00:36:43,600 Speaker 1: sounds and you have touch that are synchronized. So that's 575 00:36:43,719 --> 00:36:47,359 Speaker 1: two modes, or bimodal.
Now, the previous research had 576 00:36:47,400 --> 00:36:52,480 Speaker 1: done this by combining tones, beep beep, with shocks on the tongue, 577 00:36:52,960 --> 00:36:56,520 Speaker 1: and that worked to drive down the ringing in the ears. 578 00:36:56,680 --> 00:36:58,759 Speaker 1: So we did the same thing with the wristband and 579 00:36:58,800 --> 00:37:01,880 Speaker 1: it works the same. We've published our data on this, 580 00:37:02,080 --> 00:37:08,359 Speaker 1: that people with tinnitus get clinically significant improvement. Now, why 581 00:37:08,440 --> 00:37:12,920 Speaker 1: does something like that work? There are some sophisticated arguments 582 00:37:12,960 --> 00:37:15,120 Speaker 1: and debates about why this works, but I think the 583 00:37:15,320 --> 00:37:18,480 Speaker 1: simple explanation is that we're just teaching the brain what 584 00:37:18,840 --> 00:37:23,799 Speaker 1: is a real external sound, because those get confirmation on 585 00:37:23,840 --> 00:37:26,680 Speaker 1: the wristband: when you hear boop boop boop, you're feeling it. 586 00:37:27,880 --> 00:37:33,760 Speaker 1: But the tinnitus, the internal sound, gets no verification on the wrist, 587 00:37:34,080 --> 00:37:37,920 Speaker 1: and so the brain figures out that's fake news, and 588 00:37:37,960 --> 00:37:41,400 Speaker 1: it drives it down. Now, we're doing all kinds of 589 00:37:41,520 --> 00:37:46,600 Speaker 1: other experiments using the wristband for sensory substitution. So, for example, 590 00:37:46,640 --> 00:37:50,000 Speaker 1: we've begun to study this as a device for balance. 591 00:37:50,520 --> 00:37:53,400 Speaker 1: So there are many people who have problems with balance 592 00:37:53,800 --> 00:37:56,480 Speaker 1: because of their inner ear. They don't realize when their 593 00:37:56,520 --> 00:37:59,480 Speaker 1: body is tilting.
So in our experiments, they wear the 594 00:37:59,560 --> 00:38:02,480 Speaker 1: wristband and they also wear a small collar clip, 595 00:38:02,719 --> 00:38:05,560 Speaker 1: and the collar clip has a motion detector and a 596 00:38:05,680 --> 00:38:09,680 Speaker 1: gyroscope in it, and it can detect your orientation, whether 597 00:38:09,719 --> 00:38:12,320 Speaker 1: you're standing straight or you're tilting one way or another, 598 00:38:12,680 --> 00:38:15,279 Speaker 1: and it just sends that information to the wristband, so 599 00:38:15,320 --> 00:38:18,279 Speaker 1: you become aware if you're tilting and you know in 600 00:38:18,320 --> 00:38:21,080 Speaker 1: which direction. So it goes bzz when you tilt one way and 601 00:38:21,120 --> 00:38:23,440 Speaker 1: bzz when you tilt the other way. And this is 602 00:38:23,480 --> 00:38:26,680 Speaker 1: simply taking what your inner ear would normally do and, 603 00:38:26,719 --> 00:38:29,680 Speaker 1: if there's something wrong with it, sending that information 604 00:38:29,680 --> 00:38:33,920 Speaker 1: in through a different channel. And beyond deafness and balance, 605 00:38:34,000 --> 00:38:37,960 Speaker 1: we're doing other things like working with prosthetics. So when 606 00:38:38,000 --> 00:38:41,879 Speaker 1: somebody gets an amputation, they get an artificial prosthetic leg. 607 00:38:42,400 --> 00:38:44,880 Speaker 1: And what we did is we put sensors on the 608 00:38:45,000 --> 00:38:49,520 Speaker 1: leg so that you can feel the information on the wristband.
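[Editor's note: the collar clip's job, as described, reduces to reading the gravity vector from its motion sensor and deciding which side should buzz. Here is a minimal sketch under assumptions: the axis conventions, the ten-degree dead zone, and the left/right rule are all invented for illustration.]

```python
import math

def tilt_to_buzz(ax, ay, az, threshold_deg=10.0):
    """From an accelerometer's gravity reading (in g), return which
    side should buzz and how hard, or None when standing straight."""
    # Angle between the sensed gravity vector and the straight-down z axis
    tilt = math.degrees(math.atan2(math.hypot(ax, ay), az))
    if tilt < threshold_deg:
        return None  # upright enough: stay silent
    # Left/right decided along the x axis only, a simplification
    direction = "left" if ax < 0 else "right"
    strength = min(tilt / 45.0, 1.0)  # saturate at a 45-degree lean
    return direction, strength

upright = tilt_to_buzz(0.02, 0.01, 0.99)  # roughly vertical: no buzz
leaning = tilt_to_buzz(0.35, 0.0, 0.94)   # leaning over: buzz one side
```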
609 00:38:49,600 --> 00:38:52,680 Speaker 1: So we're taking an artificial limb and, by putting angle 610 00:38:52,760 --> 00:38:57,360 Speaker 1: and pressure sensors on it, we are restoring the sensory 611 00:38:57,480 --> 00:39:01,160 Speaker 1: input that you would have from it through the wrist, and 612 00:39:01,200 --> 00:39:04,440 Speaker 1: that allows patients to learn much more quickly how to 613 00:39:04,600 --> 00:39:10,680 Speaker 1: walk with their new prosthetic limb. Now, beyond sensory substitution, 614 00:39:11,640 --> 00:39:14,880 Speaker 1: how can we use a technology like this to add 615 00:39:14,960 --> 00:39:18,800 Speaker 1: a completely new kind of sense, to actually expand the 616 00:39:18,880 --> 00:39:23,319 Speaker 1: human umwelt? For example, could we feed real time data 617 00:39:23,360 --> 00:39:27,320 Speaker 1: from the internet directly into somebody, and could they develop 618 00:39:27,480 --> 00:39:31,920 Speaker 1: a direct perceptual experience? So some years ago we did 619 00:39:31,920 --> 00:39:36,359 Speaker 1: an experiment in the lab where a participant feels a 620 00:39:36,400 --> 00:39:40,399 Speaker 1: real time streaming feed from the net of data for 621 00:39:40,480 --> 00:39:44,040 Speaker 1: five seconds, and then he's holding a tablet and two 622 00:39:44,120 --> 00:39:46,480 Speaker 1: buttons appear and he has to make a choice. He 623 00:39:46,480 --> 00:39:48,720 Speaker 1: doesn't know what's going on, but he makes his choice 624 00:39:49,080 --> 00:39:53,320 Speaker 1: and then gets feedback after a second and a half. Now, 625 00:39:53,360 --> 00:39:56,200 Speaker 1: here's the thing. The subject has no idea what all 626 00:39:56,239 --> 00:39:58,799 Speaker 1: these patterns mean, but we're seeing if he can get 627 00:39:58,920 --> 00:40:02,879 Speaker 1: better at figuring out which button to press.
And he 628 00:40:02,920 --> 00:40:05,920 Speaker 1: doesn't know that what we're feeding him is real time data 629 00:40:06,239 --> 00:40:10,040 Speaker 1: from the stock market, and he's making buy and sell decisions, 630 00:40:10,320 --> 00:40:13,080 Speaker 1: and the feedback is telling him whether he did the 631 00:40:13,160 --> 00:40:15,560 Speaker 1: right thing or not. And what we're seeing is, can 632 00:40:15,600 --> 00:40:19,120 Speaker 1: we expand the human umwelt so that he comes to 633 00:40:19,160 --> 00:40:23,880 Speaker 1: have a direct perceptual experience of the economic movements of 634 00:40:23,920 --> 00:40:28,520 Speaker 1: the planet? Here's another experiment, which I showed at TED 635 00:40:28,520 --> 00:40:31,520 Speaker 1: some years ago in a talk. We scrape the web 636 00:40:31,560 --> 00:40:35,839 Speaker 1: for any hashtag and we do an automated sentiment analysis, 637 00:40:35,880 --> 00:40:39,320 Speaker 1: which means, are people using positive words or negative words 638 00:40:39,400 --> 00:40:42,080 Speaker 1: or neutral, and we feed that into the vest or 639 00:40:42,080 --> 00:40:46,080 Speaker 1: the wristband. And this allows a person to feel what's 640 00:40:46,160 --> 00:40:49,839 Speaker 1: going on in the community of millions of people and 641 00:40:49,920 --> 00:40:54,239 Speaker 1: to be plugged into the aggregate emotion of giant crowds 642 00:40:54,280 --> 00:40:57,640 Speaker 1: all at the same time. And that's a new kind 643 00:40:57,640 --> 00:41:01,360 Speaker 1: of human experience, because you can't normally know 644 00:41:01,440 --> 00:41:05,560 Speaker 1: how a population is feeling. It's a bigger experience than 645 00:41:05,560 --> 00:41:08,720 Speaker 1: a human can normally have. And we're working on feeling 646 00:41:09,239 --> 00:41:12,600 Speaker 1: signals that exist out there but are normally invisible to you.
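[Editor's note: the hashtag experiment's pipeline, as described, is: scrape posts, score each as positive or negative, and compress the crowd into one number to feed the vest or wristband. Here is a toy sketch; the tiny word lists and function names are hypothetical stand-ins for a real sentiment analyzer.]

```python
# Hypothetical mini-lexicons standing in for a real sentiment model
POSITIVE = {"love", "great", "happy", "win", "beautiful"}
NEGATIVE = {"hate", "awful", "sad", "lose", "terrible"}

def crowd_feeling(posts):
    """Return aggregate sentiment in [-1.0, 1.0] across many posts,
    the single value that would drive the vibration intensity."""
    scores = []
    for post in posts:
        words = post.lower().split()
        pos = sum(w in POSITIVE for w in words)
        neg = sum(w in NEGATIVE for w in words)
        if pos or neg:  # skip posts with no sentiment words at all
            scores.append((pos - neg) / (pos + neg))
    return sum(scores) / len(scores) if scores else 0.0

feed = ["love this great day", "awful news today", "happy to win"]
mood = crowd_feeling(feed)  # positive overall: 2 of 3 posts are upbeat
```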
647 00:41:13,200 --> 00:41:15,880 Speaker 1: So imagine that instead of a police officer having to 648 00:41:15,920 --> 00:41:21,560 Speaker 1: have a drug dog, they could instead feel the odors 649 00:41:21,640 --> 00:41:25,319 Speaker 1: around them that they normally couldn't. So imagine building an 650 00:41:25,440 --> 00:41:29,520 Speaker 1: array of molecular detectors and instead of needing the dog 651 00:41:29,560 --> 00:41:33,239 Speaker 1: with its huge snout, they can just directly experience that 652 00:41:33,400 --> 00:41:37,920 Speaker 1: level of smell themselves through vibrations on the skin. And 653 00:41:38,000 --> 00:41:42,239 Speaker 1: we're doing things with robotic surgery. So normally, when a 654 00:41:42,320 --> 00:41:44,919 Speaker 1: surgeon is doing robotic surgery, they have to keep 655 00:41:44,960 --> 00:41:47,480 Speaker 1: looking up at the monitors to understand what's going on 656 00:41:47,480 --> 00:41:50,920 Speaker 1: with the patient. But imagine being able to simply feel 657 00:41:51,040 --> 00:41:53,880 Speaker 1: the data from the patient, the heart rate and the 658 00:41:53,920 --> 00:41:57,960 Speaker 1: breathing and so on, simply feeling it as you're going 659 00:41:58,320 --> 00:42:01,480 Speaker 1: and not needing to keep looking at the monitors. Another 660 00:42:01,520 --> 00:42:03,879 Speaker 1: thing we've been working on for a while is expanding 661 00:42:03,920 --> 00:42:08,040 Speaker 1: the umwelt of drone pilots.
So in this case, we 662 00:42:08,120 --> 00:42:12,800 Speaker 1: have the vest streaming nine different measures from a quadcopter, 663 00:42:13,239 --> 00:42:16,600 Speaker 1: so the pitch and yaw and roll and orientation and heading, 664 00:42:16,920 --> 00:42:21,240 Speaker 1: and that improves the pilot's ability to fly it because 665 00:42:21,280 --> 00:42:25,040 Speaker 1: it's essentially like the drone pilot is extending his skin 666 00:42:25,280 --> 00:42:29,359 Speaker 1: up there onto the drone far away. He's becoming one 667 00:42:29,920 --> 00:42:33,040 Speaker 1: with the drone, and he can learn how to fly it better 668 00:42:33,120 --> 00:42:36,319 Speaker 1: in the fog or in the darkness. 669 00:42:36,400 --> 00:42:40,120 Speaker 1: Or something that's related 670 00:42:40,160 --> 00:42:44,239 Speaker 1: to this is imagine taking a modern airplane cockpit which 671 00:42:44,280 --> 00:42:47,400 Speaker 1: is full of gauges and instead of trying to read 672 00:42:47,480 --> 00:42:50,799 Speaker 1: the whole thing, you just feel it. Because we live 673 00:42:50,840 --> 00:42:53,680 Speaker 1: in a world of information now and there's a difference 674 00:42:53,800 --> 00:42:59,319 Speaker 1: between accessing big data and experiencing it. And we're also 675 00:42:59,440 --> 00:43:03,840 Speaker 1: exploring how to extend your body to a different location. So 676 00:43:04,440 --> 00:43:09,160 Speaker 1: imagine that you feel everything that a robot feels. So 677 00:43:09,200 --> 00:43:12,680 Speaker 1: you send an avatar robot on a rescue mission into 678 00:43:12,760 --> 00:43:16,160 Speaker 1: a place that's very dangerous, like after an earthquake, with 679 00:43:16,200 --> 00:43:19,799 Speaker 1: collapsed buildings and dangerous chemicals, and you feel what the 680 00:43:19,840 --> 00:43:23,600 Speaker 1: avatar robot is feeling.
So you can close the feedback 681 00:43:23,680 --> 00:43:28,280 Speaker 1: loop between action and perception. And we're interested in using 682 00:43:28,320 --> 00:43:32,360 Speaker 1: this for the military to reduce friendly fire, which is 683 00:43:32,360 --> 00:43:35,000 Speaker 1: when a person gets killed just because one of their 684 00:43:35,040 --> 00:43:38,080 Speaker 1: colleagues makes a mistake and shoots them. So with our 685 00:43:38,239 --> 00:43:43,239 Speaker 1: chest strap and some encrypted position information, you can tell 686 00:43:43,280 --> 00:43:47,040 Speaker 1: where your friendlies are at any moment because you're feeling them. 687 00:43:47,080 --> 00:43:50,360 Speaker 1: You know their location right on your body, like Fred 688 00:43:50,480 --> 00:43:52,600 Speaker 1: is off to my left because I can feel a 689 00:43:52,640 --> 00:43:55,160 Speaker 1: slight vibration, but now he's getting closer to me, so 690 00:43:55,200 --> 00:43:57,799 Speaker 1: the vibration gets more intense. And now I know that 691 00:43:57,840 --> 00:44:00,600 Speaker 1: Steve is behind the wall over there because I can 692 00:44:00,680 --> 00:44:03,919 Speaker 1: feel him moving around even though I can't see him, 693 00:44:04,239 --> 00:44:07,319 Speaker 1: and Tom is behind me back there. You don't have 694 00:44:07,400 --> 00:44:11,640 Speaker 1: to rely on vision because you're feeling where everyone is. 695 00:44:12,880 --> 00:44:16,120 Speaker 1: So with one of our engineers, Mike Perata, we built 696 00:44:16,160 --> 00:44:20,160 Speaker 1: a version of this and we demonstrated it by turning 697 00:44:20,200 --> 00:44:23,400 Speaker 1: to fiction. We had our vest make a cameo on 698 00:44:23,520 --> 00:44:28,360 Speaker 1: the show Westworld.
So if you saw season two, episode seven, 699 00:44:28,880 --> 00:44:34,560 Speaker 1: the storyline is that private military contractors drop into Westworld 700 00:44:34,600 --> 00:44:38,439 Speaker 1: to take care of these out of control robots called 701 00:44:38,440 --> 00:44:40,719 Speaker 1: the Hosts. And we set this up so that the 702 00:44:40,760 --> 00:44:44,960 Speaker 1: military contractors in the show are wearing our vests that 703 00:44:45,080 --> 00:44:49,239 Speaker 1: let them feel the location of the Hosts on their bodies, 704 00:44:49,560 --> 00:44:52,880 Speaker 1: and that's how they know exactly how to target them. 705 00:44:53,200 --> 00:44:55,680 Speaker 1: So as they're moving around, they can feel, oh, there's 706 00:44:55,680 --> 00:44:57,960 Speaker 1: a robot over there, and there's a robot on the 707 00:44:58,000 --> 00:45:00,120 Speaker 1: other side of that thing, and there's a robot in the 708 00:45:00,200 --> 00:45:04,279 Speaker 1: dark over there, and they can aim at them appropriately. Now, 709 00:45:04,320 --> 00:45:07,560 Speaker 1: as it turns out, all the military contractors eventually get killed, 710 00:45:07,960 --> 00:45:10,359 Speaker 1: so the vest is not necessarily going to save your 711 00:45:10,360 --> 00:45:13,440 Speaker 1: life if things really hit the fan with robot consciousness, 712 00:45:13,440 --> 00:45:16,480 Speaker 1: but that's a different episode, and we've used this same 713 00:45:16,640 --> 00:45:21,040 Speaker 1: concept for people who are blind. We set this up 714 00:45:21,080 --> 00:45:24,080 Speaker 1: in collaboration with some colleagues at Google who have 715 00:45:24,239 --> 00:45:27,400 Speaker 1: lidar in their offices.
Lidar is like sonar, but 716 00:45:27,440 --> 00:45:29,759 Speaker 1: with light, and so with lidar you can know 717 00:45:29,800 --> 00:45:34,200 Speaker 1: the location of everything and everybody moving around in the offices, 718 00:45:34,719 --> 00:45:37,760 Speaker 1: and we tapped into that data stream and we brought 719 00:45:37,800 --> 00:45:42,480 Speaker 1: in blind participants and they could feel where everyone was. 720 00:45:42,560 --> 00:45:44,320 Speaker 1: So if there's someone on your right, you feel a 721 00:45:44,400 --> 00:45:46,279 Speaker 1: vibration on your right, and as they get closer, it 722 00:45:46,280 --> 00:45:48,400 Speaker 1: gets more intense, and as they go away it gets 723 00:45:48,680 --> 00:45:51,400 Speaker 1: less intense, and you can feel them moving around you 724 00:45:51,480 --> 00:45:54,680 Speaker 1: and you can even feel when they're walking around behind you, 725 00:45:55,200 --> 00:45:59,239 Speaker 1: which is something even sighted vision can't give you. And on top of that, 726 00:45:59,320 --> 00:46:03,719 Speaker 1: we also added navigation.
So our participants had never been 727 00:46:03,760 --> 00:46:07,120 Speaker 1: to these offices before, but we type into the system 728 00:46:07,680 --> 00:46:11,160 Speaker 1: a particular conference room to go to, and the person 729 00:46:11,200 --> 00:46:14,040 Speaker 1: then feels on their vest a buzzing on the front, 730 00:46:14,280 --> 00:46:16,200 Speaker 1: so they walk straight and then they feel a buzz 731 00:46:16,239 --> 00:46:18,799 Speaker 1: on their left, and they turn left and then they 732 00:46:18,800 --> 00:46:21,680 Speaker 1: feel a diagonal buzz and they know that the conference 733 00:46:21,760 --> 00:46:24,440 Speaker 1: room is diagonally over there, and they were able to 734 00:46:24,560 --> 00:46:29,000 Speaker 1: navigate this way on top of feeling who is around them, 735 00:46:29,680 --> 00:46:32,520 Speaker 1: and so in this way, they're not getting real vision, 736 00:46:32,600 --> 00:46:36,640 Speaker 1: but they're getting a lot of incredibly important information in 737 00:46:36,680 --> 00:46:40,640 Speaker 1: a very simple way. And there's really no end to 738 00:46:40,719 --> 00:46:46,040 Speaker 1: the possibilities on the horizon with sensory substitution and sensory expansion. 739 00:46:46,400 --> 00:46:50,040 Speaker 1: One experiment we did involves using these smartwatches that 740 00:46:50,120 --> 00:46:53,279 Speaker 1: can measure things like your heart rate and heart rate variability 741 00:46:53,320 --> 00:46:57,000 Speaker 1: and galvanic skin response, and so we tapped into the 742 00:46:57,080 --> 00:47:00,600 Speaker 1: API for that and we put the data on the internet, 743 00:47:00,800 --> 00:47:04,040 Speaker 1: and then you feel that on the wristband, so you 744 00:47:04,080 --> 00:47:07,719 Speaker 1: can feel these normally invisible states of your body.
But 745 00:47:07,800 --> 00:47:11,120 Speaker 1: the interesting part is when you take the watch off 746 00:47:11,239 --> 00:47:13,600 Speaker 1: and give it to someone else, let's say your spouse, 747 00:47:13,760 --> 00:47:18,360 Speaker 1: so that now you are feeling the physiologic responses of 748 00:47:18,440 --> 00:47:23,759 Speaker 1: another person. You're tapped into their internal signals. Now, I 749 00:47:23,760 --> 00:47:26,160 Speaker 1: have no idea if this is good or bad for marriages, 750 00:47:26,239 --> 00:47:29,560 Speaker 1: but this is an experiment we're trying because humans are 751 00:47:29,600 --> 00:47:32,279 Speaker 1: at a point now where we can open up new 752 00:47:32,400 --> 00:47:35,360 Speaker 1: folds in the possibility space. There are things we can 753 00:47:35,680 --> 00:47:39,680 Speaker 1: experiment with to have new kinds of senses and bodies, 754 00:47:40,040 --> 00:47:43,160 Speaker 1: and we can feel not only other people's physiology, 755 00:47:43,280 --> 00:47:48,960 Speaker 1: but also things like entire factories or traffic patterns. In general, 756 00:47:49,400 --> 00:47:52,920 Speaker 1: what this gives us is a new approach to data. 757 00:47:53,000 --> 00:47:57,040 Speaker 1: Our visual systems are fundamentally really good at blobs and 758 00:47:57,200 --> 00:48:00,200 Speaker 1: edges and motion, but they're limited in what they can 759 00:48:00,200 --> 00:48:02,440 Speaker 1: attend to. They can only do one thing at a time, 760 00:48:02,840 --> 00:48:05,920 Speaker 1: and that's not very good for high dimensional data. But 761 00:48:06,000 --> 00:48:09,200 Speaker 1: your body is very good at multidimensional data, which is 762 00:48:09,200 --> 00:48:12,000 Speaker 1: why you can balance on one leg and you're getting 763 00:48:12,080 --> 00:48:15,000 Speaker 1: feedback from all these different muscle groups.
You're taking in 764 00:48:15,120 --> 00:48:18,240 Speaker 1: high dimensional data and dealing with it all at once. 765 00:48:19,200 --> 00:48:22,319 Speaker 1: And with the right sorts of data compression, I think 766 00:48:22,360 --> 00:48:26,319 Speaker 1: there's no limit to the kind of data that we 767 00:48:26,360 --> 00:48:30,240 Speaker 1: can take in. We have about seventy different experiments running 768 00:48:30,280 --> 00:48:33,239 Speaker 1: on this, and if you're interested, go to neosensory dot 769 00:48:33,239 --> 00:48:36,920 Speaker 1: com slash developers and you can see all the various 770 00:48:36,920 --> 00:48:41,040 Speaker 1: cool projects that we and the community in general have done. 771 00:48:41,360 --> 00:48:46,120 Speaker 1: So the possibilities are endless here. Just imagine an astronaut 772 00:48:46,200 --> 00:48:49,879 Speaker 1: being able to feel the overall health of the International 773 00:48:49,920 --> 00:48:53,000 Speaker 1: Space Station, or for that matter, having you feel the 774 00:48:53,040 --> 00:48:57,040 Speaker 1: invisible states of your own health, like your blood sugar 775 00:48:57,120 --> 00:49:00,080 Speaker 1: and the state of your microbiome, or having three 776 00:49:00,160 --> 00:49:04,800 Speaker 1: hundred and sixty degree vision or seeing in infrared or ultraviolet. 777 00:49:05,320 --> 00:49:08,640 Speaker 1: So the key is this: as we move into the future, 778 00:49:09,040 --> 00:49:12,640 Speaker 1: we're going to be increasingly able to choose our own 779 00:49:12,680 --> 00:49:16,840 Speaker 1: peripheral devices. We don't have to wait for Mother Nature's 780 00:49:17,320 --> 00:49:21,439 Speaker 1: sensory gifts on her time scales, because instead, like any 781 00:49:21,560 --> 00:49:25,319 Speaker 1: good parent, what she's given us are the tools that 782 00:49:25,360 --> 00:49:29,720 Speaker 1: we need to go out and define our own trajectory.
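For readers curious about the "data compression" step mentioned here, one minimal way to picture it is bucketing a long vector of sensor readings into a handful of motor intensities. The channel count, the bucketing scheme, and the 0-255 scale below are all illustrative assumptions, not the lab's actual method.

```python
# Toy sketch: squeeze a high-dimensional signal into a few vibration channels.
# Normalize the readings, then average contiguous buckets into motor intensities.

def compress_to_channels(values, n_channels):
    """Reduce a list of raw readings to n_channels intensities in 0-255."""
    if not values:
        return [0] * n_channels
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0                      # avoid divide-by-zero on flat input
    normalized = [(v - lo) / span for v in values]
    bucket_size = max(1, len(normalized) // n_channels)
    channels = []
    for i in range(n_channels):
        bucket = normalized[i * bucket_size:(i + 1) * bucket_size]
        avg = sum(bucket) / len(bucket) if bucket else 0.0
        channels.append(round(avg * 255))
    return channels

# e.g. 32 raw readings squeezed down to 4 motor intensities
readings = [float(i) for i in range(32)]
print(compress_to_channels(readings, 4))  # prints [29, 95, 160, 226]
```

Real systems would use smarter compression tuned to what the skin can discriminate, but the shape of the problem is the same: many input dimensions in, a fixed small set of vibration channels out.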
783 00:49:30,320 --> 00:49:34,200 Speaker 1: So the question now is how do you want to 784 00:49:34,280 --> 00:49:43,320 Speaker 1: experience your universe? That's all for this week. To find 785 00:49:43,320 --> 00:49:45,680 Speaker 1: out more and to share your thoughts, head over to 786 00:49:45,719 --> 00:49:50,239 Speaker 1: eagleman dot com slash podcasts. Any questions or discussions that 787 00:49:50,320 --> 00:49:54,719 Speaker 1: you have, please email podcasts at eagleman dot com and 788 00:49:54,760 --> 00:49:59,160 Speaker 1: I will be addressing those on future episodes. Until next time, 789 00:49:59,239 --> 00:50:01,840 Speaker 1: I'm David Eagleman, signing off to you from the 790 00:50:01,920 --> 00:50:02,960 Speaker 1: Inner Cosmos.