Speaker 1: Welcome to TechStuff, a production of iHeartRadio's How Stuff Works.

Speaker 1: Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeartRadio and How Stuff Works, and I love all things tech. A couple of recent stories in the summer of twenty nineteen have been about the subject of brain-computer interfaces, or BCIs. That's a topic I've touched on in previous episodes of TechStuff, and if you listened to the show Forward Thinking, we covered it on that show as well. But since we've now got people like Elon Musk and Mark Zuckerberg behind the efforts of creating BCIs, I figured it would be a good time to revisit the topic, talk about what it is, how far along (or not far along) we are with the technology, and the ethical considerations we need to keep in mind when we're developing tech like this.

Speaker 1: So a brain-computer interface is exactly what it sounds like. It's a methodology to allow a user to control or interact with a computer directly through brain activity, through thought. It marries the complicated subjects of neuroscience and computer science, and a lot of media outlets sort of gloss over how truly complicated this is. We have a tendency to either think of our brains as being kind of like computers, or of computers as being kind of like brains, but really they're quite different, and creating an interface that can translate the operations of one so that it makes sense to the other is harder than it sounds. The goal of a brain-computer interface is to strip away as much of the barrier between our intent and the computer's actions as possible, to get beyond the limitations of other types of interfaces.

Speaker 1: So let's talk about those other interfaces for a second to kind of have a comparison here. In the very early days of computers, like the earliest electromechanical computers, the interface was incredibly complicated.
Speaker 1: It consisted of switches and plugs, so you'd have to physically make changes to the machine to run a different calculation. You programmed it by physically changing the connections. Operating a computer required learning a pretty intricate system, so there was a very high barrier to using computers. But on the other hand, there were hardly any computers to use, so it wasn't like people were stumped all the time. It wasn't like you were in the IT department looking at a manual that was five thousand pages long. There were only a few computers in the world at all.

Speaker 1: Now, gradually this gave way to other interface systems, and at first they were still incredibly complicated, at least by today's standards. The punch cards of yesterday are a really good example. You could feed a series of punch cards, which represented a program, to a computer. The computer would read the punch cards, make whatever calculations were indicated by those punch cards, and then it might in turn spit out a different set of punch cards, or it might light up some indicator lights. Maybe, if you were lucky, you had a printer and it would print out a result. But boy, it was still a pretty tough barrier to entry as far as computer use was concerned. It wasn't something the average person could tackle on his or her own.

Speaker 1: Now, a huge breakthrough was the incorporation of computer displays and keyboards, and there were other advances in computers at the time that also made a huge difference, like the development of operating systems and high-level programming languages. And we obviously still use keyboards and displays today, so these were really sticky types of interfaces, so to speak. Actually, that could be literal if you tend to drink sugary sodas while you're computing, but I'm mostly talking about the metaphorical here. Anyway, the computer mouse would then expand how we would interact with computers, as would the graphical user interface, or GUI.
Speaker 1: This would allow us to have new ways to interact with our machines, and then we would see further advancements like voice recognition systems and touchscreen interfaces. It was pretty typical that each advance in technology, if it was implemented well, would make interactions with computers easier and more natural. So when you see a kid look at a screen, and the kid has never really played with a keyboard, mouse, or even touchscreens yet, you might see them reach out and try to touch things on the screen. Well, that tells you, oh, a touchscreen might work better for certain things. Maybe not everything, but certain things. And then you start to implement that kind of interface, and lo and behold, you see, I've created a new way to interact with this machine.

Speaker 1: Well, brain-computer interfaces would remove even those small gaps between our intent and executing a command on a computer. Ideally, you would have a noninvasive technology, meaning you wouldn't have to have any kind of surgery or anything in order to actually use this stuff, and that technology would be able to interpret your thoughts as commands, and then the computer would carry those out. And the computer could potentially send information back to you through those same channels that you could interpret in some meaningful way. And there are a lot of potential uses for this kind of technology, and many of those uses are truly noble in their mission. For example, and I'll talk a lot about this in this episode, it could allow people who have severe mobility issues an outlet for interacting with the world around them that they might not otherwise have. With the proper interface, someone who is paralyzed and may not be able to move or even speak could use the interface to activate commands on a computer in order to communicate with others or carry out tasks with the help of robotics and automated systems.
Speaker 1: We've actually seen applications of brain-computer interfaces do this kind of thing already, to a limited degree, and frankly, it's amazing and inspiring. I highly recommend you seek out stories and videos about these types of projects, because they are phenomenal. But there are use cases beyond helping people gain more autonomy, and some of them are a bit, well, let's say they're a bit questionable. So let's walk down the history of brain-computer interfaces, and then we will revisit these specific examples, as well as what is currently going on with Elon Musk and Facebook getting into the game.

Speaker 1: Well before such a thing could even be theorized as a brain-computer interface, we first had to understand more about how the brain itself works. And this is a nontrivial thing. The brain was largely an organ of mystery for a very long time. In the late nineteenth century, physicians and scientists were first starting to learn that there is electrical activity in the brains of mammals. We started to get an understanding that our nervous system is an electrochemical system, that electricity and chemicals play a very important part in sending messages through this system in a very sophisticated way. Now, this was the same time that physicists were getting a better understanding about energy, and so there was a curiosity about energy in the brain. The brain does stuff. It must get energy, it must use energy. What is that mechanism? What the physicists of the time didn't yet understand was that the brain was this electrochemical machine. They didn't have a complete picture yet.

Speaker 1: So for a few decades, research, mostly with animals like dogs, rabbits, and monkeys, showed that brains generated electrical activity in some fashion, and by the early twentieth century we had a rudimentary understanding of brain waves. Then Hans Berger, a German psychiatrist, recorded the first human EEG in the mid nineteen twenties. Now, Berger was interested in investigating psychical energy in the brain.
Speaker 1: He was convinced that there is some energy beyond what is needed to do quote unquote work, that being thinking and operating the human body. He never did uncover any sort of psychical energy in his research, but his invention of the electroencephalogram would set the stage for neuroscience in the twentieth century. And I'll have to do a full episode about Berger in the future, because he was a really interesting person. His life story is full of drama.

Speaker 1: Now, over the next several decades after Berger's invention of the EEG, or at least the refining of the EEG, since there were sort of precursors to the EEG before Berger got involved, anyway, over the following years, scientists and doctors refined their understanding of electrical activity in the brain, and they observed phenomena like REM sleep. They identified different types of brain waves. Neuroscientists also got a deeper understanding about what the different parts of the brain do and are responsible for, and that gets really super complicated. There are sections of the brain that are dedicated to very specific tasks, and major parts of the brain include stuff like the frontal lobe, the parietal lobe, the temporal lobe, the occipital lobe, the cerebellum, and more. And I am no neuroscientist, and to go into deep detail on all of these parts would necessitate at least a couple of episodes plus an expert on the subject matter. So I'm just gonna leave the general discussion of the brain with an acknowledgement that brains are really complicated.

Speaker 1: Now, there's still a ton that we don't know about the brain, and probably there's stuff we don't know that we don't know, but we've made a lot of progress, which has led some enterprising researchers, scientists, and technologists to look into ways to create an interface between the machine in our heads and the computers around us. In the nineteen sixties, a neurophysiologist named William Grey Walter demonstrated that the electrical signals in brains could do useful work outside of our noggins.
Speaker 1: And it was a fairly primitive demonstration, but an effective one. He had subjects who had electrode implants for EEGs. By the way, EEGs can either involve having surgical implants of electrodes, or electrodes that are part of, you know, the sticky pads that stick against the scalp. They have to be positioned in very precise places, but you can have invasive or noninvasive EEGs. What William Grey Walter was working with were the invasive types.

Speaker 1: So he had these people who were wired up with EEGs, and they were navigating through a slide show with an old slide show projector, and they were using a remote control to advance to the next slide. So when they were done looking at a slide, or if they were told to go to the next slide, they would push a button and it would go to the next slide. But what Walter didn't tell these people who had their EEGs hooked up to the system was that the remote control was inert. It didn't work at all. It was just a dummy remote. Rather, when the subject's brain sent the command "I'm going to use the remote now," the electrodes would pick up that brain activity and would send those signals on to an amplifier, which would boost the signal enough to send a command to the projector to go to the next slide. It's a very simple command, just the same sort of electrical impulse that the projector would get if you pushed the button.

Speaker 1: Now, the subjects reportedly were startled by this experience, because frequently they would make the decision that they were going to go to the next slide, and they would be in the process of pushing the button when the slide would change, would advance, before they had pushed the button. They said it started to feel like the slide projector had anticipated their action. It had guessed that they were ready to move on even though they had not yet pushed the button.
Speaker 1: And in a way, that's exactly what it had done, or rather, it was able to act faster than the subject was. And it raised some really interesting questions about consciousness, because the implication was that we can arrive at a decision to do something before we are actually aware of the decision we have made. And so, in theory, if you have a brain-computer interface, you might get the sensation that you're working with a machine that's actually anticipating what you want to do before you are aware that you wanted to do it, which is both kind of creepy and amazing. Now, in reality, it's because you wanted to do that thing, but your awareness of your desire hasn't caught up yet. Brains are funny things.

Speaker 1: It's also possible that those implanted electrodes, which can detect activity in relatively small regions of the brain, allowed for more precision when looking for signals that would indicate "I'm going to push the button," rather than signals that would indicate something like "blink now" or "eat soon" or whatever. So, in other words, it's very important to target the neurons that are going to be responsible for whatever activity you're looking for. You can't just have, you know, a general brain-reading device that's looking for any electrical activity in the brain. There's always electrical activity in the brain, so you have to be looking for precise activity, or else you would have a system that's constantly activating under no particular impulse.

Speaker 1: Jacques J. Vidal coined the phrase brain-computer interface in nineteen seventy-three. Vidal presented a plan toward establishing the technology for such an interface at the University of California at Los Angeles. And it should come as no surprise that one of the big organizations that has funded a lot of research into brain-computer interfaces is DARPA, or the Defense Advanced Research Projects Agency, in the United States.
Speaker 1: This is the part of the Department of Defense that oversees money that can be granted to projects that relate back to national security and defense strategies for the United States. Sometimes these projects have an obvious connection to national defense, such as research into new types of weaponry. Other times the connection might not be quite as clear, such as the DARPA Grand Challenges that helped bootstrap the development of driverless car technologies. But I think you could agree that with brain-computer interfaces, you could think of a lot of different potential uses to augment national defense with that kind of technology. So DARPA has funded a ton of research into BCIs, and much of that work has had incredible results. Now, I'm not just talking about a device that would let you control, say, a computer cursor with your mind, but technologies that would help people regain lost senses like hearing or vision. And it's all through stimulating neurons in specific ways, so it becomes a bidirectional communications channel. It's incredible stuff. And again, the subject matter is vast and it would require lots of episodes.

Speaker 1: But the bit I wanted to focus on in the early history was a project in nineteen seventy-four. It was called the Close-Coupled Man-Machine Systems Project, and later on it would undergo a name change. It would become known as biocybernetics. To quote the article "DARPA-funded efforts in the development of novel brain-computer interface technologies" in the April two thousand fifteen Journal of Neuroscience Methods, quote: This program investigated the application of human physiological signals, including brain signals as measured noninvasively using either EEG or magnetoencephalography (MEG), to enable direct communication between humans and machines, and to monitor neural states associated with vigilance, fatigue, emotions, decision making, perception, and general cognitive ability. The program yielded notable advancements such as detailed understanding of single trial sensory evoked responses in the EEG of human participants.
Speaker 1: These efforts demonstrated that neural activity in response to visual checkerboard stimuli alternating at different frequencies at each of four fixation points could be decoded in real time and used to navigate a cursor through a simple maze. End quote. Fascinating stuff. Now we're gonna take a quick break, but when we come back, I'll give a little bit more about the history and talk about the different approaches to brain-computer interfaces.

Speaker 1: Now, to detail every BCI project since the early nineteen seventies would take us hours. There have been countless. Some of them have led to amazing feats and breakthroughs; some revealed frustrating barriers and challenges that we've yet to overcome. I'll talk about a few more examples in a moment, and I should stress that I'm just kind of arbitrarily picking these examples, because there's been so much amazing work in this field. But before I get into that, I want to talk about one of the biggest challenges in the way of a robust brain-computer interface, and that's reading the signals of the brain reliably.

Speaker 1: So there are two broad categories you can consider when it comes to monitoring brain activity, and those would be invasive methods and noninvasive methods, or surgical and nonsurgical, typically. Though there are some methods that are considered noninvasive that still involve implanting stuff into the brain, it's just that it tends to be through less invasive procedures, like an injection as opposed to brain surgery. But generally, we're talking about technology that has to be surgically implanted on the brain, or technology that can monitor brain activity without first having to, you know, crack open a skull. And as you can imagine, this is a pretty big difference between these categories. So let's break down the pros and cons of each of them.
Speaker 1: So the cons with invasive approaches are pretty darn easy to anticipate, right? I mean, we use brain surgery as a stand-in for any activity that requires an incredible amount of knowledge, understanding, and skill to perform. It's right out there with rocket science. We do that because we know brain surgery is freaking hard, it's risky, and I think it's safe to say that the vast majority of people out there aren't too keen to undergo a surgical procedure unless the potential benefits are truly impressive, maybe life saving or life changing. Invasive methods typically involve either implanting electrodes directly into brain matter or using small sensor pads that essentially stick to the exterior of the brain. Implanting electrodes comes with its own set of challenges, and one is that it can cause scarring in the brain. If scar tissue forms near the electrode, it can interfere with the electrode's ability to pick up that electrical activity from neurons, so the scarring process can prevent the electrodes from being able to do their jobs. Another challenge is that sometimes an electrode could shift slightly in the brain, and even a small shift could mean the electrode would no longer be able to pick up signals from the targeted neurons.

Speaker 1: There have been some impressive advancements in getting around these challenges. Philip R. Kennedy of Emory University, which is just down the road from our office in Atlanta, developed a neural electrode with a tip encased in a tiny glass cone. Neurons would actually grow into the cone and reach the electrode. The cone helped protect the electrode from scarring, and the neurons growing into the cone helped it resist any shifting. Kennedy worked with a few patients to test the design and work out actual useful brain-computer interactions. One of those patients was a man named Johnny Ray, a man who was nearly immobile and incapable of communication after a severe stroke.
Speaker 1: Surgeons implanted electrodes in Ray's brain in March of nineteen ninety-eight, and Ray learned how to move a cursor on a screen. He was imagining that he was moving the cursor with his hand, like he was making hand movements, or imagining that, because he didn't have that capability anymore. He later learned to move a cursor on a screen to highlight letters, and then he would click on them, like with a mouse, except he did it by twitching his shoulders, one of the few muscle movements he could still do. When he was asked by the media what he felt when he moved the cursor, he spelled out the word "nothing," which doctors actually interpreted to mean that Ray no longer had to even imagine moving his hand anymore. His brain had become trained to move the cursor through thought alone, without having to have the hand as sort of an intermediate step. And this highlights one of the biggest advantages that the invasive methodology has over the noninvasive version. Implants have a more direct path to the neurons that they are monitoring. They are more precise, they're more finely attuned. They can pick up signals much more easily.

Speaker 1: Brown University professor John P. Donoghue is another pioneer using electrode implants as part of research into brain-computer interfaces. His team created a system called BrainGate, which initially had ninety-six electrodes arrayed on a small implant, and by small, I mean it measures about four millimeters per side. It's about the size of a baby aspirin, as Science Daily put it. The stories about BrainGate are pretty inspiring. People who have become paralyzed have undergone the surgical procedure to have the electrode array implanted in their brains, then they have gone through an extensive training period to learn how to use this technology. In that training period, they learn how to control some exterior technology with their thoughts. It might be a cursor on a screen, giving them the ability to communicate and run applications kind of like a computer mouse.
Speaker 1: It could be a robotic limb. And on top of that, there's been work to create systems that can replicate a sense of touch in the user. So not only can the person with the implants send commands to an external piece of technology, they can also experience tactile feedback as if that external tech were one of their natural limbs. So a person outfitted with a robotic arm connected to this type of interface could not just pick stuff up, which is already phenomenal with a robotic limb; they could actually feel how tightly they were holding the thing they picked up, and that becomes really important for things like fine motor skills. And this is incredible stuff. But I would still argue that it's fairly primitive, in the sense that I think we're just at the very dawn of being able to harness this type of technology. We've made some incredible strides, but there's a long way to go.

Speaker 1: Now, let's get back to the noninvasive approach. So a clear advantage here is that you don't have to have any sort of surgical procedure to make use of noninvasive technology. An EEG can be an example of a noninvasive approach. Right, you just have those electrodes that you slap onto your scalp; you don't have to have a transcranial system. So you can have EEGs that are transcranial, meaning it requires brain surgery and you have wires that stick out through your cranium, through your skull, but you can have noninvasive ones too. But even with the electrodes on the scalp, we run into other problems, and a big one is that the signals in our brains aren't really that strong, electrically speaking, and our skulls are fairly decent at muffling those signals.
Speaker 1: Plus, if we're moving around a lot while wearing the rig we're using, it needs to remain steady, because otherwise we might end up misaligning things, and again we end up reading the wrong neurons, and then an irrelevant brain signal could initiate a command that we weren't intending to send. That's obviously a big challenge. Now, we can get a really good look at what's going on inside the brain using noninvasive technology like an MRI, but an MRI requires a person to lie very still inside a very large and very noisy machine for quite a long time, so it's not a practical solution if you want to build a brain-computer interface for day-to-day use. There's a lot of work going into finding a methodology to read brain signals, either directly or indirectly, through noninvasive means. Getting a method to a point where the precision and accuracy rivals the implanted electrodes is a nontrivial challenge.

Speaker 1: DARPA is funding a lot of research into that area. However, it stands to reason that if the agency wants to use BCI technology for defense purposes, it would be ideal to have a version that doesn't require the user to first undergo a surgical procedure. In May two thousand nineteen, the agency announced it was working with six different teams to explore noninvasive BCI strategies in what is called the Next-Generation Nonsurgical Neurotechnology, or N3, program. Included in those teams are people from Carnegie Mellon University, the Palo Alto Research Center, or PARC, and Teledyne Scientific, among others, and the proposals are really interesting. One from Battelle Memorial Institute proposes electromagnetic neurotransducers that are quote, non-surgically delivered to neurons of interest, end quote.
Speaker 1: They will then take electrical signals from the neurons and convert them into magnetic signals, which could then be picked up by an external transceiver, and the neurotransducers could also perform the same process in reverse, taking incoming magnetic fields, or magnetic fluctuations, and transmitting them as electric signals to neurons in the brain, so it could be bidirectional. Other methods include an acousto-optical approach, which means the team responsible plans to use ultrasonic signals to guide light into the brain to detect neural activity. There's a similar one, but it would use magnetic fields rather than light, while still using ultrasonic signals to generate localized electric currents in the brain. It's all really fascinating stuff, and it also quickly gets beyond my understanding of neuroscience and physics, so I won't spend a whole lot more time talking about them, but they are pretty darn nifty.

Speaker 1: In the meantime, researchers have been relying on the established EEG technology to do a lot of the groundwork for a noninvasive approach, but as I mentioned, that has some big limitations to it, so it's just a stepping stone, and there are other groups looking at different ways to measure brain activity for the purposes of an interface. Finding a method that is replicable and accurate is still a really hard thing to do, whether it's looking specifically at neuron activity or maybe something like keeping tabs on changes in blood flow in the brain, so you're looking at sort of an indirect indicator in those cases. At the same time, researchers are starting to rely upon machine learning strategies to help train the technology to determine whether or not any particular signal is a real hit or a false positive. So this is actually a multidisciplinary endeavor. It's going to rely on many different technologies as well as our understanding of neuroscience, which continues to grow.

Speaker 1: Okay, so we know about the tech, and we know a bit about history.
We know that still in fairly early 460 00:28:36,920 --> 00:28:39,840 Speaker 1: stages of development. When we come back, I'll talk about 461 00:28:39,840 --> 00:28:43,960 Speaker 1: Elon Musk, Facebook, and brain computer interfaces. But first let's 462 00:28:43,960 --> 00:28:55,040 Speaker 1: take another quick break. So in July two thousand nineteen, 463 00:28:55,440 --> 00:28:58,400 Speaker 1: one of the many tech stories to come out about 464 00:28:58,480 --> 00:29:01,680 Speaker 1: Elon Musk, because there's never a shortage of them, had 465 00:29:01,720 --> 00:29:05,800 Speaker 1: to do with the startup company Neuralalink. Now, for some people, 466 00:29:05,920 --> 00:29:08,240 Speaker 1: this was the first they had ever heard of Musk's 467 00:29:08,280 --> 00:29:11,800 Speaker 1: interest in creating a brain computer interface, but in fact 468 00:29:11,960 --> 00:29:13,920 Speaker 1: he had been talking about this kind of thing since 469 00:29:13,960 --> 00:29:17,400 Speaker 1: at least two thousand and sixteen. At the CODE Conference 470 00:29:17,480 --> 00:29:19,720 Speaker 1: of two thousand and sixteen, he talked about a ton 471 00:29:19,760 --> 00:29:24,760 Speaker 1: of stuff, including a technology called neural lace. Neural lace 472 00:29:24,880 --> 00:29:28,160 Speaker 1: is a term for a mesh of electrodes that could 473 00:29:28,240 --> 00:29:31,880 Speaker 1: graft into the brain through a simple injection in the 474 00:29:32,400 --> 00:29:37,000 Speaker 1: ideal implementation, so no full brain surgery was would be needed, 475 00:29:37,480 --> 00:29:40,640 Speaker 1: and ideally it would be wireless and offer the chance 476 00:29:40,680 --> 00:29:44,280 Speaker 1: to interact with computer systems through thought alone, which is 477 00:29:44,280 --> 00:29:48,320 Speaker 1: pretty nifty, but it's also essentially science fiction, at least 478 00:29:48,400 --> 00:29:52,000 Speaker 1: in that incarnation. Not that the idea has no merit, 479 00:29:52,280 --> 00:29:55,560 Speaker 1: but rather, we hadn't any real clue on how to 480 00:29:55,640 --> 00:29:58,520 Speaker 1: go about doing it. Yet it's only a little bit 481 00:29:58,560 --> 00:30:00,480 Speaker 1: better than saying, you know, it's sure would be nice 482 00:30:00,520 --> 00:30:05,040 Speaker 1: if we had teleporters. Well, yeah, it would be nice, 483 00:30:05,560 --> 00:30:08,080 Speaker 1: But that doesn't mean we can suddenly build teleporters just 484 00:30:08,120 --> 00:30:10,760 Speaker 1: because it would be nice to have them now. In 485 00:30:10,800 --> 00:30:15,160 Speaker 1: two thousand sixteen, Musk said he was interested in developing 486 00:30:15,200 --> 00:30:18,280 Speaker 1: this neural lace technology and if nobody else was going 487 00:30:18,320 --> 00:30:21,120 Speaker 1: to pursue it, he would do it himself, meaning he 488 00:30:21,160 --> 00:30:24,800 Speaker 1: would fund it himself. The next year, two thousand seventeen, 489 00:30:24,840 --> 00:30:27,400 Speaker 1: for those keeping score, he announced he was backing a 490 00:30:27,440 --> 00:30:31,120 Speaker 1: startup called Neuralalink, which would attempt to bring this dream 491 00:30:31,200 --> 00:30:33,520 Speaker 1: to life. Musk said at the time that one of 492 00:30:33,520 --> 00:30:36,720 Speaker 1: the biggest challenges was around bandwidth, or how much data 493 00:30:36,800 --> 00:30:39,880 Speaker 1: can pass through an interface in a given amount of time. 
494 00:30:40,160 --> 00:30:44,360 Speaker 1: I would argue that challenge it is a big one, 495 00:30:44,880 --> 00:30:47,920 Speaker 1: but it's further down the road than some of the 496 00:30:48,000 --> 00:30:51,560 Speaker 1: more immediate challenges. So why did Musk say that, I'll 497 00:30:51,600 --> 00:30:54,760 Speaker 1: get to that. The two thousand nineteen announcement was all 498 00:30:54,800 --> 00:30:57,720 Speaker 1: about giving a few more details about the general plan 499 00:30:57,840 --> 00:31:01,680 Speaker 1: to achieve this science fiction vision. And Neuralink is working 500 00:31:01,680 --> 00:31:05,720 Speaker 1: to create flexible threads of electrodes, and each thread would 501 00:31:05,760 --> 00:31:09,400 Speaker 1: have essentially an electrode array with a potential density of 502 00:31:09,520 --> 00:31:14,000 Speaker 1: three thousand, seventy two electrodes distributed across ninety six threads. 503 00:31:14,360 --> 00:31:18,000 Speaker 1: Now by comparison, brain Gates array had a hundred twenty 504 00:31:18,080 --> 00:31:22,240 Speaker 1: eight electrode channels in it, so this would be much 505 00:31:22,240 --> 00:31:26,080 Speaker 1: more dense. The threads themselves would only measure a few 506 00:31:26,160 --> 00:31:29,400 Speaker 1: microns in width and would be very very flexible, which 507 00:31:29,440 --> 00:31:32,680 Speaker 1: would hopefully cut down on the possibility of them shifting. 508 00:31:33,240 --> 00:31:35,160 Speaker 1: They would be able to to move with the brain 509 00:31:35,200 --> 00:31:39,840 Speaker 1: instead of remaining still with comparison to the brain and 510 00:31:39,880 --> 00:31:43,200 Speaker 1: Neuralink has worked on a robotic device that would automatically 511 00:31:43,200 --> 00:31:46,080 Speaker 1: embed the threads into the brain of a recipient. This 512 00:31:46,120 --> 00:31:49,960 Speaker 1: would require surgery. According to the Verge, this robotic device 513 00:31:50,040 --> 00:31:53,160 Speaker 1: looks like a cross between a microscope and a sewing 514 00:31:53,200 --> 00:31:57,560 Speaker 1: machine and it can implant up to six threads per minute. Now. 515 00:31:57,640 --> 00:31:59,960 Speaker 1: Musque stated the reason he was talking about neuralinks were 516 00:32:00,080 --> 00:32:03,000 Speaker 1: at the time was largely as a recruiting strategy to 517 00:32:03,040 --> 00:32:06,719 Speaker 1: get more talent to apply to work on the Neuralink team, 518 00:32:06,760 --> 00:32:09,360 Speaker 1: and his end goal is not to help those who 519 00:32:09,440 --> 00:32:13,800 Speaker 1: have severe mobility and communication limitations gained some autonomy, although 520 00:32:14,000 --> 00:32:16,520 Speaker 1: they will be some of the people that would first 521 00:32:16,840 --> 00:32:21,120 Speaker 1: be exposed to this technology. Instead, it's to create a 522 00:32:21,160 --> 00:32:25,120 Speaker 1: bridge between humanity and AI. And this might be why 523 00:32:25,200 --> 00:32:29,160 Speaker 1: Musk was talking about that barrier, that bandwidth barrier, because 524 00:32:29,520 --> 00:32:32,560 Speaker 1: for there to be a meaningful exchange of data, you 525 00:32:32,600 --> 00:32:35,080 Speaker 1: need to be able to move a lot of information 526 00:32:35,200 --> 00:32:39,000 Speaker 1: very quickly back and forth. 
Speaker 1: And Musk has made it pretty clear that he is concerned about the possibility that AI could bring about an existential crisis for humanity. So to me, this sounds like an "if you can't beat them, join them" type of strategy. Musk seems to say the interface would serve as a step toward merging human and artificial intelligence, perhaps pushing humanity into a transhuman state. We'd no longer be human beings as we would classically define the term. Now, I have to stress again that such a future, if it is even possible, is still a long way away. The Neuralink approach has a long way to go just for basic functionality, and building a meaningful interface that can bring together human and artificial intelligence is another matter entirely. In fact, I'm not even sure what such a thing would mean. Would it mean enhancing human intelligence with AI? And if so, how would that work? How could a computer system and a brain work together like that, not just communicating back and forth, but working as a cohesive unit? I'm not really sure. I'm not sure if anybody is sure. Now, that's not to say it's not possible. It very well may be possible, but it's way beyond my humble understanding.

Speaker 1: Musk's vision is an interesting one, but it also raises a lot of ethical questions. Now, presumably this technology will not come cheaply, so who exactly would be able to afford such a bio-enhancement? So let's assume, for the sake of argument, that Musk's vision becomes reality, and that this technology works the way he intended it to, which I'm still not convinced is actually possible. But let's say it is possible and it happens. Would that mean we would actually see a new class system, one that essentially mirrors the massive divide between the most wealthy and the poorest people of today, but more so?
But more so, would we have 559 00:34:36,719 --> 00:34:41,560 Speaker 1: a very small population of elite, rich, and enhanced people 560 00:34:41,680 --> 00:34:44,920 Speaker 1: overseeing a massive, you know, the rest of us? Because 561 00:34:44,920 --> 00:34:47,120 Speaker 1: I know I don't make enough money to fall into 562 00:34:47,160 --> 00:34:51,440 Speaker 1: the cyber human tax bracket. Again, we're so far away 563 00:34:51,480 --> 00:34:54,000 Speaker 1: from this being a pressing matter, but it's the sort of 564 00:34:54,080 --> 00:34:57,239 Speaker 1: question we have to ask when we talk about an 565 00:34:57,239 --> 00:35:02,320 Speaker 1: amazing future. Whose future are we talking about? Because 566 00:35:02,400 --> 00:35:05,920 Speaker 1: if it's not everyone's future, I think it kind of stinks. 567 00:35:06,840 --> 00:35:11,399 Speaker 1: Speaking of stinking, let's segue over to Facebook, and that 568 00:35:11,520 --> 00:35:15,279 Speaker 1: might betray my opinion on this next item in our 569 00:35:15,360 --> 00:35:19,160 Speaker 1: BCI discussion. So at the two thousand nineteen F 570 00:35:19,360 --> 00:35:24,040 Speaker 1: eight, or "fate," conference, which is Facebook's conference for developers, 571 00:35:24,320 --> 00:35:27,680 Speaker 1: one of the many presentations was on Facebook's efforts to 572 00:35:27,800 --> 00:35:30,680 Speaker 1: fund the development of what has been called a mind 573 00:35:30,880 --> 00:35:34,880 Speaker 1: reading device. So what gives? Well, researchers at the University 574 00:35:34,920 --> 00:35:38,719 Speaker 1: of California at San Francisco are helming this project, and 575 00:35:38,760 --> 00:35:42,800 Speaker 1: the ultimate goal, at least the ultimate short term goal, 576 00:35:43,280 --> 00:35:46,560 Speaker 1: is to create a non invasive device or method that 577 00:35:46,640 --> 00:35:49,600 Speaker 1: will allow a user to transmit words or commands to 578 00:35:49,640 --> 00:35:53,359 Speaker 1: a computer device through thought alone. And the short term 579 00:35:53,440 --> 00:35:56,040 Speaker 1: goal is to develop such a system that can handle 580 00:35:56,120 --> 00:35:59,120 Speaker 1: up to one hundred words per minute with a one thousand 581 00:35:59,200 --> 00:36:04,279 Speaker 1: word vocabulary, and an error rate below seventeen percent. Now 582 00:36:04,360 --> 00:36:07,880 Speaker 1: those parameters should already tell you that this goal is 583 00:36:07,920 --> 00:36:11,279 Speaker 1: a tough one. We have no way to take raw 584 00:36:11,440 --> 00:36:13,960 Speaker 1: brain data from the speech center of the brain and 585 00:36:14,040 --> 00:36:16,960 Speaker 1: figure out what a person is trying to say all 586 00:36:16,960 --> 00:36:20,760 Speaker 1: by itself, right? I couldn't just slap a headset onto 587 00:36:20,800 --> 00:36:23,880 Speaker 1: a person, have them think words, and know immediately what 588 00:36:23,920 --> 00:36:26,160 Speaker 1: they're saying. To get to that point, we actually have 589 00:36:26,200 --> 00:36:30,000 Speaker 1: to train a computer system to recognize certain brain patterns 590 00:36:30,239 --> 00:36:33,240 Speaker 1: that represent specific words in the speech center of the brain. 591 00:36:33,320 --> 00:36:36,360 Speaker 1: That's what the researchers have been working on.
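[Editor's aside: to make that training idea concrete, what's being described is essentially supervised classification: recordings of brain activity get labeled with the word that was actually spoken, and a model learns to map one to the other. Below is a minimal, made-up sketch of that concept in Python with scikit-learn; the feature shapes, the word list, and the random stand-in data are assumptions for illustration, not the UCSF team's actual pipeline.]

```python
# Minimal sketch of the kind of supervised training described above:
# neural recordings labeled with the word the subject actually spoke,
# used to fit a classifier that later guesses the word from brain data alone.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_trials, n_features = 200, 128                # e.g., one feature per electrode channel
X = rng.normal(size=(n_trials, n_features))    # stand-in neural features (random here)
answers = ["tired", "happy", "sad", "lonely"]
y = rng.choice(answers, size=n_trials)         # the word spoken aloud on each trial

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

decoder = LogisticRegression(max_iter=1000)
decoder.fit(X_train, y_train)                  # learn the pattern-to-word mapping

# Later, brain data is fed in without the label and the system guesses the word.
print(f"Held-out accuracy: {decoder.score(X_test, y_test):.2f}")
# With random stand-in data this hovers around chance (about 0.25 for four answers);
# the interesting question is how far above chance a decoder gets on real recordings.
```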
So, like 592 00:36:36,440 --> 00:36:39,080 Speaker 1: the other examples I've given, these researchers have been working 593 00:36:39,080 --> 00:36:43,280 Speaker 1: with volunteers who elected to have surgeons implant electrodes into 594 00:36:43,320 --> 00:36:46,680 Speaker 1: their brains. And these were volunteers who were already undergoing 595 00:36:46,680 --> 00:36:50,200 Speaker 1: surgical procedures to treat stuff like epilepsy, so it wasn't 596 00:36:50,200 --> 00:36:52,759 Speaker 1: like they just walked in off the street. They were 597 00:36:53,320 --> 00:36:56,560 Speaker 1: electing to do this in addition to other treatments they 598 00:36:56,560 --> 00:36:59,360 Speaker 1: were seeking. The subjects were then given a series of 599 00:36:59,440 --> 00:37:02,839 Speaker 1: multiple choice questions. Now these were questions that didn't have 600 00:37:02,880 --> 00:37:05,480 Speaker 1: a right or wrong answer, so you could get a 601 00:37:05,560 --> 00:37:08,480 Speaker 1: question like how are you feeling today, and then the 602 00:37:08,520 --> 00:37:13,200 Speaker 1: answers could include stuff like tired, happy, sad, lonely, that 603 00:37:13,280 --> 00:37:16,080 Speaker 1: kind of thing. That's just an example from my own head, 604 00:37:16,080 --> 00:37:18,200 Speaker 1: by the way. I don't know for a fact that 605 00:37:18,200 --> 00:37:22,720 Speaker 1: that was an example question from their procedure. The subjects 606 00:37:22,760 --> 00:37:25,279 Speaker 1: would then answer out loud. They would say what their 607 00:37:25,360 --> 00:37:29,799 Speaker 1: choice was verbally, and during the whole test, the researchers 608 00:37:29,800 --> 00:37:33,120 Speaker 1: would record the brain activity in the subject's speech center 609 00:37:33,239 --> 00:37:35,600 Speaker 1: as it was going on. Doing this over and over 610 00:37:35,640 --> 00:37:39,359 Speaker 1: would establish a sort of picture, neurologically speaking, of how 611 00:37:39,440 --> 00:37:43,399 Speaker 1: specific responses quote unquote looked in the brain. So when 612 00:37:43,440 --> 00:37:47,840 Speaker 1: you were ready to say happy, the neurons in 613 00:37:47,880 --> 00:37:51,160 Speaker 1: your brain fire in a specific kind of pattern, and 614 00:37:51,600 --> 00:37:56,080 Speaker 1: the electrodes pick that up, and 615 00:37:56,160 --> 00:37:59,080 Speaker 1: it's kind of like making a picture. So if 616 00:37:59,080 --> 00:38:01,759 Speaker 1: the computer sees a picture that looks like that one, 617 00:38:02,000 --> 00:38:05,600 Speaker 1: it might interpret that you have said the word happy. 618 00:38:05,719 --> 00:38:08,880 Speaker 1: After training a machine learning algorithm on the data, the researchers 619 00:38:08,880 --> 00:38:12,000 Speaker 1: tried to test the system. They would feed brain 620 00:38:12,120 --> 00:38:14,840 Speaker 1: data into the system without telling the system what the 621 00:38:14,920 --> 00:38:18,040 Speaker 1: data referred to, and ask, all right, which question 622 00:38:18,160 --> 00:38:21,560 Speaker 1: was asked and which answer was given? So the system 623 00:38:21,560 --> 00:38:24,080 Speaker 1: tried to figure that out based upon the amount of 624 00:38:24,200 --> 00:38:28,000 Speaker 1: data it had gathered in its training process. It did 625 00:38:28,080 --> 00:38:31,759 Speaker 1: fairly well figuring out which question was asked, getting it 626 00:38:31,880 --> 00:38:34,560 Speaker 1: right about seventy five percent of the time, so three times out of four.
627 00:38:35,080 --> 00:38:38,680 Speaker 1: It was slightly less successful at guessing what the answer 628 00:38:39,000 --> 00:38:41,960 Speaker 1: given by the subject was. It was not as good at 629 00:38:42,040 --> 00:38:46,360 Speaker 1: that, but the success rate was still pretty impressive. 630 00:38:46,560 --> 00:38:49,400 Speaker 1: It's a long way away from the stated goal of 631 00:38:49,440 --> 00:38:51,759 Speaker 1: the project to get that error rate down below seventeen percent, 632 00:38:52,520 --> 00:38:56,000 Speaker 1: especially with a vocabulary of a thousand words. It's got 633 00:38:56,040 --> 00:38:59,080 Speaker 1: to get more complicated as the number of words increases, 634 00:38:59,360 --> 00:39:01,799 Speaker 1: because the more words the system has to identify, the 635 00:39:01,840 --> 00:39:03,200 Speaker 1: harder it gets. It has to be able 636 00:39:03,200 --> 00:39:07,000 Speaker 1: to recognize differences between each of those words to determine 637 00:39:07,040 --> 00:39:11,640 Speaker 1: which one was intended. Okay, so what does Facebook want 638 00:39:11,680 --> 00:39:14,799 Speaker 1: to do with this technology, assuming that they're able to 639 00:39:15,360 --> 00:39:18,040 Speaker 1: mature the technology and have it perform up to the 640 00:39:18,120 --> 00:39:20,400 Speaker 1: level that they want? Well, the company has said that 641 00:39:20,440 --> 00:39:22,120 Speaker 1: the goal is to create a system in which a 642 00:39:22,200 --> 00:39:25,680 Speaker 1: user can just think a command or message and send 643 00:39:25,719 --> 00:39:28,839 Speaker 1: it to a computer. So rather than look down at 644 00:39:28,840 --> 00:39:31,759 Speaker 1: your phone to dash off a quick text to your BFF, 645 00:39:32,040 --> 00:39:35,760 Speaker 1: you could concentrate and send that message by thought alone 646 00:39:35,880 --> 00:39:38,400 Speaker 1: to your phone, and then command it to send the message 647 00:39:38,400 --> 00:39:41,960 Speaker 1: onward without ever taking the phone out of your pocket 648 00:39:42,239 --> 00:39:45,200 Speaker 1: or out of a purse or whatever. You're just concentrating 649 00:39:45,239 --> 00:39:49,680 Speaker 1: and making it happen. Now, the skeptics among you might say, hey, Jonathan, 650 00:39:49,960 --> 00:39:53,239 Speaker 1: wouldn't you say that Facebook has a somewhat spotty reputation 651 00:39:53,280 --> 00:39:56,080 Speaker 1: when it comes to stuff like privacy and security? And 652 00:39:56,160 --> 00:39:59,560 Speaker 1: my response would be, you betcha. I'm sure the 653 00:39:59,560 --> 00:40:02,920 Speaker 1: company has anticipated this. Folks at Facebook have already said 654 00:40:03,320 --> 00:40:06,200 Speaker 1: that this system would only pick up words that were 655 00:40:06,200 --> 00:40:08,680 Speaker 1: in the speech center of the brain, and only words 656 00:40:08,719 --> 00:40:10,920 Speaker 1: that the system had been trained on for that matter, 657 00:40:11,360 --> 00:40:14,520 Speaker 1: and that it wouldn't pick up just random surface thoughts. 658 00:40:14,920 --> 00:40:17,920 Speaker 1: So you'd have to be thinking about saying the word 659 00:40:18,280 --> 00:40:22,319 Speaker 1: for it to be detected by the technology.
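[Editor's aside: that claim, that the system would only recognize words it had been trained on and only when you deliberately think about saying them, amounts to a closed-vocabulary decoder with some sort of rejection rule. Here's one hypothetical way to sketch the idea in Python, reusing the toy decoder from the earlier sketch; the threshold value and the helper itself are my own illustration, not Facebook's or UCSF's actual design.]

```python
# Hypothetical sketch of a closed-vocabulary decoder with a rejection rule:
# the model can only ever output words it was trained on, and it stays
# silent unless it is confident a trained word was deliberately intended.
import numpy as np

def decode_with_rejection(decoder, features, threshold=0.8):
    """Return a word from the trained vocabulary, or None if the decoder
    is not confident enough that a vocabulary word was intended."""
    probs = decoder.predict_proba(features.reshape(1, -1))[0]
    best = int(np.argmax(probs))
    if probs[best] < threshold:
        return None                      # untrained words or stray thoughts: no output
    return decoder.classes_[best]        # one of the words the system was trained on

# Usage with the toy decoder from the earlier sketch:
# word = decode_with_rejection(decoder, X_test[0])
# print(word or "<no confident word detected>")
```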
Presumably this 660 00:40:22,400 --> 00:40:25,400 Speaker 1: makes everything okay. I'm not quite ready to sign on 661 00:40:25,560 --> 00:40:29,520 Speaker 1: to that just now. But anyway, that being said, I 662 00:40:29,520 --> 00:40:32,399 Speaker 1: would imagine for the system to work, each user would 663 00:40:32,440 --> 00:40:37,080 Speaker 1: first have to train their individual instance of that system. 664 00:40:37,160 --> 00:40:39,919 Speaker 1: It's sort of like the old voice recognition programs out there. 665 00:40:40,440 --> 00:40:43,520 Speaker 1: You first had to go through a fairly extensive calibration 666 00:40:43,600 --> 00:40:46,719 Speaker 1: process with voice recognition systems; they had to learn your 667 00:40:46,920 --> 00:40:49,840 Speaker 1: voice in order to be able to respond properly. 668 00:40:50,320 --> 00:40:53,319 Speaker 1: I imagine you'd have to do something similar with a 669 00:40:53,560 --> 00:40:56,920 Speaker 1: mind reading system like this, where you'd have to actively 670 00:40:57,000 --> 00:40:59,920 Speaker 1: think about specific words in sort of a 671 00:41:00,040 --> 00:41:02,680 Speaker 1: tutorial in order to train the system on how your 672 00:41:02,760 --> 00:41:06,040 Speaker 1: brain lights up when you are thinking those words. To 673 00:41:06,080 --> 00:41:08,560 Speaker 1: put it another way, the neurons in my head might 674 00:41:08,640 --> 00:41:11,160 Speaker 1: light up in a slightly different way when I say the 675 00:41:11,160 --> 00:41:14,239 Speaker 1: word cat than they would in your head when you 676 00:41:14,280 --> 00:41:17,239 Speaker 1: say the word cat, and each person would need to 677 00:41:17,239 --> 00:41:21,080 Speaker 1: make sure their version of this technology understood how they thought. 678 00:41:21,640 --> 00:41:23,760 Speaker 1: But that also means putting in a lot more prep 679 00:41:23,800 --> 00:41:26,320 Speaker 1: time before you can actually use the technology to dash 680 00:41:26,360 --> 00:41:29,600 Speaker 1: off an email or something. Another potential use is as 681 00:41:29,680 --> 00:41:33,840 Speaker 1: a hands free interface for technology like augmented reality glasses, 682 00:41:34,320 --> 00:41:37,319 Speaker 1: which frankly makes me even more worried. You can see 683 00:41:37,320 --> 00:41:40,319 Speaker 1: the use of such technology right away. You could wear 684 00:41:40,400 --> 00:41:43,719 Speaker 1: a pair of these glasses, which can overlay digital information 685 00:41:43,719 --> 00:41:45,880 Speaker 1: on top of your view of the world around you, 686 00:41:46,120 --> 00:41:48,279 Speaker 1: so you could stare at a building, for example, and 687 00:41:48,320 --> 00:41:51,960 Speaker 1: think, what address is that? And just by thinking that, 688 00:41:52,000 --> 00:41:54,600 Speaker 1: the AR headset could consult the Internet and come 689 00:41:54,600 --> 00:41:58,400 Speaker 1: back with some information and say, that is this address 690 00:41:58,480 --> 00:42:02,320 Speaker 1: on this street, which is pretty useful. But let's paint 691 00:42:02,320 --> 00:42:07,360 Speaker 1: a more terrifying scenario. Facebook has an enormous amount of 692 00:42:07,400 --> 00:42:12,560 Speaker 1: information on millions, in fact billions, of people.
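[Editor's aside: that per-user training pass described just above, where each person teaches their own instance of the system how their brain "lights up" for specific words, would look something like a prompted data-collection loop followed by fitting a personal model. A hedged, hypothetical sketch in Python follows; the vocabulary, the repetition count, and the recording stand-in are all assumptions for illustration, not any company's published procedure.]

```python
# Hypothetical per-user calibration pass: prompt the user to deliberately
# "say" each vocabulary word in their head several times, record features
# for each prompt, then fit a model tuned to that particular user.
import numpy as np
from sklearn.linear_model import LogisticRegression

VOCABULARY = ["yes", "no", "cat", "happy", "help"]   # illustrative word list
REPS_PER_WORD = 20

def record_neural_features(prompted_word):
    """Stand-in for the real recording step: in practice this would capture
    brain activity while the user thinks about saying `prompted_word`."""
    return np.random.default_rng().normal(size=128)

def calibrate_user():
    X, y = [], []
    for word in VOCABULARY:
        for _ in range(REPS_PER_WORD):
            X.append(record_neural_features(word))   # user is prompted with the word
            y.append(word)
    personal_model = LogisticRegression(max_iter=1000)
    personal_model.fit(np.array(X), np.array(y))     # model fit to this user's patterns
    return personal_model

personal_decoder = calibrate_user()
```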
So let's 693 00:42:12,560 --> 00:42:16,200 Speaker 1: say you've got a pair of Facebook branded augmented reality 694 00:42:16,280 --> 00:42:19,439 Speaker 1: goggles and it's got a brain computer interface as part 695 00:42:19,600 --> 00:42:22,719 Speaker 1: of the system, so you can just think commands and 696 00:42:22,760 --> 00:42:25,440 Speaker 1: the goggles will pick up on what you are asking 697 00:42:25,440 --> 00:42:28,440 Speaker 1: them to do. And because so many people use Facebook 698 00:42:28,600 --> 00:42:31,680 Speaker 1: and many people have public accounts, you could walk down 699 00:42:31,680 --> 00:42:34,359 Speaker 1: the street and get quick bits of information about all 700 00:42:34,400 --> 00:42:36,239 Speaker 1: the people you were looking at. You know, you'd get 701 00:42:36,280 --> 00:42:39,760 Speaker 1: facial recognition software; it recognizes who the person is and starts 702 00:42:39,760 --> 00:42:44,000 Speaker 1: pulling up information on them that's publicly available. Maybe 703 00:42:44,000 --> 00:42:46,400 Speaker 1: you even figure out how to exploit the system and 704 00:42:46,440 --> 00:42:50,520 Speaker 1: get access to information beyond what was allowed for the 705 00:42:50,560 --> 00:42:54,960 Speaker 1: general public. This could be a massive privacy problem. Now, again, 706 00:42:55,440 --> 00:42:58,919 Speaker 1: we are a long way away from that particular type 707 00:42:58,960 --> 00:43:02,520 Speaker 1: of technology becoming reality, but the possibility is there. There's 708 00:43:02,560 --> 00:43:05,680 Speaker 1: no denying Facebook has access to a stupendous amount of 709 00:43:05,719 --> 00:43:08,279 Speaker 1: information about all of us, and we don't even need 710 00:43:08,320 --> 00:43:11,120 Speaker 1: the brain computer interface for that to be a problem. 711 00:43:11,160 --> 00:43:14,520 Speaker 1: You could just have the AR glasses themselves with 712 00:43:14,600 --> 00:43:18,319 Speaker 1: a deep connection to Facebook's databases and a way of 713 00:43:18,320 --> 00:43:21,040 Speaker 1: interacting with them, even if it's with voice commands or 714 00:43:21,440 --> 00:43:24,560 Speaker 1: a mobile app or whatever, and you could still have these problems. 715 00:43:24,640 --> 00:43:27,520 Speaker 1: It's just that it seems even more insidious if you don't 716 00:43:27,560 --> 00:43:29,760 Speaker 1: have to do anything other than just stare at someone 717 00:43:29,920 --> 00:43:34,120 Speaker 1: and think it. It seems pretty spooky and creepy. And 718 00:43:34,120 --> 00:43:36,279 Speaker 1: it's these sorts of scenarios that remind us we have 719 00:43:36,440 --> 00:43:39,359 Speaker 1: to be careful as we develop technologies to make sure 720 00:43:39,400 --> 00:43:42,720 Speaker 1: that they are applied ethically without posing harm to others.
721 00:43:42,960 --> 00:43:46,000 Speaker 1: We've got to ask ourselves what are the consequences of 722 00:43:46,040 --> 00:43:50,840 Speaker 1: this technology, both the intended and unintended consequences, and who 723 00:43:51,040 --> 00:43:53,840 Speaker 1: benefits most from it, and who stands 724 00:43:53,880 --> 00:43:58,759 Speaker 1: to be victimized by it. Anyway, I think brain 725 00:43:58,800 --> 00:44:03,720 Speaker 1: computer interfaces really have great potential to do an enormous 726 00:44:03,760 --> 00:44:08,560 Speaker 1: amount of good, especially for people who otherwise have a 727 00:44:08,680 --> 00:44:11,600 Speaker 1: really difficult struggle just being able to interact with the 728 00:44:11,600 --> 00:44:13,960 Speaker 1: world around them and to have any sort of autonomy 729 00:44:13,960 --> 00:44:17,040 Speaker 1: at all, and to even just be able to communicate 730 00:44:17,080 --> 00:44:19,960 Speaker 1: with others. I think that alone makes it a 731 00:44:20,000 --> 00:44:23,480 Speaker 1: worthy endeavor to pursue, but we do need to make 732 00:44:23,520 --> 00:44:26,440 Speaker 1: sure that we're doing it for the right reasons and 733 00:44:26,480 --> 00:44:29,799 Speaker 1: we're not just doing it because somebody is scared that 734 00:44:29,920 --> 00:44:32,319 Speaker 1: robots are going to take over the world, or a 735 00:44:32,400 --> 00:44:36,600 Speaker 1: company really wants to know what you're thinking, because the 736 00:44:36,640 --> 00:44:38,759 Speaker 1: more data the company has about you, the better it 737 00:44:38,760 --> 00:44:42,440 Speaker 1: can sell things to you or sell you to other things. 738 00:44:44,320 --> 00:44:47,520 Speaker 1: Keeping that in mind is very important. That's it for 739 00:44:47,560 --> 00:44:50,560 Speaker 1: this episode. If you have any suggestions for future episodes 740 00:44:50,560 --> 00:44:53,239 Speaker 1: of tech Stuff, maybe something that's happy and fun and 741 00:44:53,440 --> 00:44:56,800 Speaker 1: not nearly as terrifying and Orwellian, send me a message. 742 00:44:56,960 --> 00:45:00,000 Speaker 1: The email is tech Stuff at how stuff works dot com, 743 00:45:00,120 --> 00:45:02,279 Speaker 1: or you can pop over to our website, that's tech 744 00:45:02,320 --> 00:45:05,160 Speaker 1: stuff podcast dot com. You'll find an archive of all 745 00:45:05,200 --> 00:45:07,720 Speaker 1: of our previous episodes on there, as well as links 746 00:45:07,760 --> 00:45:10,200 Speaker 1: to where we are on social media and a link 747 00:45:10,280 --> 00:45:12,839 Speaker 1: to our online store, where every purchase you make goes 748 00:45:12,880 --> 00:45:15,520 Speaker 1: to help the show, and we greatly appreciate it, and 749 00:45:15,680 --> 00:45:23,520 Speaker 1: I will talk to you again really soon. Tech Stuff 750 00:45:23,560 --> 00:45:25,880 Speaker 1: is a production of I Heart Radio's How Stuff Works. 751 00:45:26,080 --> 00:45:28,880 Speaker 1: For more podcasts from I Heart Radio, visit the I 752 00:45:29,000 --> 00:45:32,239 Speaker 1: Heart Radio app, Apple Podcasts, or wherever you listen to 753 00:45:32,280 --> 00:45:33,200 Speaker 1: your favorite shows.