Sleepwalkers is a production of iHeartRadio and Unusual Productions.

Jan had become paralyzed through a disease process such that she couldn't move anything below her neck, and she volunteered to have the surgical procedure in which we implanted electrodes in her head, and we were able to decode the signals from her brain, and she was able to move a high-performance prosthetic arm and hand.

That's Andy Schwartz, a professor of neurobiology at the University of Pittsburgh. He helped build the technology that allowed a patient called Jan Scheuermann to control a robotic arm with her mind. In order to do that, Andy's team needed to give the arm a way to know where Jan wanted it to move. They had to read signals directly from her brain and teach a computer to understand them. So they built a brain-computer interface, a BCI.

We implanted electrodes in her head. You have to put on these bulky connectors with thick cables going to a bank of amplifiers and computers.

It sounds like something out of a science fiction film. But once Andy could see Jan's neurons firing, reading her intentions was simpler than you might think.

What we found is that the rate that these neurons fire is related to the direction that the arm moves. When you add their signals together, you can get a very precise representation of that movement. It's a very, very simple algorithm, and it's like listening to a Geiger counter: each click of the Geiger counter is the same, but as you get closer to a radioactive source, those clicks come closer together.

Listening for those clicks was the breakthrough necessary to translate Jan's thoughts into signals that the robotic arm could understand, and this meant that Jan could move again. She could reach out and touch her husband. And Andy's team filmed Jan using the arm to complete a more playful goal that she'd set: to feed herself chocolate for the first time in ten years.

One small nibble for a woman, one giant bite for BCI.
Jan was able to do that with the robot, and not only that, but she made graceful and beautiful movements. That's what really blew me away. It looks much like a real arm and hand, and for someone who studies movement, it was really quite beautiful.

Andy is a scientist and a researcher not given to being sentimental, but the way he talks about Jan moving her robotic arm, it's like describing a dance, a ballet. It's one of the most inspiring examples of humans and machines working together in concert that we've come across in all of our reporting for Sleepwalkers. But it also raises profound questions about the future of our health, our bodies, and our society. What are the implications, positive and negative, as AI makes us ever more able to decode complex systems like the brain or the human genome? How is AI poised to change the world of medicine? I'm Oz Woloshin, and this is Sleepwalkers.

So, Kara, I found Andy's story completely mind-blowing, no pun intended. You know, it touches one of the last remaining mysteries. For the most part, neuroscience is still very much a black box problem. We know what's happening in the brain, but neuroscientists can't always know why and how, which is a lot like the problem of black box AI. We are moving quickly into a world where sensors can read us better and better. Now they can read pupil dilation, carbon dioxide in the breath, to understand people's emotions. Yeah, this is the stuff we talked about in episode four around using AI to better read biometrics with Poppy Crum. The difference in this case is that Andy isn't monitoring the outside of our bodies, so the privacy concern is less. To read Jan's brain, he had to drill into her skull and place electrodes onto the surface of her brain and then connect them to a computer, so that's unlikely to creep up on you. Google is not going to be doing that to get geolocation. Not yet.
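The decoding Andy describes, adding firing rates together until a direction emerges, is at heart the classic population-vector idea: each neuron fires fastest for its own preferred direction, and weighting those preferred directions by firing rate and summing recovers the intended movement. Here's a minimal sketch; the cosine tuning, neuron counts, and rates below are made-up illustrations, not Andy's actual decoder.

```python
import numpy as np

# A minimal population-vector decoder (hypothetical numbers throughout).
# Each neuron has a "preferred direction"; its firing rate rises as the
# intended movement aligns with that direction: Andy's Geiger-counter
# clicks coming faster near the source.

rng = np.random.default_rng(0)
n_neurons = 100

# Random unit vectors as each neuron's preferred 2-D direction.
angles = rng.uniform(0, 2 * np.pi, n_neurons)
preferred = np.stack([np.cos(angles), np.sin(angles)], axis=1)

def firing_rates(intended, baseline=10.0, gain=8.0):
    """Cosine-tuned rates: baseline + gain * cos(angle between the
    neuron's preferred direction and the intended movement)."""
    return baseline + gain * preferred @ intended

def decode(rates, baseline=10.0):
    """Add the signals together: weight each preferred direction by the
    neuron's rate above baseline, then sum and normalize."""
    weights = rates - baseline
    vec = (weights[:, None] * preferred).sum(axis=0)
    return vec / np.linalg.norm(vec)   # direction only

intended = np.array([np.cos(0.7), np.sin(0.7)])                  # what Jan wants
rates = firing_rates(intended) + rng.normal(0, 1.0, n_neurons)   # noisy spikes
print("intended:", intended, "decoded:", decode(rates))
```

With a hundred noisy neurons the decoded direction already lands very close to the intended one, which is why Andy can call it "a very, very simple algorithm."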
That said, Andy told me one of his big priorities is trying to figure out how to achieve the same effects without needing invasive surgery, so that people can use this technology at home.

Talking about reading the brain, one cutting-edge application for AI is to restore language. A few months ago, I was actually reading this paper in Nature, and I'm not sure all podcast listeners read Nature. Well, thanks, Kara, you do the hard work so that we don't have to. I guess that's why they pay me the big bitcoin. But you know, basically this article was about decoding the human brain to produce language based on how the brain tells the mouth to move.

Well, funny enough, this is something I spoke to Andy about a few months before your paper in Nature was published, Kara, and he was talking about exactly this.

If you record from motor areas associated with producing language, you can start to recognize certain words and phrases, even. I think that's realizable in the near term.

What Andy is saying is that to create spoken words directly from the brain, we don't actually need to read thoughts. We can just look at the last step, the moment when thought becomes action, as neurons fire to move our tongue, our lips, our jaw to create sound. And just like with Jan, an algorithm can listen to those neurons and allow thoughts to become actions. And this could transform lives.

So if you could start to recognize words and language from brain activity, that would be very helpful for people who are locked in with ALS and can't communicate.

We talked in the last episode about how many technological breakthroughs have come out of DARPA. That's the branch of the Defense Department charged with inventing technological surprises. Well, neural prosthetics, or robotic limbs controlled by the mind, have been an area of heavy investment for the agency. You may remember Gil Pratt from earlier in the series.
He's now the CEO of the Toyota Research Institute, but previously he was at DARPA, where he worked with none other than Andy Schwartz.

So he was involved in a project at DARPA which was called Revolutionizing Prosthetics, and the project that I started was to see if we could actually help some of the experimental patients that he had to perform even better.

So, Kara, I was interviewing Gil Pratt about his work at Toyota on self-driving cars, and we got talking about his interest in these human-machine partnerships, and I said, let me tell you a story about a scientist called Andy Schwartz and his patient Jan. And Gil was like, yeah, I worked on that. No, literally, it's funny. And it also shows the long arm of DARPA, pun intended. Again, that's far away from military applications. You know, DARPA's interest in prosthetics is actually because of veterans, many of whom lose limbs on the battlefield, which highlights again the dual-use nature of so much innovation. Right. And the work Gil was doing at DARPA with Jan was all about doing a better job of interpreting her brain waves using the existing models.

My group made a system called Arm Assist that watched what Jan was trying to do. It inferred: okay, now she's trying to pick up a block. Now she's trying to move it over to the left. Now she's trying to drop the block. You know, in a way very similar to if you use PowerPoint and you have snap-to-grid or snap-to-object turned on, it will help you move the mouse to where it thinks you want to go. This system helped her move the arm to where it inferred she wanted to go, based on a very noisy signal that was coming out of her brain.

That noise in the signal was partly because the connection between Jan's brain and the decoding computer was weak. So Gil's team at DARPA designed a program to boost Jan's intentions.

We tested her by randomly turning the assists on and off.
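Mechanically, that kind of assist can be as simple as blending the noisy decoded velocity with a small nudge toward the inferred goal. A hedged sketch follows; the function, weights, and coordinates are hypothetical stand-ins, not the actual Arm Assist code.

```python
import numpy as np

# A sketch of "help as little as possible": mix the user's decoded
# velocity with a velocity aimed at the inferred goal. The blend weight
# alpha is the assist level; everything here is a made-up illustration.

def assist_blend(decoded_vel, hand_pos, goal_pos, alpha=0.2):
    """Return a velocity that is mostly the user's decoded intent,
    gently corrected toward the goal the system thinks they want."""
    to_goal = goal_pos - hand_pos
    norm = np.linalg.norm(to_goal)
    if norm > 1e-9:
        # Match the user's speed so the nudge changes direction, not pace.
        to_goal = to_goal / norm * np.linalg.norm(decoded_vel)
    return (1 - alpha) * decoded_vel + alpha * to_goal

hand = np.zeros(3)
goal = np.array([0.3, 0.1, 0.2])            # inferred target, e.g. a block
noisy_intent = np.array([0.25, 0.2, 0.1])   # decoded from brain, off-target
print(assist_blend(noisy_intent, hand, goal))
```

At a low alpha the correction is small enough that, as Gil describes next, the user can't tell whether it's on or off, yet success rates still move.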
And you can think of the assist as a guardian, like what we're developing for the car to stop you from having a crash. And we had the guardian in this case help as little as possible, but still be effective at helping her to reach the goal. The amount of help that we gave her was so low that she couldn't tell whether it was on or off.

But when Guardian was turned off, Jan's success rate fell. And in a human-machine partnership, it's not just the computer that adapts to the human; the brain also adapts to the algorithm. And these algorithms are simple, and they rely on human-programmed rules. So when they see a cause, in this case enough neurons firing at the same time, they create an effect: a movement. But what's tantalizing is that effect, that output, hints at something far deeper and infinitely complex: personality.

I like Moby-Dick, where he talks about Captain Ahab walking on the deck of the ship, and by observing his movements, you can really understand what he's thinking about. If you think about movement, it's really a communication between your innermost thoughts and the outside world.

Andy and his team saw this come to life before their very eyes when they connected another subject, Nathan, to a neural prosthesis. The robotic arm moved in accordance with the personality of the user.

Jan, when she moved, was very careful and gentle, and Nathan is more of a video gamer. He's a younger guy. He's more of a competitor, so he would move much faster, and so he would pick up an object, and then instead of placing it carefully in a receptacle, he would basically toss it into the receptacle to be faster.

Today we can see the difference in Nathan's and Jan's personalities from how the brain controls a prosthetic arm. We can infer personality from output, but we don't even have sophisticated enough tools to ask why.

As we get better and better at these computational approaches, we'll gain a much better understanding of the way the brain works.
Rather than some major event causing some simple consequence, it's more like a perfect storm, where many factors come together to generate a consequence. And if we understand which of those events are important and how they're combined, then we can start to understand brain function better. If you see a cup and you're thirsty, if we could realize that what you want to do is to drink from it, then we could understand that you want to grasp the cup from the side. And if we could distinguish that from you grasping a cup with the intention of passing it to me, where you might hold it from the top, then we could do a better job generating the correct movement.

For Andy, the next frontier is more deeply understanding the human brain, Kara, by developing new models and better algorithms. And that's really a computer science problem as much as a medical problem. And then really the next frontier in medicine is all about using AI to decode complex interactions. And in fact, you reported a piece on exactly this phenomenon. Yeah, I spoke to a woman named Regina Barzilay. She's a computer scientist at MIT, and her own experience of how she was diagnosed with breast cancer actually inspired her to work on bringing AI into the realm of medicine. When we come back, we'll hear from Regina and take a look at other ways AI is changing diagnostics and the future of our health.

I remember still, I went to a mammogram, and they told me, you have high density, but you shouldn't worry. Half of the population have high-density breasts, so don't worry about it.

That's Regina Barzilay. Regina is a professor of Electrical Engineering and Computer Science at MIT.

I couldn't believe in it. I didn't feel anything. There was nothing wrong, you know. I was continuing my morning runs and being fine.

It was later on, after her mammogram, that Regina found out she had breast cancer, and as it often does, the news came as a surprise.
And if I'm looking at myself, I really cannot explain what was wrong, that I got this disease. I clearly didn't have any family history. I'm exercising, I'm eating healthy. And for many, many patients that I met during my own journey, their diagnosis came to them as the biggest surprise.

The thing is, according to current medical standards, breast cancer risk is based on a few factors. Are you a woman, and are you old? Do you have the BRCA gene? And do you have breast cancer in your family? But those are relatively simple inputs, and they don't account for what complex systems we are.

Above eighty percent of women who are diagnosed with breast cancer are the first in their families, so it's not clear, you know, what causes it.

According to the Susan G. Komen Breast Cancer Foundation, breast cancer is the most common cancer amongst women around the world. Every two minutes, a case of breast cancer is diagnosed in a woman in the United States, and every minute, somewhere on Earth, a woman dies of breast cancer. That's more than four hundred women per day. So if you're listening to this podcast, you probably know someone who has been or will be diagnosed with breast cancer in their lifetime. And with that many people affected, there are bound to be some oversights, like in Regina's case.

I discovered that my own diagnosis was delayed because the malignancy was missed in the previous mammograms, and I also discovered that this is not a unique experience. So the question I asked is, is it possible for us to know ahead of time what's to come? In other words, can we look into the future prior to our diagnosis?

Regina had already been a longtime computer scientist at MIT. She thought often about machine learning in her work, but this new personal hardship redirected her thinking.

When you go to the hospital, you see, like, real pain of other people. You see people who go through chemo and radiation. Even though the hospital is just one stop away from MIT,
I just was not aware there is so much suffering. And at that point, when I came back, I was thinking: we create so much, you know, exciting new technology, and I didn't see why we are not trying to solve it. It's just a travesty.

And so Regina and her colleagues at MIT began training a deep learning model on over ninety thousand mammograms from Mass General Hospital, and with such a large data set, they were able to predict a patient's risk of breast cancer by comparing one mammogram to tens of thousands of others, instantly.

My firm belief was that, despite the standard risk factors, there is a lot of information in women's breasts. A human eye, which has even seen, you know, thousands or tens of thousands of images over a lifetime, may not really be able to detect it with great clarity. However, a machine which is trained on tens of thousands of images where it knows the outcomes, it can identify the differences in pixel distribution that likely correlate to future things that may come.

The amount of data which you can train a computer on, versus a doctor, is massive, and the AI was able to detect smaller details than the human eye could pick up. So this does feel like a perfect application for the strengths of machine learning.

Fifteen months ago, we first put in our density model, which, for every mammogram that goes through Mass General, shows a prediction to the radiologists. Most of the time, radiologists agree with the machine, and when they disagree, the more experienced radiologists typically side with the machine.

So Regina's early efforts are doing really well, and the hope is that more hospitals around the country will begin using these models for early detection and risk assessment. And that's not only because understanding risk is important, but also because misdiagnoses have led to unnecessary surgeries.

I read this book, The Emperor of All Maladies.
It's a book about cancer, so I really like it. Let me just say, one particular moment really jumped out at me. It's about how male surgeons who were treating women with these surgeries firmly believed that the more you cut out of, you know, a woman's body, the better the likelihood.

In the early twentieth century, there was a surge of doctors performing radical mastectomies, removing the entire breast and thus permanently disfiguring a patient, and radical mastectomies became the norm for much of the twentieth century. The reasoning was that removing a lump leaves the risk of tumors growing elsewhere, so why take the risk? By the nineteen-eighties, it was clear that radical mastectomies weren't actually an effective treatment for many patients. But according to Regina, surgeons still remove too much tissue out of an abundance of caution.

The reason it happens, it's not because there is some evil doctor that wants to book another surgery. It's just because people are uncertain, and it's high risk, and many times a clinician would say, I am ready to go for the harshest treatment to minimize the chances. So what we demonstrated is that with machine learning, you can actually identify a significant percentage of the population that doesn't need this type of surgery.

Regina told me that had these deep learning models been in place at the time of her early mammograms, she might have detected her risk two years sooner. And in many cases, early detection makes a big difference in how a patient chooses to treat their cancer. Since developing the breast cancer detection model, Regina is now co-leading MIT's J-Clinic, which is a new initiative focusing on machine learning and health.

What I hope is that we as a society have advanced since then, and we are ready to bring, you know, the recent science to help women, even if it means that we need to change our beliefs about how risk assessment works.

And Regina hopes that as a society we can move toward greater acceptance of using machine learning to enhance medicine.
Whichever one does a better job, that one should prevail.

So, big question: how do you feel, Kara, about putting your health into the hands of an algorithm? You know, after speaking to Regina, who told me that she probably would have been diagnosed two years sooner using her models, it seems as though machine learning provides this unparalleled form of detection, right? Because the most seasoned doctors simply can't compare thousands of data points at once. I don't think the issue is algorithms replacing doctors. It's more a matter of equipping doctors with sharper tools so that they can do their jobs. They've got to provide a patient with information and allow that patient to make informed decisions. Algorithms don't have a bedside manner. You know, what you say about the sharper tools reminds me of our conversation about creativity in episode two, using AI to give artists, musicians, screenwriters new tools to do better work. At the same time, just like the art world, the medical profession sits on this enormous pedestal, where we have to trust what they say because most of us don't have the tools to question them. Right, unless you're on WebMD. Yeah, the bane of every doctor's life. Yeah, you know, doctors have a hard enough time explaining medicine to patients. Imagine them having to explain artificial intelligence. Well, from experience, we can say good luck to them.

What's crazy to me, actually, is that Regina and her co-author, this woman, Dr. Connie Lehman, who's a radiologist at Mass General, were rejected from every single federal grant they submitted at first. Why would they be rejected from all those federal grants? Because as much as there is a ton of buzz surrounding AI, I think people have to appreciate how new this frontier is, right? And using machine learning to make predictions about people's cancer is very, very new, and it's going to take doctors a really long time to learn how to convey this information to patients.
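For a sense of what "training a deep learning model on mammograms" means mechanically, here is a toy sketch: a small convolutional network mapping an image to a risk probability, trained on (image, outcome) pairs. It is illustrative only, with random tensors standing in for real scans and outcomes; it is not the MIT/Mass General model.

```python
import torch
import torch.nn as nn

# A toy image-to-risk model: grayscale scan in, probability of a future
# diagnosis out. Architecture, sizes, and data are all made up.

class RiskNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),       # summarize the whole image
        )
        self.head = nn.Linear(32, 1)       # one logit: future-cancer risk

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = RiskNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

images = torch.randn(8, 1, 128, 128)            # stand-in mammograms
outcomes = torch.randint(0, 2, (8, 1)).float()  # stand-in known outcomes

for step in range(3):                           # a few illustrative steps
    opt.zero_grad()
    loss = loss_fn(model(images), outcomes)
    loss.backward()
    opt.step()

print(torch.sigmoid(model(images[:1])))         # predicted risk for one scan
```

The point Regina makes about pixel distributions lives in that training loop: the model sees tens of thousands of scans with known outcomes, which no individual radiologist ever can.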
So that's where Regina is right now, figuring out how doctors can explain to patients: hey, AI helped us determine your cancer risk. Well, part of the problem is that we don't yet have explainable AI. So it's not just that it's hard to explain it to patients; it's actually a black box. You may remember Sebastian Thrun, the founder of Google X, from earlier in the series. As well as self-driving cars and flying cars, he works on medical diagnostics, and he recognizes the problem.

One of the conundrums of machine learning is that when you open up a deep neural network, you look at, like, hundreds of millions of numbers, but you can't quite understand what's happening. So people are concerned. People look at the network and say, the thing can diagnose cancer; what does it do?

We've talked about the black box problem in AI and how hard it is to trust decisions that can't be explained, but Sebastian is quick to point out that human beings can also be difficult to decipher.

Let's remind ourselves, our doctors are also black boxes. You can't open up the brain of your doctor and ask, what was he or she using for diagnosing cancer?

It's a fair point, and it's one that has also been noted by Siddhartha Mukherjee. He is one of the world's foremost cancer doctors and the Pulitzer Prize-winning author of The Emperor of All Maladies. That's the book that Regina referred to earlier, and Siddhartha has also written extensively about how AI is changing medicine. I've been a huge fan of his work for a long time and actually ambushed him after a talk he gave in order to persuade him to do an interview for this podcast. So, thinking about AI helping diagnose patients, it's worth asking: is the human doctor so very different?

One problem that I think is fascinating is when a patient comes into the hospital.
If you ask a particularly astute physician, that physician can actually describe to you what the most likely journey of that patient will be in the hospital, whether they're likely to stay for twenty-five days, suffer through bacterial sepsis, you know, all from peeking in through the door of an emergency room.

Siddhartha started to wonder how doctors make those lightning-fast calls, and it got him interested in understanding what the brain is doing when a doctor makes a diagnosis.

We actually understand very little about how human beings make diagnoses. I mean, the studies that have been done so far suggest that most people make diagnoses in a kind of recognition sense rather than an algorithmic sense. The classical description of how we make diagnoses was extraordinarily algorithmic; it sort of goes down a series of eliminations. It's not this, it's not that.

Now, when Siddhartha says algorithmic, he doesn't mean using a computer algorithm. He means using rule-based logic: if this, then that, et cetera. But what he learned was that despite what the textbooks say, that's not actually how doctors make a diagnosis.

When you put doctors inside MRI machines and ask the question, how do they make diagnoses? In fact, what lights up is parts of the brain that are much, much more to do with pattern recognition. Here's a rhinoceros, here's not a rhinoceros. Here's an elephant, here's not an elephant. Especially mature doctors make diagnoses based on pattern recognition, and they'll flit around like moths around a flame and ultimately slowly arrive at the target. It's a much more geographical way of thinking rather than linear. They're using a combination of Bayesian, or prior probability, understandings. They're using pattern recognition. They're understanding things about the patient and figuring out what to do.

Hearing Siddhartha speak about doctors, Kara, in terms like prior probability understandings and Bayesian statistics really does make it sound like he's describing AI rather than people. Well, it kind of is.
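Those "prior probability understandings" have a compact mathematical form: start with a prior over candidate diagnoses, multiply in the likelihood of each new finding, and renormalize. A toy sketch, with entirely made-up diseases and probabilities:

```python
# A toy Bayesian diagnostic update. Priors cover a few candidate
# diagnoses and are renormalized as each finding arrives; every number
# here is invented for illustration.

priors = {"flu": 0.05, "pneumonia": 0.01, "common cold": 0.20}

# P(finding | disease), hypothetical likelihoods.
likelihoods = {
    "fever":        {"flu": 0.90, "pneumonia": 0.80, "common cold": 0.20},
    "chest X-ray+": {"flu": 0.05, "pneumonia": 0.90, "common cold": 0.01},
}

def update(posterior, finding):
    """Bayes' rule: posterior proportional to prior times likelihood."""
    post = {d: p * likelihoods[finding][d] for d, p in posterior.items()}
    total = sum(post.values())
    return {d: p / total for d, p in post.items()}

posterior = dict(priors)
for finding in ["fever", "chest X-ray+"]:
    posterior = update(posterior, finding)
    print(finding, {d: round(p, 3) for d, p in posterior.items()})
```

Each finding reshuffles the probabilities, which is roughly the moth-around-the-flame behavior Siddhartha describes, done with arithmetic instead of intuition.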
You know, neural networks are purposely modeled on the human brain. It's not as easy as cause and effect. It's about drawing on a lifetime of experience to make best guesses based on competing information that we have to weigh appropriately in microseconds. It's no easy task. It's funny, because we've warned several times on this series that we shouldn't be surprised when our creations reflect us, and yet it's almost impossible not to be. I feel a sense of uncanny chills when Siddhartha describes a human doctor working like an algorithm. And he wrote about this in The New Yorker, with the headline A.I. Versus M.D., and he made this point, which is that human and machine processes of making diagnoses are converging. And it makes me wonder who's going to have the final word. Well, I asked Sebastian Thrun exactly that question.

We die of cancer a lot. I believe many of those deaths are actually preventable using artificial intelligence. It's amazing how diverse diagnostics you get: when you show a set of dermatologists the same set of images, some will say it's cancer, others will say it's fine.

And Sebastian has a personal interest in the topic.

My family, unfortunately, has a long, long, long history of cancer. My sister passed away last year. My mother passed away at a young age. So one of the questions I've had in my life with me since my mother died is: maybe we should not work on treatment. We should really focus on detection, on diagnostics. Diagnosis of skin cancer doesn't require looking inside your organs. You can just look at the person from the outside. And it turns out we have no symptoms before it becomes dangerous. It sits there for quite a while. It grows below your skin, it spreads, and then it destroys your liver, and then your first symptom might be back pain or a yellow face. Maybe we should just look every single day.

In fact, Sebastian has worked to make it possible for people to check themselves every day.
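A classifier like that is typically built by fine-tuning a network pretrained on ordinary photographs using labeled lesion images. Here is a minimal transfer-learning sketch in that spirit; the backbone choice, the random stand-in data, and the head-only training are assumptions for illustration, not the paper's actual code.

```python
import torch
import torch.nn as nn
from torchvision import models

# Transfer-learning sketch: reuse a general-purpose image backbone and
# retrain only the final layer for benign-vs-malignant. weights=None
# keeps the sketch runnable offline; pass ResNet18_Weights.DEFAULT to
# start from real pretrained features.

backbone = models.resnet18(weights=None)
backbone.fc = nn.Linear(backbone.fc.in_features, 2)   # 2 classes

opt = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)  # head only
loss_fn = nn.CrossEntropyLoss()

photos = torch.randn(4, 3, 224, 224)   # stand-in lesion photos
labels = torch.tensor([0, 1, 0, 1])    # stand-in biopsy outcomes

backbone.train()
for step in range(2):
    opt.zero_grad()
    loss = loss_fn(backbone(photos), labels)
    loss.backward()
    opt.step()

backbone.eval()
with torch.no_grad():
    probs = torch.softmax(backbone(photos[:1]), dim=1)
print("malignant probability:", probs[0, 1].item())
```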
He published a paper in Nature called "Dermatologist-level classification of skin cancer with deep neural networks." What he demonstrated is that a program that runs on an iPhone performs just as well as dermatologists at diagnosing skin cancer. It sounds transformative, but Siddhartha has a very specific concern about this kind of technology entering the mainstream.

Well, overdiagnosis is an important risk. A classic example of that is a lesion in the breast, a spot that is actually not breast cancer, but it's picked up and described as breast cancer. That leads to a biopsy, the biopsy leads to complications, and so forth, and at the end of it, you discover that, you know, you've achieved not very much, except for subjecting a woman to an unpleasant procedure with unpleasant costs. We don't want to catch just early cancer. We want to catch the early cancers that are likely to kill you, not the other ones that are unlikely to become anything. We actually want to be able to reassure patients that they don't need a biopsy.

Regina told Kara how screening enabled by machine learning could reduce unnecessary mastectomies. But according to Siddhartha, we also need to be cautious about overscreening pushing us into unnecessary procedures. And as AI and sensors become more ubiquitous, enabling us to constantly search for illness, there may be psychological implications that we're not fully prepared for.

This is the very Orwellian notion of previvors. It's a word that I first encountered in clinic. It was a woman who had a BRCA1 mutation but in fact did not have any breast cancer. She called herself a previvor of breast cancer. She was a survivor of a disease that she did not yet have. Our culture hasn't reached the place where, you know, we're routinely thinking of ourselves as previvors. But it has reached a place where surveillance is constant. You know, you're moving from colonoscopy to mammogram to PSA test, to medical exam, to retinal exam.
And you can imagine stringing together, with future devices, a culture in which the body is always being hunted, scoured for being a potential locus of future disease, and that will, I think, distort culture fundamentally.

It's a very Orwellian, very scary idea Siddhartha alludes to. Of course, that's George Orwell, whose novel 1984 was prescient about the culture of surveillance that's now blooming around us. But I never thought about surveillance in medical terms before. And who might be surveilling our bodies? One might be a health insurance company, or the government, interested in, you know, who's healthy and who's not healthy.

There was a chance meeting between Siddhartha and Sebastian that got Siddhartha thinking about AI and medicine, but the two have fundamental disagreements on the risks and rewards of surveilling the body.

I love Sid as a person, but I can tell you, any doctor who tells you less data is better for you is irresponsible. If I could give you information on your skin cancer every day, you will live longer than if you just consult a dermatologist every year or two.

But also, the unpredictability of death is part of the human experience.

Our culture would be very different if we walked around with signs on our foreheads which told us the number of days that we had left to live.

What Siddhartha is describing is not some thought experiment. Using AI to predict time of death is fast becoming a reality. But what inputs does it use? And how might knowing when we will die change our culture? Join us after the break.

Doctors are actually very poor at predicting death. If you look at the pattern of how people die, most people don't decline along a predictable path towards their death, so it's often a series of strings that snaps. If you think about the human being being held together like a puppet on many strings, it's not that the puppet slowly crumbles at a predictable pace.
It's that, all of a sudden, three strings collapse and the hand comes dangling down, and medicine tries to prop that piece up, and in doing so, now two more strings get cut, and the foot collapses, and when a certain number collapse, nothing can be done. So it's a fundamental failure of homeostasis that makes death very hard to imagine, to conceive.

And of course there is an emotional component to this, but unlike human doctors, AI doesn't get distracted by emotion. It looks at evidence and historical data to establish patterns.

The algorithms actually do quite well in predicting death. What is it attaching weight to? Is it a combination of things? Is it the fact that someone has a brain metastasis and has a slight rise in some blood value of some sort that predicts that this person is likely to do very badly in the next few days? You know, as you refine it further and further, many subtle things might start coming up that we don't know about, and those will be the most interesting ones. It's not just additive.

That phrase, "it's not just additive," is important to me, Kara, because it connects the dots between what Siddhartha is saying about predicting time of death and what Andy Schwartz was saying about getting better at decoding the human brain. They're both about understanding systems where one plus one doesn't necessarily equal two, where unexpected results emerge from complex systems. Thinking about this makes me physically ill. He's literally talking about predicting when we will die. It's mind-blowing. You know, knowing when you will die could change how we choose to live our daily lives. It's one of the things I find most disturbing in this whole series. So much of how we live and how we aspire and what we hope for is connected to our uncertainty about when we're going to die. And AI can change all of that.
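That "one plus one doesn't equal two" point has a crisp toy demonstration: when an outcome depends on how factors combine, a purely additive model sees nothing, and a single interaction term changes everything. A sketch with made-up binary risk factors:

```python
import numpy as np

# An outcome driven by how two invented factors COMBINE (exclusive-or
# here) is invisible to an additive model but trivially captured once
# an interaction term is included.

rng = np.random.default_rng(1)
a = rng.integers(0, 2, 1000).astype(float)     # factor A present?
b = rng.integers(0, 2, 1000).astype(float)     # factor B present?
outcome = np.logical_xor(a, b).astype(float)   # risk when exactly one fires

def r_squared(columns, y):
    """Least-squares fit, then fraction of variance explained."""
    X = np.column_stack([np.ones_like(y)] + columns)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return 1 - resid.var() / y.var()

print("additive model (a, b):         R^2 =",
      round(r_squared([a, b], outcome), 3))        # roughly 0
print("with interaction (a, b, a*b):  R^2 =",
      round(r_squared([a, b, a * b], outcome), 3)) # 1.0
```

Stacks of weak, interacting signals like this are exactly what the death-prediction and brain-decoding models are fishing for, and why simple weighted sums fall short.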
It's part of this more global line of thinking, which is that AI kind of takes the fun out of things. Well, I mean, a big part of Western culture is the Bible and the fruit of that forbidden tree, and in Paradise Lost there's this warning to "know to know no more," and there's always been this idea in history that there's something magic about not knowing. Yeah, you know, Milton aside, okay, now you go. We can start to think about what this might mean more practically. Lifespan data is directly linked to life insurance policies. You know, this would significantly change how much people pay. You also think about personal injury law, which takes into account how long somebody is going to live, so you can determine how much money they should get for loss of quality of life. These are things that would be greatly impacted by people knowing when they're going to die. Right.

Siddhartha calls death the ultimate black box, which I think is actually the perfect description of a black box. We know we will die; we just don't know how, and we don't know when. And with AI, it's similar. We know we'll get a result, an endpoint, but we don't know exactly how our input factors are combined to get there. Ironically, although AI itself is a black box, it's helping us unpack other black boxes, death being the ultimate one, the brain being another. And then there's the human genome, the unique pattern of our DNA that makes each of us us. And we've mapped the genome, but there are a lot of concerns about decoding it, that it might be a sort of Pandora's box. Well, there's that story about the scientist in China who edited the genes of two babies using CRISPR, and all the ethical concerns of creating genetically edited babies. Exactly. So I spoke with Andy Schwartz about actual progress being made in decoding the human genome.
If you go back to the late nineteen nineties, 580 00:34:03,000 --> 00:34:05,960 Speaker 1: the race was on to discover the human genome, 581 00:34:06,640 --> 00:34:09,520 Speaker 1: and the sound bites were: as soon as we understand 582 00:34:09,680 --> 00:34:13,160 Speaker 1: all the genes, we can cure disease. So, for instance, 583 00:34:13,440 --> 00:34:16,000 Speaker 1: there was a breast cancer gene, and there was an 584 00:34:16,000 --> 00:34:19,200 Speaker 1: Alzheimer's gene, and if we just knew what those genes were, 585 00:34:19,680 --> 00:34:23,200 Speaker 1: we'd be able to eradicate these diseases. Well, it's been 586 00:34:23,440 --> 00:34:29,959 Speaker 1: twenty years now and we're just beginning, perhaps, to get 587 00:34:30,080 --> 00:34:33,799 Speaker 1: some sort of genome-based therapies that might address some 588 00:34:33,880 --> 00:34:36,840 Speaker 1: of these. And what we found is that there's no 589 00:34:36,960 --> 00:34:40,640 Speaker 1: simple cause and effect. Very rarely are there simple gene 590 00:34:40,640 --> 00:34:46,400 Speaker 1: defects correlated with disease. Rather, these diseases have hundreds of genetic 591 00:34:46,760 --> 00:34:50,960 Speaker 1: bases, and each of those is relatively weak, but combined together 592 00:34:51,040 --> 00:34:55,200 Speaker 1: they generate these diseases. And so it becomes a computational 593 00:34:55,280 --> 00:34:58,040 Speaker 1: problem, and we start looking again at this as a 594 00:34:58,080 --> 00:35:02,520 Speaker 1: complex system where causality is no longer clear. And these 595 00:35:02,560 --> 00:35:05,520 Speaker 1: are the complex computational problems we're getting better and better 596 00:35:05,600 --> 00:35:09,600 Speaker 1: at solving. Siddhartha himself is interested in exactly this area, 597 00:35:09,800 --> 00:35:13,560 Speaker 1: the confluence of genetics and computation. In fact, in twenty 598 00:35:13,680 --> 00:35:17,000 Speaker 1: eighteen he gave a talk at Vanderbilt University called From 599 00:35:17,120 --> 00:35:21,759 Speaker 1: Artificial Intelligence to Genomic Intelligence, and it's an area where 600 00:35:21,760 --> 00:35:25,840 Speaker 1: we're making rapid progress. The first papers are just starting 601 00:35:25,840 --> 00:35:29,040 Speaker 1: to appear; they're in preprint. One of them is extraordinarily interesting. 602 00:35:29,600 --> 00:35:33,080 Speaker 1: It appears to be able to predict height based on 603 00:35:33,239 --> 00:35:36,680 Speaker 1: an algorithm and genomic information. Tall parents tend to 604 00:35:36,719 --> 00:35:39,120 Speaker 1: produce tall children, and short parents tend to produce short children, 605 00:35:39,239 --> 00:35:41,840 Speaker 1: but we did not have ways to predict, based on 606 00:35:41,880 --> 00:35:44,440 Speaker 1: genetic information, what your actual height was going to be. 607 00:35:44,960 --> 00:35:47,160 Speaker 1: The question becomes, well, how do you take the genome 608 00:35:47,200 --> 00:35:49,919 Speaker 1: and out pops height from it? If you can do that, 609 00:35:49,920 --> 00:35:52,480 Speaker 1: that means you can take a fetal genome in utero 610 00:35:52,840 --> 00:35:55,440 Speaker 1: and predict this person's future height. You know, based on 611 00:35:55,520 --> 00:35:58,200 Speaker 1: these first few papers that I read in this arena, 612 00:35:58,480 --> 00:36:01,440 Speaker 1: you require deep learning to do this.
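The simplest formalization of "hundreds of genetic bases, each relatively weak, combined together" is an additive polygenic risk score: a weighted sum of risk-allele counts across the genome. Here is a minimal sketch, with invented genotypes and effect sizes:

```python
# Illustrative additive polygenic risk score: many variants, each with a
# tiny effect, summed into one number. All values are invented.
import numpy as np

rng = np.random.default_rng(0)
n_variants = 500

# Genotype: count of risk alleles (0, 1, or 2) at each variant.
genotype = rng.integers(0, 3, size=n_variants)

# Per-variant effect sizes: individually negligible.
effect_sizes = rng.normal(loc=0.0, scale=0.01, size=n_variants)

# The score is just a weighted sum across the genome.
risk_score = float(genotype @ effect_sizes)
print(f"Additive polygenic risk score: {risk_score:+.3f}")
```

No single variant moves the score meaningfully; the signal only exists in the sum, which is exactly why this became a computational problem.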
I wanted to 613 00:36:01,520 --> 00:36:05,960 Speaker 1: understand why this prediction of height requires deep learning algorithms, 614 00:36:06,360 --> 00:36:09,800 Speaker 1: not simple ones like Andy used to interpret Jan's brain waves. 615 00:36:10,360 --> 00:36:13,040 Speaker 1: It's not just additive, where you can just add up across 616 00:36:13,239 --> 00:36:16,360 Speaker 1: multiple variations in the genome and arrive at a risk score. 617 00:36:16,760 --> 00:36:19,360 Speaker 1: It's that there are interactions between genes that have to 618 00:36:19,360 --> 00:36:23,239 Speaker 1: be captured. Again, these are early days for artificial intelligence 619 00:36:23,560 --> 00:36:26,120 Speaker 1: being unleashed on genomes, but it seems to me that 620 00:36:26,160 --> 00:36:30,479 Speaker 1: complex problems of genetic architecture will soon be predictable using 621 00:36:30,520 --> 00:36:34,920 Speaker 1: these kinds of algorithms. And that ability to predict raises 622 00:36:35,040 --> 00:36:37,680 Speaker 1: huge questions for all of us. If you want to 623 00:36:37,680 --> 00:36:39,799 Speaker 1: know the height of your unborn child, or you want 624 00:36:39,800 --> 00:36:42,640 Speaker 1: to know the risk of dyslexia, those questions are almost 625 00:36:42,640 --> 00:36:47,160 Speaker 1: certain to lead to extraordinarily acrimonious public conversations about 626 00:36:47,200 --> 00:36:49,440 Speaker 1: what should be done and what shouldn't be done in 627 00:36:49,560 --> 00:36:51,839 Speaker 1: terms of accessing the data, who's to store the data, 628 00:36:52,000 --> 00:36:54,200 Speaker 1: how much privacy we should have about it, and how 629 00:36:54,280 --> 00:36:56,840 Speaker 1: much it will distort human culture to have these pieces 630 00:36:56,840 --> 00:37:00,160 Speaker 1: of knowledge. Um, so if you think that, clinically, knowing 631 00:37:00,200 --> 00:37:03,160 Speaker 1: when you're going to die is going to distort culture, 632 00:37:03,680 --> 00:37:06,000 Speaker 1: then knowing how tall your child is going to be 633 00:37:06,080 --> 00:37:08,840 Speaker 1: in the future will also distort human culture. We haven't 634 00:37:08,920 --> 00:37:11,279 Speaker 1: ever lived in a place or a space or a 635 00:37:11,280 --> 00:37:14,040 Speaker 1: time when that knowledge has been predictable from a fetus. 636 00:37:14,760 --> 00:37:18,520 Speaker 1: Artificial intelligence is giving us incredible power to see into 637 00:37:18,560 --> 00:37:22,319 Speaker 1: the future, to ask and answer questions about the generations 638 00:37:22,360 --> 00:37:25,600 Speaker 1: to come. But it is up to us, our generation, 639 00:37:25,920 --> 00:37:28,600 Speaker 1: to decide how we want to use this awesome power. 640 00:37:29,320 --> 00:37:32,359 Speaker 1: But one thing that artificial neural networks can't do is 641 00:37:32,520 --> 00:37:38,279 Speaker 1: define principles. They can only work on classifying things that 642 00:37:38,320 --> 00:37:41,120 Speaker 1: we tell them to classify. There is still a human 643 00:37:41,200 --> 00:37:44,319 Speaker 1: telling the artificial neural network what it should be doing. 644 00:37:45,000 --> 00:37:49,000 Speaker 1: There is something very fundamental about the human brain, a 645 00:37:49,080 --> 00:37:52,600 Speaker 1: scientist's brain, a doctor's brain, an artist's brain, that asks 646 00:37:52,719 --> 00:37:55,759 Speaker 1: questions in a fundamentally different manner: the why question.
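Siddhartha's caveat at the top of this exchange, that it's not just additive because gene-gene interactions have to be captured, is easiest to see in a toy case. In the invented example below, risk appears only when two variants are present together: a purely additive score cannot represent that, while a model with an explicit gene-by-gene interaction term can. A deep network, as in the height papers he mentions, would learn such interaction features automatically rather than having them hand-built.

```python
# Toy illustration of "it's not just additive": risk exists only when both
# variants co-occur. An additive score misses this; an interaction term
# captures it. All values are invented.
import numpy as np

# Genotypes for four individuals at two variants (0 = absent, 1 = present).
G = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

# True risk: harmful only in combination.
true_risk = np.array([0.0, 0.0, 0.0, 1.0])

# A purely additive score spreads the effect across both variants.
additive_pred = G @ np.array([0.5, 0.5])                 # -> [0, 0.5, 0.5, 1]

# Add a gene-x-gene product feature and put all the weight on it.
features = np.column_stack([G, G[:, 0] * G[:, 1]])
interaction_pred = features @ np.array([0.0, 0.0, 1.0])  # -> [0, 0, 0, 1]

print("true risk:            ", true_risk)
print("additive prediction:  ", additive_pred)
print("with interaction term:", interaction_pred)
```

The interaction column is hand-built here; the practical appeal of deep learning on genomes is that the network can discover which of the enormous number of possible interactions actually matter.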
Why 647 00:37:55,800 --> 00:37:59,120 Speaker 1: did this happen in this person at this time? Why 648 00:37:59,160 --> 00:38:01,120 Speaker 1: does the melanoma appear in the first place? What is 649 00:38:01,120 --> 00:38:04,440 Speaker 1: the molecular basis of that appearance? The most interesting mysteries 650 00:38:04,440 --> 00:38:07,160 Speaker 1: of medicine remain mysteries that have to do with the why. 651 00:38:07,600 --> 00:38:11,040 Speaker 1: And despite being at the absolute cutting edge of medical research, 652 00:38:11,440 --> 00:38:15,600 Speaker 1: Siddhartha's most important guiding principle was written in ancient Greek 653 00:38:15,960 --> 00:38:20,080 Speaker 1: over two thousand years ago. Remember, the Hippocratic oath begins: first, 654 00:38:20,160 --> 00:38:23,200 Speaker 1: do no harm. It's maybe the single profession where the 655 00:38:23,239 --> 00:38:25,480 Speaker 1: oath of the profession is in the negative. And this 656 00:38:25,560 --> 00:38:28,840 Speaker 1: is for a reason. It's for a profound reason in medicine, 657 00:38:28,920 --> 00:38:32,040 Speaker 1: because we're intervening on bodies, because we're intervening on homeostasis, 658 00:38:32,040 --> 00:38:36,000 Speaker 1: because we're intervening on cultures. Effectively, the capacity to do 659 00:38:36,160 --> 00:38:39,799 Speaker 1: harm arises very quickly, and so the first, do no 660 00:38:39,880 --> 00:38:42,760 Speaker 1: harm injunction in the Hippocratic oath is an important 661 00:38:42,760 --> 00:38:45,600 Speaker 1: thing to keep in my own mind. Um, you know, 662 00:38:45,680 --> 00:38:48,120 Speaker 1: what are the harms that arise if I were to 663 00:38:48,160 --> 00:38:50,960 Speaker 1: start knowing my risks of future disease, not just what 664 00:38:51,040 --> 00:38:54,440 Speaker 1: advantages would I get in society? And this battle is 665 00:38:54,480 --> 00:38:56,560 Speaker 1: happening in my mind, and I assume in the minds of 666 00:38:56,600 --> 00:38:59,480 Speaker 1: virtually every doctor, as we move forward into this 667 00:39:00,400 --> 00:39:05,880 Speaker 1: beautiful and perilous future. In a world where our choices 668 00:39:05,920 --> 00:39:08,839 Speaker 1: can create new beauty but also a new peril, it's 669 00:39:08,880 --> 00:39:11,359 Speaker 1: important that we move into that future with real care. 670 00:39:11,880 --> 00:39:15,240 Speaker 1: And it's something Siddhartha has thought a lot about personally, because, 671 00:39:15,400 --> 00:39:19,160 Speaker 1: like Sebastian Thrun, he has a family history of heritable conditions. 672 00:39:19,920 --> 00:39:24,439 Speaker 1: The risk is of schizophrenia and bipolar disorder, and 673 00:39:24,600 --> 00:39:28,160 Speaker 1: right now the algorithms to predict this still don't exist. 674 00:39:29,080 --> 00:39:32,000 Speaker 1: As the project of sequencing lots of genomes and asking 675 00:39:32,040 --> 00:39:35,239 Speaker 1: what diseases people have matures, this data set will become 676 00:39:35,280 --> 00:39:38,839 Speaker 1: available, maybe five or ten years from now. I will be 677 00:39:39,320 --> 00:39:42,120 Speaker 1: past the period, I suppose, where that will make a difference, 678 00:39:42,160 --> 00:39:44,600 Speaker 1: but to my children and my grandchildren it might make 679 00:39:44,600 --> 00:39:47,480 Speaker 1: a difference, and they'll have to make that decision.
I 680 00:39:47,480 --> 00:39:50,719 Speaker 1: will advise them individually, and it will depend on a humanistic 681 00:39:50,800 --> 00:39:54,680 Speaker 1: understanding of what an individual's desire to understand their own 682 00:39:54,760 --> 00:40:00,480 Speaker 1: risk is. There's no algorithm that predicts that understanding. As 683 00:40:00,520 --> 00:40:03,400 Speaker 1: AI advances, we're being faced with more and more urgent 684 00:40:03,520 --> 00:40:06,960 Speaker 1: ethical choices. This, in turn, may put a new emphasis 685 00:40:06,960 --> 00:40:10,719 Speaker 1: on the humanities, or, as Kai-Fu Lee suggested, place a 686 00:40:10,760 --> 00:40:15,520 Speaker 1: new premium on personal attention, human interaction, and emotional care. 687 00:40:17,320 --> 00:40:21,560 Speaker 1: Once we give up some of the diagnostic pattern recognition 688 00:40:21,600 --> 00:40:24,279 Speaker 1: material to machines, it will be time to play. It 689 00:40:24,320 --> 00:40:26,719 Speaker 1: will be the time to play in the arena of 690 00:40:27,000 --> 00:40:30,840 Speaker 1: human therapeutics, human biology, the complexity of the human interaction, 691 00:40:30,880 --> 00:40:34,239 Speaker 1: the art of medicine. My hope is that medicine, being 692 00:40:34,239 --> 00:40:38,560 Speaker 1: more playful, will become more compassionate, more able to take 693 00:40:38,600 --> 00:40:43,680 Speaker 1: into account individuals and their individual destinies rather than bucketing 694 00:40:43,719 --> 00:40:47,200 Speaker 1: people in big categories. It means having more time to 695 00:40:47,280 --> 00:40:51,080 Speaker 1: spend with humans. You know, we are so constrained by 696 00:40:51,160 --> 00:40:55,520 Speaker 1: time that even compassion gets three minutes. We won't become 697 00:40:56,000 --> 00:40:59,800 Speaker 1: more robotic, we'll become less robotic as the robots enter 698 00:41:00,080 --> 00:41:04,240 Speaker 1: our world. What Siddhartha describes is the holy grail 699 00:41:04,400 --> 00:41:07,560 Speaker 1: of the AI revolution. Could it allow us to be 700 00:41:07,680 --> 00:41:12,200 Speaker 1: more human, to be better doctors, more fulfilled workers, and 701 00:41:12,360 --> 00:41:15,759 Speaker 1: greater artists? Could it take routine work out of our 702 00:41:15,800 --> 00:41:18,560 Speaker 1: hands and allow us to take better care of each other? 703 00:41:20,040 --> 00:41:23,040 Speaker 1: It's a compelling vision, but as always, it has a 704 00:41:23,160 --> 00:41:26,520 Speaker 1: dark side. While most doctors are guided by their Hippocratic 705 00:41:26,560 --> 00:41:30,319 Speaker 1: oath, do no harm, there's no guarantee that new technologies 706 00:41:30,360 --> 00:41:33,360 Speaker 1: will stay in the right hands. The line between healing 707 00:41:33,480 --> 00:41:37,400 Speaker 1: and upgrading our bodies is thin and contested, and as 708 00:41:37,440 --> 00:41:41,359 Speaker 1: AI improves, we can begin to translate desires directly from 709 00:41:41,400 --> 00:41:45,040 Speaker 1: brain activity, modify the physical traits of our children through 710 00:41:45,120 --> 00:41:49,960 Speaker 1: gene editing, and accurately predict when we will die. In 711 00:41:50,000 --> 00:41:52,400 Speaker 1: the next episode, we ask: what does all of this 712 00:41:52,600 --> 00:41:56,239 Speaker 1: mean for our future as a species? We speak to 713 00:41:56,239 --> 00:41:59,400 Speaker 1: the world's leading thinker on these questions.
Yuval Noah 714 00:41:59,640 --> 00:42:04,520 Speaker 1: Harari, author of Sapiens and Homo Deus. I'm Oz Woloshyn. See 715 00:42:04,560 --> 00:42:19,839 Speaker 1: you next time. Sleepwalkers is a production of I Heart 716 00:42:19,920 --> 00:42:24,440 Speaker 1: Radio and Unusual Productions. For the latest AI news, live interviews, 717 00:42:24,480 --> 00:42:27,520 Speaker 1: and behind-the-scenes footage, find us on Instagram at 718 00:42:27,520 --> 00:42:34,160 Speaker 1: Sleepwalkers Podcast or at Sleepwalkers Podcast dot com. Sleepwalkers is 719 00:42:34,160 --> 00:42:37,520 Speaker 1: hosted by me, Oz Woloshyn, and co-hosted by me, Kara Price, 720 00:42:37,680 --> 00:42:40,600 Speaker 1: and produced by Julian Weller, with help from Jacobo Penzo 721 00:42:40,760 --> 00:42:44,320 Speaker 1: and Taylor Chicoine, and mixing by Tristan McNeil and Julian Weller. 722 00:42:44,600 --> 00:42:48,320 Speaker 1: Our story editor is Matthew Riddle. Recording assistance this episode 723 00:42:48,360 --> 00:42:51,240 Speaker 1: from Joe and Luna, Sabrina Boden, and Joseph Friedman. 724 00:42:51,440 --> 00:42:55,600 Speaker 1: Sleepwalkers is executive produced by me, Oz Woloshyn, and Mangesh Hattikudur. 725 00:42:55,760 --> 00:42:57,799 Speaker 1: For more podcasts from I Heart Radio, visit the I 726 00:42:57,880 --> 00:43:00,799 Speaker 1: Heart Radio app, Apple Podcasts, or wherever you listen to 727 00:43:00,840 --> 00:43:01,680 Speaker 1: your favorite shows.