WEBVTT - Algorithm, M.D.

0:00:04.600 --> 0:00:08.320
<v Speaker 1>Sleepwalkers is a production of iHeartRadio and Unusual Productions.

0:00:12.760 --> 0:00:16.840
<v Speaker 1>Jan had become paralyzed through a disease process such that

0:00:17.400 --> 0:00:21.520
<v Speaker 1>she couldn't move anything below her neck, and she volunteered

0:00:21.520 --> 0:00:25.759
<v Speaker 1>to have the surgical procedure in which we implanted electrodes

0:00:25.760 --> 0:00:29.240
<v Speaker 1>in her head and we were able to decode the

0:00:29.280 --> 0:00:31.520
<v Speaker 1>signals from her brain and she was able to move

0:00:31.600 --> 0:00:37.120
<v Speaker 1>a high-performance prosthetic arm and hand. That's Andy Schwartz,

0:00:37.200 --> 0:00:41.080
<v Speaker 1>a professor of neurobiology at the University of Pittsburgh. He

0:00:41.080 --> 0:00:43.720
<v Speaker 1>helped build the technology that allowed a patient called Jan

0:00:43.840 --> 0:00:48.199
<v Speaker 1>Scheuermann to control a robotic arm with her mind. In

0:00:48.320 --> 0:00:50.479
<v Speaker 1>order to do that, Andy's team needed to give the

0:00:50.640 --> 0:00:53.240
<v Speaker 1>arm a way to know where Jan wanted it to move.

0:00:53.760 --> 0:00:56.440
<v Speaker 1>They had to read signals directly from her brain and

0:00:56.480 --> 0:00:59.120
<v Speaker 1>teach a computer to understand them. So they built a

0:00:59.160 --> 0:01:03.120
<v Speaker 1>brain-computer interface, a BCI. We implanted electrodes

0:01:03.120 --> 0:01:05.400
<v Speaker 1>in her head. You have to put on these bulky

0:01:05.440 --> 0:01:08.880
<v Speaker 1>connectors with thick cables going to a bank of amplifiers

0:01:08.880 --> 0:01:11.400
<v Speaker 1>and computers. It sounds like something out of a science

0:01:11.400 --> 0:01:15.440
<v Speaker 1>fiction film. But once Andy could see Jan's neurons firing,

0:01:15.920 --> 0:01:20.240
<v Speaker 1>reading her intentions was simpler than you might think. What

0:01:20.360 --> 0:01:23.920
<v Speaker 1>we found is that the rate that these neurons fire

0:01:24.160 --> 0:01:27.120
<v Speaker 1>is related to the direction that the arm moves. When

0:01:27.160 --> 0:01:29.280
<v Speaker 1>you add their signals together, you can get a very

0:01:29.360 --> 0:01:34.840
<v Speaker 1>precise representation of that movement. It's a very very simple algorithm,

0:01:34.880 --> 0:01:37.920
<v Speaker 1>and it's like listening to a Geiger counter um and

0:01:38.120 --> 0:01:41.360
<v Speaker 1>each click of the Geiger counter is the same, but as

0:01:41.400 --> 0:01:45.000
<v Speaker 1>you get closer to a radioactive source, those clicks come

0:01:45.040 --> 0:01:50.040
<v Speaker 1>closer together. Listening for those clicks was the breakthrough necessary

0:01:50.040 --> 0:01:53.240
<v Speaker 1>to translate Jan's thoughts into signals that the robotic arm

0:01:53.240 --> 0:01:56.760
<v Speaker 1>could understand, and this meant that Jan could move again.
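
NOTE
To make "a very, very simple algorithm" concrete, here is a minimal population-vector sketch in Python. Everything in it (the neuron count, the cosine tuning, the noise levels) is an illustrative assumption, not Andy Schwartz's actual code: each neuron has a preferred direction, and summing those directions weighted by firing rate recovers the intended movement.

```python
import numpy as np

# Illustrative population-vector decoder: each neuron fires fastest
# for movement along its "preferred direction"; weighting those
# directions by firing rate and summing estimates the intended
# movement. All numbers below are made up for illustration.
rng = np.random.default_rng(0)
n_neurons = 100

preferred = rng.normal(size=(n_neurons, 2))  # random 2D directions
preferred /= np.linalg.norm(preferred, axis=1, keepdims=True)

def firing_rates(intended, baseline=10.0, gain=8.0):
    """Cosine tuning: rate rises as the intended direction aligns
    with a neuron's preferred direction (plus noise)."""
    rates = baseline + gain * preferred @ intended
    return np.maximum(rates + rng.normal(scale=2.0, size=n_neurons), 0.0)

def decode(rates, baseline=10.0):
    """Sum preferred directions weighted by rate above baseline."""
    pv = ((rates - baseline)[:, None] * preferred).sum(axis=0)
    return pv / np.linalg.norm(pv)

intended = np.array([1.0, 0.0])        # "move right"
print(decode(firing_rates(intended)))  # approximately [1, 0]
```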

0:01:57.320 --> 0:02:01.480
<v Speaker 1>She could reach out and touch her husband, and Andy's

0:02:01.520 --> 0:02:04.120
<v Speaker 1>team filmed Jan using the arm to complete a more

0:02:04.160 --> 0:02:08.040
<v Speaker 1>playful goal that she'd set to feed herself chocolate for

0:02:08.080 --> 0:02:16.280
<v Speaker 1>the first time in ten years. One small nibble for a woman, one

0:02:16.440 --> 0:02:22.560
<v Speaker 1>giant bite for BCI. Jan was able to do that

0:02:22.880 --> 0:02:25.440
<v Speaker 1>with the robot, and not only that, but she made

0:02:25.480 --> 0:02:29.200
<v Speaker 1>graceful and beautiful movements. That's what really blew me away.

0:02:29.760 --> 0:02:32.560
<v Speaker 1>It looks much like a real arm and hand, and

0:02:33.880 --> 0:02:37.720
<v Speaker 1>for someone who studies movement, it was really quite beautiful.

0:02:40.040 --> 0:02:43.120
<v Speaker 1>Andy is a scientist and a researcher not given to

0:02:43.160 --> 0:02:46.680
<v Speaker 1>being sentimental, but the way he talks about Jan moving

0:02:46.720 --> 0:02:50.040
<v Speaker 1>her robotic arm, it's like describing a dance, a ballet.

0:02:50.600 --> 0:02:53.160
<v Speaker 1>It's one of the most inspiring examples of humans and

0:02:53.200 --> 0:02:56.640
<v Speaker 1>machines working together in concert that we've come across in

0:02:56.680 --> 0:03:00.639
<v Speaker 1>all of our reporting for Sleepwalkers. But it also raises

0:03:00.680 --> 0:03:04.239
<v Speaker 1>profound questions about the future of our health, our bodies,

0:03:04.280 --> 0:03:08.480
<v Speaker 1>and our society. What are the implications positive and negative?

0:03:08.919 --> 0:03:12.079
<v Speaker 1>As AI makes us ever more able to decode complex

0:03:12.120 --> 0:03:16.240
<v Speaker 1>systems like the brain or the human genome, how is

0:03:16.280 --> 0:03:20.000
<v Speaker 1>AI poised to change the world of medicine? I'm Oz

0:03:20.040 --> 0:03:36.640
<v Speaker 1>Woloshyn, and this is Sleepwalkers. So, Kara, I found Andy's

0:03:36.640 --> 0:03:41.280
<v Speaker 1>story completely mind blowing, no pun intended. You know it

0:03:41.440 --> 0:03:44.720
<v Speaker 1>shakes one of the last remaining mysteries. For the most part,

0:03:44.800 --> 0:03:47.880
<v Speaker 1>neuroscience is still very much a black box problem. We

0:03:47.920 --> 0:03:51.440
<v Speaker 1>know what's happening in the brain, but neuroscientists can't always

0:03:51.640 --> 0:03:54.240
<v Speaker 1>know why and how, which is a lot like the

0:03:54.240 --> 0:03:57.560
<v Speaker 1>problem of black box AI. We are moving quickly into

0:03:57.600 --> 0:03:59.840
<v Speaker 1>a world where sensors can read us better and better.

0:04:00.040 --> 0:04:03.080
<v Speaker 1>You know, they can read pupil dilation, carbon dioxide in the

0:04:03.160 --> 0:04:06.000
<v Speaker 1>breath to understand people's emotions. Yeah, this is the stuff

0:04:06.040 --> 0:04:08.960
<v Speaker 1>we talked about in episode four, around using AI to

0:04:09.080 --> 0:04:12.240
<v Speaker 1>better read biometrics with Poppy Crum. The difference in

0:04:12.280 --> 0:04:14.880
<v Speaker 1>this case is that Andy isn't monitoring the outside of

0:04:14.880 --> 0:04:18.760
<v Speaker 1>our bodies, so the privacy concern is less. To read

0:04:18.839 --> 0:04:21.800
<v Speaker 1>Jan's brain, he had to drill into her skull and

0:04:21.880 --> 0:04:25.200
<v Speaker 1>place electrodes onto the surface of her brain and then

0:04:25.200 --> 0:04:28.200
<v Speaker 1>connect them to a computer, so that's unlikely to creep

0:04:28.279 --> 0:04:30.680
<v Speaker 1>up on you. Google is not going to be doing

0:04:30.720 --> 0:04:34.919
<v Speaker 1>that to get geolocation, not yet. That said, Andy

0:04:35.040 --> 0:04:37.159
<v Speaker 1>told me one of his big priorities is trying

0:04:37.200 --> 0:04:40.200
<v Speaker 1>to figure out how to achieve the same effects without

0:04:40.279 --> 0:04:43.440
<v Speaker 1>needing invasive surgery, so that people can use this technology

0:04:43.520 --> 0:04:46.839
<v Speaker 1>at home. Talking about reading the brain, one cutting edge

0:04:46.839 --> 0:04:50.520
<v Speaker 1>application for AI is to restore language. A few months ago,

0:04:50.560 --> 0:04:52.400
<v Speaker 1>I was actually reading this paper in Nature, and I'm

0:04:52.400 --> 0:04:56.040
<v Speaker 1>not sure all podcast listeners read Nature. Well, thanks, Kara,

0:04:56.080 --> 0:04:57.880
<v Speaker 1>you do the hard work so that we don't have to.

0:04:58.000 --> 0:05:01.320
<v Speaker 1>I guess that's why they pay me the big bitcoin, um.

0:05:01.320 --> 0:05:03.800
<v Speaker 1>But you know, basically this article was about decoding the

0:05:03.880 --> 0:05:07.919
<v Speaker 1>human brain to produce language based on how the brain

0:05:07.960 --> 0:05:11.440
<v Speaker 1>tells the mouth to move well. Funny enough, this is

0:05:11.480 --> 0:05:15.080
<v Speaker 1>something I spoke to Andy about a few months before

0:05:15.440 --> 0:05:18.159
<v Speaker 1>your paper in Nature was published, Kara, and he was

0:05:18.160 --> 0:05:21.559
<v Speaker 1>talking about exactly this. So if you record from motor

0:05:21.640 --> 0:05:26.280
<v Speaker 1>areas associated with producing language, you can start to recognize

0:05:26.360 --> 0:05:30.280
<v Speaker 1>certain words and phrases even. I think that's realizable in

0:05:30.279 --> 0:05:33.240
<v Speaker 1>the near term. What Andy is saying is that to

0:05:33.320 --> 0:05:36.520
<v Speaker 1>create spoken words directly from the brain, we don't actually

0:05:36.600 --> 0:05:38.800
<v Speaker 1>need to read thoughts. We can just look at the

0:05:38.880 --> 0:05:42.440
<v Speaker 1>last step, the moment when thought becomes action as neurons

0:05:42.480 --> 0:05:45.599
<v Speaker 1>fire to move our tongue, our lips, our jaw to

0:05:45.680 --> 0:05:49.600
<v Speaker 1>create sound. And just like with Jan, an algorithm can

0:05:49.680 --> 0:05:53.039
<v Speaker 1>listen to those neurons and allow thoughts to become actions.
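
NOTE
As a toy illustration of that last step (not the actual system Andy describes), one could treat a window of firing rates across speech-motor electrodes as a feature vector and train an ordinary classifier to label it as a word. The channel count, word list, and data below are all invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
words = ["yes", "no", "water"]  # hypothetical vocabulary
n_channels = 32                 # hypothetical electrode count

# Fake dataset: each word gets a characteristic pattern of firing
# rates across channels, plus trial-to-trial noise.
templates = rng.normal(size=(len(words), n_channels))
X = np.vstack([t + rng.normal(scale=0.5, size=(50, n_channels))
               for t in templates])
y = np.repeat(words, 50)

clf = LogisticRegression(max_iter=1000).fit(X, y)

# A new trial resembling "water" should decode as "water".
trial = templates[2] + rng.normal(scale=0.5, size=n_channels)
print(clf.predict([trial])[0])
```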

0:05:53.839 --> 0:05:57.520
<v Speaker 1>And this could transform lives. So if you could start

0:05:57.520 --> 0:06:02.560
<v Speaker 1>to recognize words and language from brain activity, that would

0:06:02.600 --> 0:06:05.800
<v Speaker 1>be very helpful for people who are locked in with

0:06:05.880 --> 0:06:08.880
<v Speaker 1>ALS and can't communicate. We talked in the

0:06:08.960 --> 0:06:12.240
<v Speaker 1>last episode about how many technological breakthroughs have come out

0:06:12.279 --> 0:06:15.360
<v Speaker 1>of DARPA. That's the branch of the Defense Department charged

0:06:15.360 --> 0:06:20.920
<v Speaker 1>with inventing technological surprises. Well, neural prosthetics, or robotic limbs

0:06:20.960 --> 0:06:23.520
<v Speaker 1>controlled by the mind have been an area of heavy

0:06:23.560 --> 0:06:26.880
<v Speaker 1>investment for the agency. You may remember Gil Pratt from

0:06:26.880 --> 0:06:29.400
<v Speaker 1>earlier in the series. He's now the CEO of the

0:06:29.440 --> 0:06:33.919
<v Speaker 1>Toyota Research Institute, but previously he was at DARPA where

0:06:33.960 --> 0:06:37.360
<v Speaker 1>he worked with none other than Andy Schwartz. So he

0:06:37.440 --> 0:06:39.680
<v Speaker 1>was involved in a project at DARPA which was called

0:06:39.720 --> 0:06:44.680
<v Speaker 1>Revolutionizing Prosthetics. And the project that I started was to

0:06:45.040 --> 0:06:48.479
<v Speaker 1>see if we could actually help some of the experimental

0:06:48.520 --> 0:06:52.640
<v Speaker 1>patients that he had to perform even better. So Kara

0:06:52.760 --> 0:06:55.479
<v Speaker 1>I was interviewing Gil Pratt about his work at Toyota

0:06:55.600 --> 0:06:58.239
<v Speaker 1>and on self driving cars, and we got talking about

0:06:58.279 --> 0:07:01.320
<v Speaker 1>his interest in these human-machine partnerships, and I said,

0:07:01.360 --> 0:07:04.920
<v Speaker 1>let me tell you a story about a scientist called

0:07:04.960 --> 0:07:08.880
<v Speaker 1>Andy Schwartz and his patient Jan. And Gil was like, um, yeah,

0:07:08.960 --> 0:07:13.440
<v Speaker 1>I worked on that. Yeah, no, no no, no, literally, it's funny.

0:07:13.480 --> 0:07:17.040
<v Speaker 1>And it also shows the long arm of DARPA, pun intended.

0:07:17.120 --> 0:07:21.840
<v Speaker 1>Again, that's far away from military applications. You know, DARPA's

0:07:21.840 --> 0:07:25.280
<v Speaker 1>interest in prosthetics is actually because of veterans, many

0:07:25.360 --> 0:07:29.080
<v Speaker 1>of whom lose limbs on the battlefield, which highlights again

0:07:29.120 --> 0:07:32.280
<v Speaker 1>the dual use nature of so much innovation. Right, um

0:07:32.280 --> 0:07:34.680
<v Speaker 1>and the work Gil was doing at DARPA with Jan

0:07:35.160 --> 0:07:37.320
<v Speaker 1>was all about doing a better job of interpreting her

0:07:37.440 --> 0:07:42.080
<v Speaker 1>brain waves using the existing models. My group made a

0:07:42.360 --> 0:07:46.200
<v Speaker 1>system called Arm Assist that watched what Jan was trying

0:07:46.240 --> 0:07:49.280
<v Speaker 1>to do. It inferred, okay, now she's trying to pick

0:07:49.360 --> 0:07:51.600
<v Speaker 1>up a block. Now she's trying to move it over

0:07:51.640 --> 0:07:53.840
<v Speaker 1>to the left. Now she's trying to drop the block.

0:07:54.040 --> 0:07:55.800
<v Speaker 1>You know, in a way very similar to if you

0:07:55.920 --> 0:07:58.880
<v Speaker 1>use PowerPoint and you have like snap-to-grid

0:07:59.040 --> 0:08:02.200
<v Speaker 1>or snap-to-object turned on, it will help you

0:08:02.240 --> 0:08:04.480
<v Speaker 1>move the mouse to where it thinks you want to go.

0:08:05.160 --> 0:08:08.080
<v Speaker 1>This system helped her move the arm to where it

0:08:08.200 --> 0:08:11.080
<v Speaker 1>inferred she wanted to go based on a very noisy

0:08:11.240 --> 0:08:13.800
<v Speaker 1>signal that was coming out of her brain. That noise

0:08:13.840 --> 0:08:16.520
<v Speaker 1>in the signal was partly because the connection between Jan's

0:08:16.560 --> 0:08:20.440
<v Speaker 1>brain and the decoding computer was weak. So Gil's team

0:08:20.440 --> 0:08:25.400
<v Speaker 1>at DARPA designed a program to boost Jan's intentions. We tested

0:08:25.440 --> 0:08:29.200
<v Speaker 1>her by randomly turning the assists on and off. And

0:08:29.360 --> 0:08:32.280
<v Speaker 1>you can think of the assist as a guardian, like

0:08:32.360 --> 0:08:34.800
<v Speaker 1>what we're developing for the car to stop you from

0:08:34.840 --> 0:08:37.959
<v Speaker 1>having a crash, and we had the guardian in this

0:08:38.000 --> 0:08:42.160
<v Speaker 1>case help as little as possible, but still be effective

0:08:42.160 --> 0:08:45.040
<v Speaker 1>at helping her to reach the goal. The amount of

0:08:45.080 --> 0:08:48.600
<v Speaker 1>help that we gave her was so low that she

0:08:48.640 --> 0:08:51.199
<v Speaker 1>couldn't tell whether it was on or off. But when

0:08:51.240 --> 0:08:54.960
<v Speaker 1>Guardian was turned off, Jan's success rate fell.
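
NOTE
A minimal sketch of that kind of shared control, under assumed numbers and a simple goal-inference rule (the real Arm Assist software surely differs): blend the noisy decoded velocity with a small nudge toward whichever goal the user seems to be heading for.

```python
import numpy as np

goals = {"block": np.array([0.3, 0.6]), "bin": np.array([-0.4, 0.2])}

def infer_goal(position, decoded_velocity):
    """Guess the target: the goal most aligned with the decoded
    velocity (a stand-in for the real inference)."""
    def alignment(goal):
        to_goal = goal - position
        return decoded_velocity @ to_goal / (np.linalg.norm(to_goal) + 1e-9)
    return max(goals.values(), key=alignment)

def assisted_velocity(position, decoded_velocity, assist=0.2):
    """assist=0 is pure user control, 1 is pure machine control;
    keeping it low makes the help hard to notice, as in the test."""
    to_goal = infer_goal(position, decoded_velocity) - position
    to_goal /= np.linalg.norm(to_goal) + 1e-9
    return (1 - assist) * decoded_velocity + assist * to_goal

rng = np.random.default_rng(2)
position = np.zeros(2)
noisy_command = np.array([0.25, 0.55]) + rng.normal(scale=0.2, size=2)
print(assisted_velocity(position, noisy_command))
```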

0:08:55.000 --> 0:08:58.320
<v Speaker 1>And in a human-machine partnership, it's not just the computer that

0:08:58.400 --> 0:09:03.080
<v Speaker 1>adapts to the human; the brain also adapts to the algorithm.

0:09:03.120 --> 0:09:05.960
<v Speaker 1>And these algorithms are simple and they rely on human

0:09:06.000 --> 0:09:10.160
<v Speaker 1>programmed rules. So when they see a cause, in this case,

0:09:10.320 --> 0:09:13.760
<v Speaker 1>enough neurons firing at the same time, they create an effect,

0:09:14.040 --> 0:09:18.840
<v Speaker 1>a movement. But what's tantalizing is that effect that output

0:09:19.200 --> 0:09:25.360
<v Speaker 1>hints at something far deeper and infinitely complex: personality. I

0:09:25.440 --> 0:09:28.960
<v Speaker 1>like Moby Dick where he talks about Captain Ahab walking

0:09:29.000 --> 0:09:31.800
<v Speaker 1>on the deck of the ship, and by observing his movements,

0:09:31.840 --> 0:09:34.679
<v Speaker 1>you can really understand what he's thinking about. If you

0:09:34.760 --> 0:09:39.400
<v Speaker 1>think about movement, it's really a communication between your innermost

0:09:39.440 --> 0:09:42.600
<v Speaker 1>thoughts and the outside world. Andy and his team saw

0:09:42.640 --> 0:09:45.560
<v Speaker 1>this come to life before their very eyes when they

0:09:45.600 --> 0:09:50.559
<v Speaker 1>connected another subject, Nathan, to a neural prosthesis. The robotic

0:09:50.679 --> 0:09:53.880
<v Speaker 1>arm moved in accordance with the personality of the user.

0:09:54.480 --> 0:09:57.440
<v Speaker 1>Jan, when she moved, she was very careful and gentle,

0:09:58.160 --> 0:10:01.240
<v Speaker 1>and Nathan is more of a video gamer. He's

0:10:01.280 --> 0:10:03.840
<v Speaker 1>a younger guy. He's more of a competitor, so he

0:10:03.880 --> 0:10:07.880
<v Speaker 1>would move much faster, and so he would pick up

0:10:07.880 --> 0:10:10.600
<v Speaker 1>an object and then instead of placing it carefully in

0:10:10.600 --> 0:10:13.240
<v Speaker 1>a receptacle, he would basically toss it into the receptacle

0:10:13.840 --> 0:10:16.920
<v Speaker 1>to be faster. Today we can see the difference in

0:10:17.040 --> 0:10:20.800
<v Speaker 1>Nathan and Jan's personalities from how the brain controls a

0:10:20.920 --> 0:10:25.720
<v Speaker 1>prosthetic arm. We can infer personality from output, but we

0:10:25.760 --> 0:10:29.680
<v Speaker 1>don't even have sophisticated enough tools to ask why. As

0:10:29.720 --> 0:10:32.880
<v Speaker 1>we get better and better at these computational approaches, we'll

0:10:32.960 --> 0:10:36.880
<v Speaker 1>gain a much better understanding of the way the brain works.

0:10:37.000 --> 0:10:43.360
<v Speaker 1>Rather than having some major event causing some simple consequence, instead,

0:10:43.679 --> 0:10:46.880
<v Speaker 1>it's more like a perfect storm where many factors come

0:10:46.920 --> 0:10:52.079
<v Speaker 1>together to generate a consequence. And if we understand which

0:10:52.120 --> 0:10:56.000
<v Speaker 1>of those events are important and how they're combined, then

0:10:56.080 --> 0:11:00.960
<v Speaker 1>we can start to understand brain function better. If you

0:11:01.000 --> 0:11:04.400
<v Speaker 1>see a cup and you're thirsty, if we could realize

0:11:04.440 --> 0:11:06.719
<v Speaker 1>that what you want to do is to drink from it,

0:11:07.400 --> 0:11:10.760
<v Speaker 1>then we could understand that you want to grasp the

0:11:10.800 --> 0:11:14.200
<v Speaker 1>cup from the side, and if we could distinguish that

0:11:14.360 --> 0:11:17.800
<v Speaker 1>from you grasping a cup with the intention of passing

0:11:17.800 --> 0:11:20.079
<v Speaker 1>it to me, you might hold it now from the top,

0:11:20.440 --> 0:11:23.079
<v Speaker 1>then we could do a better job generating the correct

0:11:23.360 --> 0:11:27.360
<v Speaker 1>movement. For Andy, the next frontier is more deeply understanding

0:11:27.400 --> 0:11:31.240
<v Speaker 1>the human brain, Kara, by developing new models and better algorithms.

0:11:31.559 --> 0:11:34.120
<v Speaker 1>And that's really a computer science problem as much as

0:11:34.120 --> 0:11:36.880
<v Speaker 1>a medical problem. And then really the next frontier in

0:11:36.920 --> 0:11:40.800
<v Speaker 1>medicine is all about using AI to decode complex interactions.

0:11:41.200 --> 0:11:45.319
<v Speaker 1>And in fact, you reported a piece on exactly this phenomenon. Yeah,

0:11:45.400 --> 0:11:48.319
<v Speaker 1>I spoke to a woman named Regina Barzilay. She's

0:11:48.360 --> 0:11:51.600
<v Speaker 1>a computer scientist at MIT, and her own experience

0:11:51.600 --> 0:11:54.400
<v Speaker 1>of how she was diagnosed with breast cancer actually inspired

0:11:54.480 --> 0:11:58.000
<v Speaker 1>her to work on bringing AI into the realm of medicine.

0:11:58.720 --> 0:12:01.400
<v Speaker 1>When we come back, we'll hear from Regina and take

0:12:01.440 --> 0:12:04.760
<v Speaker 1>a look at other ways AI is changing diagnostics and

0:12:04.840 --> 0:12:15.040
<v Speaker 1>the future of our health. I remember still, I went

0:12:15.080 --> 0:12:18.480
<v Speaker 1>to a mammogram and they told me, you have high density, but

0:12:18.559 --> 0:12:22.199
<v Speaker 1>you shouldn't worry. Half of the population have high density breasts,

0:12:22.240 --> 0:12:26.079
<v Speaker 1>so don't worry about it. That's Regina Barzilay. Regina

0:12:26.160 --> 0:12:28.960
<v Speaker 1>is a professor of Electrical Engineering and Computer Science at

0:12:29.040 --> 0:12:31.520
<v Speaker 1>MIT. I couldn't believe it. I didn't

0:12:31.559 --> 0:12:33.960
<v Speaker 1>feel anything. There was nothing wrong, you know. I was

0:12:34.160 --> 0:12:37.720
<v Speaker 1>continuing my morning runs and being fine. It was later

0:12:37.760 --> 0:12:40.720
<v Speaker 1>on after her mammogram that Regina found out she had

0:12:40.760 --> 0:12:44.280
<v Speaker 1>breast cancer, and as it often does, the news came

0:12:44.280 --> 0:12:47.080
<v Speaker 1>as a surprise. And if I'm looking at myself, I

0:12:47.120 --> 0:12:51.760
<v Speaker 1>really cannot explain what was wrong that I got this

0:12:51.920 --> 0:12:55.560
<v Speaker 1>disease. I clearly didn't have any family history. I'm exercising,

0:12:55.640 --> 0:13:00.000
<v Speaker 1>I'm eating healthy. And for many, many patients that I

0:13:00.000 --> 0:13:03.560
<v Speaker 1>met during my own journey, uh, their diagnosis came to

0:13:03.600 --> 0:13:07.120
<v Speaker 1>them as the biggest surprise. The thing is, according to

0:13:07.160 --> 0:13:09.960
<v Speaker 1>current medical standards, breast cancer risk is based on a

0:13:10.000 --> 0:13:13.000
<v Speaker 1>few factors. Are you a woman, and are you old?

0:13:13.200 --> 0:13:14.880
<v Speaker 1>Do you have the BRCA gene? And do you have

0:13:14.960 --> 0:13:18.240
<v Speaker 1>breast cancer in your family? But those are relatively simple

0:13:18.280 --> 0:13:21.320
<v Speaker 1>inputs and they don't account for what complex systems we

0:13:21.360 --> 0:13:25.080
<v Speaker 1>are. Over eighty percent of women who are diagnosed with breast

0:13:25.080 --> 0:13:29.680
<v Speaker 1>cancer are the first in their families, so it's not clear,

0:13:30.040 --> 0:13:33.240
<v Speaker 1>you know, what causes it. According to the Susan G. Komen

0:13:33.280 --> 0:13:37.160
<v Speaker 1>Breast Cancer Foundation, breast cancer is the most common cancer

0:13:37.280 --> 0:13:40.560
<v Speaker 1>amongst women around the world. Every two minutes a case

0:13:40.559 --> 0:13:42.559
<v Speaker 1>of breast cancer is diagnosed in a woman in the

0:13:42.640 --> 0:13:46.200
<v Speaker 1>United States, and every minute somewhere on earth a woman

0:13:46.240 --> 0:13:50.079
<v Speaker 1>dies of breast cancer. That's more than four hundred women

0:13:50.120 --> 0:13:52.880
<v Speaker 1>per day. So if you're listening to this podcast, you

0:13:53.000 --> 0:13:56.079
<v Speaker 1>probably know someone who has been or will be diagnosed

0:13:56.080 --> 0:13:59.200
<v Speaker 1>with breast cancer in their lifetime. And with that many

0:13:59.240 --> 0:14:02.040
<v Speaker 1>people affected, there are bound to be some oversights,

0:14:02.080 --> 0:14:06.200
<v Speaker 1>like in Regina's case. I discovered that my own diagnosis

0:14:06.320 --> 0:14:10.640
<v Speaker 1>was delayed because the malignancy was missed in the previous mammograms.

0:14:10.960 --> 0:14:14.720
<v Speaker 1>And I also discovered that this is not a unique experience.

0:14:15.080 --> 0:14:18.319
<v Speaker 1>So the question I asked is is it possible for

0:14:18.440 --> 0:14:22.480
<v Speaker 1>us to know ahead of time what's to come? In

0:14:22.520 --> 0:14:26.200
<v Speaker 1>other words, can we look into the future prior to

0:14:26.200 --> 0:14:29.080
<v Speaker 1>our diagnosis? Regina had already been a longtime computer

0:14:29.120 --> 0:14:31.920
<v Speaker 1>scientist at MIT. She thought often about machine

0:14:32.000 --> 0:14:35.560
<v Speaker 1>learning in her work, but this new personal hardship redirected

0:14:35.560 --> 0:14:39.600
<v Speaker 1>her thinking. When you go to the hospital, you see

0:14:39.680 --> 0:14:42.520
<v Speaker 1>like real pain of other people. You see people who

0:14:42.560 --> 0:14:46.560
<v Speaker 1>go through chemo, through radiation, even though the hospital is just

0:14:46.640 --> 0:14:49.760
<v Speaker 1>one stop away from MIT. I just was

0:14:49.800 --> 0:14:52.640
<v Speaker 1>not aware there is so much suffering. And at that

0:14:52.720 --> 0:14:55.760
<v Speaker 1>point when I came back, I was thinking, we create

0:14:56.040 --> 0:14:59.120
<v Speaker 1>so much, you know, exciting new technology, and I

0:14:59.240 --> 0:15:03.200
<v Speaker 1>don't see why we are not trying to solve it.

0:15:03.200 --> 0:15:07.440
<v Speaker 1>It's, it's just a travesty. And so Regina and her

0:15:07.480 --> 0:15:09.960
<v Speaker 1>colleagues at MIT began training a deep learning

0:15:09.960 --> 0:15:13.600
<v Speaker 1>model on over ninety thousand mammograms from Mass General Hospital,

0:15:14.240 --> 0:15:16.400
<v Speaker 1>and with such a large data set, they were able

0:15:16.440 --> 0:15:19.520
<v Speaker 1>to predict a patient's risk of breast cancer by comparing

0:15:19.560 --> 0:15:23.640
<v Speaker 1>one mammogram to tens of thousands of others, instantly.
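
NOTE
As a rough sketch of what such a model can look like (not Regina Barzilay's actual architecture; the layer sizes and input are placeholders): a small convolutional network mapping a mammogram image to a risk probability.

```python
import torch
import torch.nn as nn

class RiskNet(nn.Module):
    """Toy stand-in for a mammogram risk model: convolutional
    features pooled into a single predicted probability."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, 1)

    def forward(self, x):
        return torch.sigmoid(self.head(self.features(x).flatten(1)))

model = RiskNet()
fake_mammogram = torch.randn(1, 1, 128, 128)  # placeholder grayscale scan
print(model(fake_mammogram).item())           # risk in [0, 1], untrained
```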

0:15:23.800 --> 0:15:27.800
<v Speaker 1>My firm belief was that despite the standard risk factors, there

0:15:27.840 --> 0:15:31.800
<v Speaker 1>is a lot of information in a woman's breast. A human eye,

0:15:31.880 --> 0:15:35.200
<v Speaker 1>which has even seen, you know, thousands or tens of thousands of

0:15:35.280 --> 0:15:39.120
<v Speaker 1>images over a lifetime, may not really be able to

0:15:39.200 --> 0:15:43.080
<v Speaker 1>detect it with great clarity. However, a machine which

0:15:43.160 --> 0:15:46.560
<v Speaker 1>is, uh, you know, trained on sixty thousand images where it

0:15:46.720 --> 0:15:51.600
<v Speaker 1>knows the outcomes, can identify the differences in pixel

0:15:51.680 --> 0:15:56.760
<v Speaker 1>distribution that likely correlate to future things that

0:15:57.000 --> 0:16:00.400
<v Speaker 1>may come. The amount of data which you can train

0:16:00.400 --> 0:16:04.280
<v Speaker 1>a computer on versus a doctor is massive and the

0:16:04.400 --> 0:16:07.200
<v Speaker 1>AI was able to detect smaller details than the human

0:16:07.280 --> 0:16:09.600
<v Speaker 1>eye could pick up, so this does feel like a

0:16:09.680 --> 0:16:15.080
<v Speaker 1>perfect application for the strengths of machine learning. Fifteen months

0:16:15.080 --> 0:16:18.960
<v Speaker 1>ago we first put out our density model, which is, every

0:16:19.000 --> 0:16:22.520
<v Speaker 1>mammogram that goes through Mass General, it shows a prediction

0:16:22.600 --> 0:16:27.760
<v Speaker 1>to the radiologists, and most of the time the radiologists agree

0:16:27.920 --> 0:16:30.960
<v Speaker 1>with the machine, and when they disagree, the more experienced

0:16:31.000 --> 0:16:35.440
<v Speaker 1>radiologist typically sides with the machine. So Regina's early

0:16:35.480 --> 0:16:38.040
<v Speaker 1>efforts are doing really well, and the hope is that

0:16:38.080 --> 0:16:41.000
<v Speaker 1>more hospitals around the country will begin using these models

0:16:41.000 --> 0:16:43.920
<v Speaker 1>for early detection and risk assessment. And that's not only

0:16:43.960 --> 0:16:48.880
<v Speaker 1>because understanding risk is important, but also because misdiagnoses have

0:16:49.040 --> 0:16:53.120
<v Speaker 1>led to unnecessary surgeries. I read this book The Emperor

0:16:53.240 --> 0:16:55.920
<v Speaker 1>of All Maladies. It's a book about cancer, so I really

0:16:56.000 --> 0:16:59.920
<v Speaker 1>like. Let me just say, one particular moment it really

0:17:00.080 --> 0:17:04.760
<v Speaker 1>creeped me out. It's about how men, male surgeons who

0:17:04.760 --> 0:17:09.359
<v Speaker 1>were treating women with these surgeries firmly believed that

0:17:09.480 --> 0:17:12.280
<v Speaker 1>the more you cut out of, you know, a woman's body,

0:17:12.920 --> 0:17:16.720
<v Speaker 1>the better is the likelihood. In the early twentieth century,

0:17:16.760 --> 0:17:20.480
<v Speaker 1>there was a surge of doctors performing radical mastectomies, removing

0:17:20.520 --> 0:17:25.080
<v Speaker 1>the entire breast, thus permanently disfiguring a patient, and radical

0:17:25.160 --> 0:17:28.680
<v Speaker 1>mastectomies became the norm for much of the twentieth century.

0:17:28.840 --> 0:17:31.680
<v Speaker 1>The reasoning was that removing a lump leaves the risk

0:17:31.720 --> 0:17:35.000
<v Speaker 1>of tumors growing elsewhere, so why take the risk. By

0:17:35.040 --> 0:17:38.280
<v Speaker 1>the nineteen eighties, it was clear that radical mastectomies weren't

0:17:38.280 --> 0:17:42.320
<v Speaker 1>actually an effective treatment for many patients, but according to Regina,

0:17:42.680 --> 0:17:45.120
<v Speaker 1>surgeons still remove too much tissue out of an abundance

0:17:45.160 --> 0:17:48.560
<v Speaker 1>of caution. The reason it happens, it's not because there

0:17:48.640 --> 0:17:51.040
<v Speaker 1>are some evil doctors who want to book another surgery.

0:17:51.080 --> 0:17:53.880
<v Speaker 1>It's just because people are uncertain and it's high risk,

0:17:54.000 --> 0:17:56.639
<v Speaker 1>and many times the patient would say, I am ready to

0:17:56.680 --> 0:18:01.160
<v Speaker 1>go for the harshest treatment to minimize the chances. So

0:18:01.320 --> 0:18:05.040
<v Speaker 1>what we demonstrated is that with machine learning you can actually

0:18:05.520 --> 0:18:09.000
<v Speaker 1>identify thirty percent of the population that doesn't need

0:18:09.280 --> 0:18:13.560
<v Speaker 1>this type of surgery. Regina told me that had these

0:18:13.560 --> 0:18:15.480
<v Speaker 1>deep learning models been in place at the time of

0:18:15.520 --> 0:18:18.760
<v Speaker 1>her early mammograms, she might have detected her risk two

0:18:18.840 --> 0:18:22.199
<v Speaker 1>years sooner. And in many cases, early detection makes a

0:18:22.240 --> 0:18:25.000
<v Speaker 1>big difference in how a patient chooses to treat their cancer.

0:18:25.760 --> 0:18:28.640
<v Speaker 1>Since developing the breast cancer detection model, Regina is now

0:18:28.680 --> 0:18:31.320
<v Speaker 1>co-leading MIT's J-Clinic, which is

0:18:31.320 --> 0:18:34.359
<v Speaker 1>a new initiative focusing on machine learning and health. What

0:18:34.600 --> 0:18:39.240
<v Speaker 1>I hope is that we as a society have advanced since then

0:18:39.680 --> 0:18:42.000
<v Speaker 1>and we are ready to bring you know, the recent

0:18:42.520 --> 0:18:46.399
<v Speaker 1>science and help women, even if it means that

0:18:46.480 --> 0:18:51.560
<v Speaker 1>we need to change our beliefs about how risk assessment works.

0:18:52.080 --> 0:18:54.280
<v Speaker 1>And Regina hopes that as a society we can move

0:18:54.320 --> 0:18:57.400
<v Speaker 1>toward greater acceptance of using machine learning to enhance medicine.

0:18:58.160 --> 0:19:01.520
<v Speaker 1>Whichever one does a better job, that one should prevail.

0:19:05.520 --> 0:19:08.800
<v Speaker 1>So big question, how do you feel, Kara about putting

0:19:08.800 --> 0:19:11.280
<v Speaker 1>your health into the hands of an algorithm? You know,

0:19:11.359 --> 0:19:15.840
<v Speaker 1>after speaking to Regina, who told me that she probably

0:19:15.840 --> 0:19:18.680
<v Speaker 1>would have been diagnosed two years sooner using her models,

0:19:20.040 --> 0:19:23.880
<v Speaker 1>it seems as though machine learning provides this unparalleled form

0:19:23.880 --> 0:19:28.520
<v Speaker 1>of detection right because the most seasoned doctors simply can't

0:19:28.640 --> 0:19:31.160
<v Speaker 1>compare thousands of data points at once. I don't think

0:19:31.200 --> 0:19:34.600
<v Speaker 1>the issue is algorithms replacing doctors. It's more a matter

0:19:34.640 --> 0:19:37.000
<v Speaker 1>of equipping doctors with sharper tools so that they can do

0:19:37.040 --> 0:19:40.159
<v Speaker 1>their jobs. They've got to provide a patient with information

0:19:40.320 --> 0:19:44.040
<v Speaker 1>and allow that patient to make informed decisions. Algorithms don't

0:19:44.080 --> 0:19:46.200
<v Speaker 1>have a bedside manner. You know. What you say about

0:19:46.200 --> 0:19:48.560
<v Speaker 1>the sharper tools reminds me of our conversation about

0:19:48.560 --> 0:19:53.320
<v Speaker 1>creativity in episode two, using AI to give artists, musicians,

0:19:53.680 --> 0:19:57.720
<v Speaker 1>screenwriters new tools to do better work. At the same time,

0:19:58.240 --> 0:20:00.879
<v Speaker 1>just like the art world, the medical profession sits on

0:20:00.920 --> 0:20:03.720
<v Speaker 1>this enormous pedestal where we have to trust what they

0:20:03.800 --> 0:20:05.879
<v Speaker 1>say because most of us don't have the tools to

0:20:06.000 --> 0:20:09.840
<v Speaker 1>question them, right? Unless you're on WebMD. Yeah, the bane

0:20:09.840 --> 0:20:11.960
<v Speaker 1>of every doctor's life. Yeah, you know, doctors have a

0:20:11.960 --> 0:20:14.960
<v Speaker 1>hard enough time explaining medicine to patients. Imagine them having

0:20:14.960 --> 0:20:17.920
<v Speaker 1>to explain artificial intelligence. Well, from experience, we can say

0:20:17.920 --> 0:20:21.520
<v Speaker 1>good luck to them. What's crazy to me actually is

0:20:21.560 --> 0:20:24.760
<v Speaker 1>that Regina and her co author, this woman, doctor Connie Lehman,

0:20:24.760 --> 0:20:28.359
<v Speaker 1>who's a radiologist at Mass General, were rejected from every

0:20:28.400 --> 0:20:30.879
<v Speaker 1>single federal grant they submitted at first. Why would they

0:20:30.880 --> 0:20:33.880
<v Speaker 1>be rejected from all those federal grants? Because as much

0:20:33.920 --> 0:20:36.760
<v Speaker 1>as there is a ton of buzz surrounding AI, I

0:20:36.800 --> 0:20:40.320
<v Speaker 1>think people have to appreciate how new this frontier is right,

0:20:40.560 --> 0:20:44.360
<v Speaker 1>and using machine learning to make predictions about people's cancer

0:20:44.960 --> 0:20:47.240
<v Speaker 1>is very, very new, and it's going to take doctors

0:20:47.280 --> 0:20:49.199
<v Speaker 1>a really long time to learn how to convey this

0:20:49.240 --> 0:20:52.800
<v Speaker 1>information to patients. So that's where Regina is right now,

0:20:53.200 --> 0:20:57.639
<v Speaker 1>figuring out how doctors can explain to patients, Hey, AI

0:20:57.760 --> 0:21:00.439
<v Speaker 1>helped us determine your cancer risk. Well, part of the problem

0:21:00.480 --> 0:21:03.400
<v Speaker 1>is that we don't yet have explainable AI. So it's

0:21:03.440 --> 0:21:05.280
<v Speaker 1>not just that it's hard to explain it to patients,

0:21:05.320 --> 0:21:09.160
<v Speaker 1>it's actually a black box. You may remember Sebastian Thrun,

0:21:09.320 --> 0:21:11.960
<v Speaker 1>the founder of Google X from earlier in the series.

0:21:12.520 --> 0:21:15.520
<v Speaker 1>As well as self-driving cars and flying cars, he

0:21:15.600 --> 0:21:20.679
<v Speaker 1>works on medical diagnostics and he recognizes a problem. One

0:21:20.720 --> 0:21:23.399
<v Speaker 1>of the conundrums of machine learning is that when you

0:21:23.440 --> 0:21:25.520
<v Speaker 1>open up a deep neural network, you look at like

0:21:25.680 --> 0:21:28.680
<v Speaker 1>hundreds of millions of numbers, but you can't quite understand

0:21:28.800 --> 0:21:31.359
<v Speaker 1>what's happening. So people are concerned. People look at

0:21:31.359 --> 0:21:33.359
<v Speaker 1>that box and say, if the thing diagnoses my cancer,

0:21:33.760 --> 0:21:36.560
<v Speaker 1>what does it do? We've talked about the black box

0:21:36.640 --> 0:21:39.200
<v Speaker 1>problem in AI and how hard it is to trust

0:21:39.240 --> 0:21:42.520
<v Speaker 1>decisions that can't be explained, but Sebastian is quick to

0:21:42.520 --> 0:21:46.000
<v Speaker 1>point out that human beings can also be difficult to decipher.

0:21:47.000 --> 0:21:50.520
<v Speaker 1>Let's remind ourselves our doctors are also black boxes. You

0:21:50.560 --> 0:21:52.680
<v Speaker 1>can't open up the brain of your doctor and ask

0:21:53.119 --> 0:21:55.959
<v Speaker 1>what was it he or she was using for diagnosing cancer.

0:21:56.520 --> 0:21:58.719
<v Speaker 1>It's a fair point, and it's one that has also been

0:21:58.800 --> 0:22:02.080
<v Speaker 1>noted by Siddhartha Mukherjee. He is one of the world's

0:22:02.080 --> 0:22:05.879
<v Speaker 1>foremost cancer doctors and the Pulitzer Prize-winning author of The

0:22:05.920 --> 0:22:09.199
<v Speaker 1>Emperor of All Maladies. That's the book that Regina referred

0:22:09.200 --> 0:22:12.960
<v Speaker 1>to earlier, and Siddhartha has also written extensively about how

0:22:13.000 --> 0:22:16.359
<v Speaker 1>AI is changing medicine. I've been a huge fan of

0:22:16.359 --> 0:22:18.959
<v Speaker 1>his work for a long time and actually ambushed him

0:22:18.960 --> 0:22:21.440
<v Speaker 1>after a talk he gave in order to persuade him

0:22:21.440 --> 0:22:24.879
<v Speaker 1>to do an interview for this podcast. So thinking about AI

0:22:24.960 --> 0:22:28.720
<v Speaker 1>helping diagnose patients, it's worth asking is the human doctor

0:22:28.880 --> 0:22:31.960
<v Speaker 1>so very different? One problem that I think is fascinating

0:22:32.080 --> 0:22:35.000
<v Speaker 1>is when a patient comes into the hospital. If you

0:22:35.080 --> 0:22:39.760
<v Speaker 1>ask a particularly astute physician, that physician can actually describe

0:22:39.800 --> 0:22:41.960
<v Speaker 1>to you what the most likely journey of that patient

0:22:42.040 --> 0:22:44.800
<v Speaker 1>will be in the hospital, whether they're likely to stay

0:22:44.800 --> 0:22:47.800
<v Speaker 1>for twenty-five days, suffer through bacterial sepsis, you know,

0:22:47.840 --> 0:22:50.160
<v Speaker 1>all from peeking in through the door of an emergency room.

0:22:50.400 --> 0:22:54.440
<v Speaker 1>Siddhartha started to wonder how doctors make those lightning-fast calls,

0:22:54.920 --> 0:22:57.840
<v Speaker 1>and it got him interested in understanding what the brain

0:22:57.920 --> 0:23:01.080
<v Speaker 1>is doing when a doctor makes a diagnosis. We

0:23:01.119 --> 0:23:05.120
<v Speaker 1>actually understand very little about how human beings make diagnoses.

0:23:05.160 --> 0:23:07.480
<v Speaker 1>I mean, the studies that have been done so far

0:23:07.720 --> 0:23:10.840
<v Speaker 1>suggest that most people make diagnoses in a kind of

0:23:11.480 --> 0:23:15.560
<v Speaker 1>recognition sense rather than an algorithmic sense. The classical description

0:23:15.600 --> 0:23:18.760
<v Speaker 1>of how we make diagnosis was extraordinarily algorithmic, sort of

0:23:18.800 --> 0:23:22.360
<v Speaker 1>goes down a series of elimination. It's not this, it's

0:23:22.400 --> 0:23:25.440
<v Speaker 1>not that. Now, when the author says algorithmic, he doesn't

0:23:25.480 --> 0:23:29.480
<v Speaker 1>mean using a computer algorithm. He means using rule-based logic.

0:23:29.960 --> 0:23:33.600
<v Speaker 1>If this, then that, et cetera. But what he learned

0:23:33.680 --> 0:23:36.840
<v Speaker 1>was that despite what the textbooks say, that's not actually

0:23:36.840 --> 0:23:40.480
<v Speaker 1>how doctors make a diagnosis. When you put doctors inside

0:23:40.600 --> 0:23:44.040
<v Speaker 1>MRI machines and ask the question how do they make diagnosis?

0:23:44.040 --> 0:23:47.280
<v Speaker 1>In fact what lights up is parts of the brain

0:23:47.359 --> 0:23:50.359
<v Speaker 1>that are much much more to do with pattern recognition.

0:23:50.680 --> 0:23:53.640
<v Speaker 1>Here's a rhinoceros, Here's not a rhinoceros. Here's an elephant,

0:23:53.680 --> 0:23:58.439
<v Speaker 1>here's not an elephant. Especially mature doctors make diagnoses based

0:23:58.520 --> 0:24:02.359
<v Speaker 1>on pattern recognition, and they'll flit around like moths around

0:24:02.400 --> 0:24:05.520
<v Speaker 1>the flame and ultimately slowly arrive at the target. It's

0:24:05.600 --> 0:24:09.080
<v Speaker 1>a much more geographical way of thinking rather than linear. They're

0:24:09.200 --> 0:24:13.520
<v Speaker 1>using a combination of Bayesian or prior probability understandings. They're

0:24:13.600 --> 0:24:17.520
<v Speaker 1>using pattern recognition. They're understanding things about the patient and

0:24:17.640 --> 0:24:23.920
<v Speaker 1>figuring out what to do.
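
NOTE
Since "Bayesian" and "prior probability" go by quickly here, this is the underlying arithmetic with invented numbers: start from a prior probability of a disease and update it for each finding (assuming, for simplicity, that findings are independent).

```python
def update(prior, p_if_disease, p_if_healthy):
    """Bayes' rule: posterior probability of disease after one finding."""
    numerator = prior * p_if_disease
    return numerator / (numerator + (1 - prior) * p_if_healthy)

p = 0.01                    # prior: 1% of comparable patients (invented)
p = update(p, 0.90, 0.10)   # finding A: common if diseased, rare if not
p = update(p, 0.70, 0.30)   # finding B: weaker evidence
print(round(p, 3))          # posterior after both findings, ~0.175
```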

0:24:24.000 --> 0:24:28.800
<v Speaker 1>Hearing Siddhartha speak about doctors, Kara, in terms like prior probability understandings and Bayesian statistics

0:24:29.320 --> 0:24:32.000
<v Speaker 1>really does make it sound like he's describing AI rather

0:24:32.040 --> 0:24:34.800
<v Speaker 1>than people. Well, it kind of is. You know, neural

0:24:34.840 --> 0:24:38.360
<v Speaker 1>networks are purposely modeled on the human brain. It's not

0:24:38.760 --> 0:24:40.960
<v Speaker 1>as easy as cause and effect. It's about drawing on a

0:24:41.000 --> 0:24:43.920
<v Speaker 1>lifetime of experience to make best guesses based on competing

0:24:43.960 --> 0:24:47.399
<v Speaker 1>information that we have to weigh appropriately in microseconds.

0:24:47.520 --> 0:24:50.119
<v Speaker 1>It's no easy task. It's funny because we've warned several

0:24:50.119 --> 0:24:52.800
<v Speaker 1>times on this series that we shouldn't be surprised when

0:24:52.800 --> 0:24:56.760
<v Speaker 1>our creations reflect us, and yet it's almost impossible not

0:24:56.840 --> 0:24:59.280
<v Speaker 1>to be. I feel a sense of uncanny chills when

0:24:59.320 --> 0:25:02.760
<v Speaker 1>Siddhartha describes a human doctor working like an algorithm.

0:25:03.000 --> 0:25:04.800
<v Speaker 1>And he wrote about this in The New Yorker with

0:25:05.040 --> 0:25:09.639
<v Speaker 1>the headline A.I. Versus M.D., and he made this point,

0:25:09.680 --> 0:25:12.679
<v Speaker 1>which is that human and machine processes of making diagnoses

0:25:12.680 --> 0:25:17.359
<v Speaker 1>are converging. And it makes me wonder who's going to

0:25:17.480 --> 0:25:21.360
<v Speaker 1>have the final word. Well, I asked Sebastian thrun exactly

0:25:21.359 --> 0:25:24.919
<v Speaker 1>that question. We still die of cancer a lot. I

0:25:24.960 --> 0:25:29.040
<v Speaker 1>believe many of those deaths are actually preventable using artificial intelligence.

0:25:29.320 --> 0:25:32.800
<v Speaker 1>It's amazing how diverse diagnostics you get when you show

0:25:32.840 --> 0:25:35.840
<v Speaker 1>a set of dermatologists the same set of images, some

0:25:35.920 --> 0:25:40.760
<v Speaker 1>will say cancer and some will say fine. And Sebastian

0:25:40.840 --> 0:25:44.640
<v Speaker 1>has a personal interest in the topic. My family, unfortunately

0:25:44.680 --> 0:25:47.639
<v Speaker 1>has a long, long, long history of cancer. My my

0:25:47.680 --> 0:25:49.960
<v Speaker 1>sister passed away last year. My mother passed away at a

0:25:50.040 --> 0:25:52.439
<v Speaker 1>young age. So one of my questions I had in

0:25:52.480 --> 0:25:56.119
<v Speaker 1>my life with me is since my mother died, um,

0:25:56.160 --> 0:25:58.680
<v Speaker 1>maybe we should not work on treatment. We should

0:25:58.720 --> 0:26:02.960
<v Speaker 1>really focus on detection, on diagnostics. Diagnosis of skin cancer

0:26:03.000 --> 0:26:05.240
<v Speaker 1>doesn't require looking inside your organs. You can just look

0:26:05.240 --> 0:26:08.200
<v Speaker 1>at the person from outside. And it turns out we do

0:26:08.240 --> 0:26:11.280
<v Speaker 1>not have any symptoms before it becomes dangerous. It sits there for

0:26:11.359 --> 0:26:14.760
<v Speaker 1>quite a while. It grows below your skin, it spreads,

0:26:14.880 --> 0:26:16.920
<v Speaker 1>and then it destroys your liver, and then your first

0:26:16.920 --> 0:26:19.160
<v Speaker 1>symptom might be that back pain or a yellow face.

0:26:19.520 --> 0:26:22.920
<v Speaker 1>Maybe we should just look every single day. In fact,

0:26:23.119 --> 0:26:25.639
<v Speaker 1>Sebastian has worked to make it possible for people to

0:26:25.720 --> 0:26:29.840
<v Speaker 1>check themselves every day. He published a paper in Nature

0:26:29.920 --> 0:26:35.040
<v Speaker 1>called Dermatologist-level classification of skin cancer with deep neural networks.

0:26:35.520 --> 0:26:38.199
<v Speaker 1>What he demonstrated is that a program that runs on

0:26:38.240 --> 0:26:42.320
<v Speaker 1>an iPhone performs just as well as dermatologists at diagnosing

0:26:42.400 --> 0:26:47.360
<v Speaker 1>skin cancer.
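
NOTE
The recipe behind that result is transfer learning; the paper fine-tuned an ImageNet-pretrained Inception v3 on labeled lesion photos. This sketch uses a smaller stand-in network and a random input.

```python
import torch
import torch.nn as nn
from torchvision import models

# Reuse pretrained visual features; retrain only the final layer to
# output benign vs. malignant. Pass weights="IMAGENET1K_V1" to
# resnet18 to actually load pretrained weights.
model = models.resnet18(weights=None)
for p in model.parameters():
    p.requires_grad = False                    # freeze the backbone
model.fc = nn.Linear(model.fc.in_features, 2)  # new 2-class head

fake_lesion_photo = torch.randn(1, 3, 224, 224)  # placeholder image
print(torch.softmax(model(fake_lesion_photo), dim=1))
```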

0:26:47.400 --> 0:26:51.280
<v Speaker 1>It sounds transformative, but Siddhartha has a very specific concern about this kind of technology entering the mainstream.

0:26:51.680 --> 0:26:54.880
<v Speaker 1>Well, overdiagnosis is an important risk. A classic example

0:26:54.920 --> 0:26:58.800
<v Speaker 1>of that is a lesion in the breast, a spot

0:26:58.960 --> 0:27:01.080
<v Speaker 1>that is actually not breast cancer, but it's picked up

0:27:01.080 --> 0:27:04.000
<v Speaker 1>and described as breast cancer, that leads to a biopsy,

0:27:04.080 --> 0:27:07.240
<v Speaker 1>the biopsy leads to complications and so forth, and at

0:27:07.240 --> 0:27:08.879
<v Speaker 1>the end of it you discover that, you know,

0:27:08.880 --> 0:27:11.800
<v Speaker 1>you've achieved not very much, except for subjecting a woman

0:27:11.840 --> 0:27:15.440
<v Speaker 1>to an unpleasant procedure with unpleasant costs. We don't want

0:27:15.440 --> 0:27:17.800
<v Speaker 1>to catch just early cancer. We want to catch the

0:27:17.800 --> 0:27:20.480
<v Speaker 1>early cancers that are likely to kill you, not the other

0:27:20.480 --> 0:27:24.280
<v Speaker 1>ones that are unlikely to become anything. We actually want

0:27:24.359 --> 0:27:27.040
<v Speaker 1>to be able to reassure patients that they don't need

0:27:27.080 --> 0:27:31.000
<v Speaker 1>a biopsy.
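
NOTE
The arithmetic behind that worry is worth spelling out with invented numbers: when a cancer is rare in the screened population, even a fairly accurate test yields mostly false positives, and each false positive can cascade into a biopsy.

```python
prevalence = 0.005   # 0.5% of screened women have the cancer (invented)
sensitivity = 0.90   # fraction of true cancers the screen catches
specificity = 0.93   # fraction of healthy women correctly cleared

true_pos = prevalence * sensitivity
false_pos = (1 - prevalence) * (1 - specificity)

ppv = true_pos / (true_pos + false_pos)
print(round(ppv, 3))  # ~0.06: only ~6% of positive screens are real
```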

0:27:31.080 --> 0:27:36.280
<v Speaker 1>Regina told Kara how screening enabled by machine learning could reduce unnecessary mastectomies. But according to Siddhartha, we

0:27:36.359 --> 0:27:39.520
<v Speaker 1>also need to be cautious about overscreening pushing us into

0:27:39.600 --> 0:27:44.320
<v Speaker 1>unnecessary procedures. And as AI and sensors become more ubiquitous,

0:27:44.760 --> 0:27:47.879
<v Speaker 1>enabling us to constantly search for illness, there may be

0:27:48.040 --> 0:27:51.960
<v Speaker 1>psychological implications that we're not fully prepared for. This is

0:27:52.000 --> 0:27:55.160
<v Speaker 1>the very Orwellian notion of previvors. It's the word

0:27:55.240 --> 0:27:57.640
<v Speaker 1>that I first encountered in clinic. It was a woman

0:27:57.640 --> 0:27:59.960
<v Speaker 1>who had a BRCA1 mutation, but in fact did

0:28:00.040 --> 0:28:02.800
<v Speaker 1>not have any breast cancer. She called herself a pre

0:28:02.920 --> 0:28:05.119
<v Speaker 1>vivor of breast cancer. She was a survivor of a

0:28:05.160 --> 0:28:08.119
<v Speaker 1>disease that she yet did not have. Our culture hasn't

0:28:08.119 --> 0:28:10.080
<v Speaker 1>reached the place that you know, we're routinely thinking of

0:28:10.119 --> 0:28:13.080
<v Speaker 1>ourselves as previvors. But it has reached a place where

0:28:13.119 --> 0:28:16.720
<v Speaker 1>surveillance is constant. You know, you're moving from colonoscopy

0:28:16.800 --> 0:28:21.280
<v Speaker 1>to mammogram to PSA test, to medical exam

0:28:21.359 --> 0:28:25.520
<v Speaker 1>to retinal exam. And you can imagine stringing together with

0:28:26.119 --> 0:28:29.399
<v Speaker 1>future devices, a culture in which the body is always

0:28:29.400 --> 0:28:33.400
<v Speaker 1>being hunted, scoured, for being a potential locus of future disease,

0:28:33.640 --> 0:28:36.560
<v Speaker 1>and that will, I think, distort culture fundamentally. It's a

0:28:36.640 --> 0:28:40.240
<v Speaker 1>very Orwellian and very scary idea. Siddhartha alludes, of course,

0:28:40.240 --> 0:28:44.080
<v Speaker 1>to George Orwell, whose novel 1984 was prescient about the

0:28:44.080 --> 0:28:47.400
<v Speaker 1>culture of surveillance that's now blooming around us. But I

0:28:47.520 --> 0:28:51.400
<v Speaker 1>never thought about surveillance in medical terms before. And who

0:28:51.520 --> 0:28:54.200
<v Speaker 1>might be surveilling our bodies? One might be a health

0:28:54.200 --> 0:28:58.800
<v Speaker 1>insurance company or the government interested in, you know, who's

0:28:58.840 --> 0:29:01.600
<v Speaker 1>healthy and who's not healthy. There was a chance meeting

0:29:01.640 --> 0:29:04.920
<v Speaker 1>between Siddhartha and Sebastian that got Siddhartha thinking about AI

0:29:05.000 --> 0:29:08.320
<v Speaker 1>and medicine, but the two have fundamental disagreements on the

0:29:08.440 --> 0:29:11.880
<v Speaker 1>risks and rewards of surveilling the body. I love Sid

0:29:11.920 --> 0:29:13.920
<v Speaker 1>as a person, but I can tell you any doctor

0:29:14.000 --> 0:29:18.080
<v Speaker 1>who tells you less data is better for you is irresponsible.

0:29:18.560 --> 0:29:21.120
<v Speaker 1>If I could give you information about your skin cancer

0:29:21.240 --> 0:29:24.600
<v Speaker 1>every day, you will live longer than if you just

0:29:24.680 --> 0:29:29.160
<v Speaker 1>consult a dermatologist every year or two. But also, the unpredictability

0:29:29.160 --> 0:29:32.400
<v Speaker 1>of death is part of the human experience. Our culture

0:29:32.400 --> 0:29:34.880
<v Speaker 1>would be very different if we walked around with signs

0:29:34.880 --> 0:29:37.200
<v Speaker 1>on our foreheads which told us the number of days

0:29:37.240 --> 0:29:40.600
<v Speaker 1>that we had left to live. What Siddhartha is describing

0:29:40.720 --> 0:29:44.400
<v Speaker 1>is not some thought experiment. Using AI to predict time

0:29:44.400 --> 0:29:48.240
<v Speaker 1>of death is fast becoming a reality. But what inputs

0:29:48.240 --> 0:29:50.560
<v Speaker 1>does it use? And how might knowing when we

0:29:50.600 --> 0:29:54.360
<v Speaker 1>will die change our culture? Join us after the break.

0:30:02.440 --> 0:30:05.000
<v Speaker 1>Doctors are actually very poor at predicting death. If you

0:30:05.040 --> 0:30:09.200
<v Speaker 1>look at the pattern of how people die, most people

0:30:09.320 --> 0:30:14.160
<v Speaker 1>don't decline along a predictable path towards their death, so

0:30:14.400 --> 0:30:17.720
<v Speaker 1>it's often a series of strings that snaps. If you

0:30:17.760 --> 0:30:20.520
<v Speaker 1>think about the human being being held together like a

0:30:20.520 --> 0:30:23.320
<v Speaker 1>puppet on many strings, it's not that the puppet

0:30:23.440 --> 0:30:27.200
<v Speaker 1>slowly crumbles at a predictable pace. It's that all of

0:30:27.240 --> 0:30:29.800
<v Speaker 1>a sudden, three strings collapse and the hand comes dangling

0:30:29.840 --> 0:30:33.360
<v Speaker 1>down on the body, and medicine tries to prop that

0:30:33.440 --> 0:30:36.840
<v Speaker 1>piece up, and in doing so now two more strings

0:30:36.840 --> 0:30:39.360
<v Speaker 1>get cut, and the foot collapses, and when

0:30:39.400 --> 0:30:41.800
<v Speaker 1>a certain number collapse, nothing can be done. So

0:30:41.840 --> 0:30:45.680
<v Speaker 1>it's a fundamental failure of homeostasis that makes death very

0:30:45.680 --> 0:30:49.080
<v Speaker 1>hard to imagine, to conceive. And of course there is an

0:30:49.080 --> 0:30:53.920
<v Speaker 1>emotional component to this, but unlike human doctors, AI doesn't

0:30:53.920 --> 0:30:57.680
<v Speaker 1>get distracted by emotion. It looks at evidence and historical

0:30:57.800 --> 0:31:02.160
<v Speaker 1>data to establish patterns. The algorithms actually do quite well

0:31:02.200 --> 0:31:05.880
<v Speaker 1>in predicting death. What is it attaching weight to? Is

0:31:05.920 --> 0:31:08.520
<v Speaker 1>it a combination of things? Is it the fact that

0:31:08.560 --> 0:31:11.480
<v Speaker 1>someone has a brain metastasis and has a slight rise

0:31:11.520 --> 0:31:15.600
<v Speaker 1>in some blood value of some sort that predicts that

0:31:15.640 --> 0:31:17.440
<v Speaker 1>this person is likely to do very badly in the

0:31:17.480 --> 0:31:20.160
<v Speaker 1>next few days. You know, as you refine it further

0:31:20.200 --> 0:31:22.600
<v Speaker 1>and further, many subtle things might start coming up that

0:31:22.640 --> 0:31:24.320
<v Speaker 1>we don't know about, and those will be the most

0:31:24.360 --> 0:31:29.800
<v Speaker 1>interesting ones. It's not just additive.
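
NOTE
A small sketch of what "not just additive" means for a risk model, with entirely synthetic data and made-up factor names: two individually weak factors whose combination carries most of the risk, which a flexible model can learn where a purely additive one would miss it.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(3)
n = 2000

# Two invented binary factors; alone each adds a little risk,
# together they add a lot (the interaction, not the sum, dominates).
metastasis = rng.integers(0, 2, n)
lab_rising = rng.integers(0, 2, n)
risk = 0.05 + 0.05 * metastasis + 0.05 * lab_rising \
       + 0.60 * (metastasis & lab_rising)
died = rng.random(n) < risk

model = GradientBoostingClassifier().fit(
    np.column_stack([metastasis, lab_rising]), died)

for combo in [(0, 0), (1, 0), (0, 1), (1, 1)]:
    print(combo, round(model.predict_proba([combo])[0, 1], 2))
# The (1, 1) estimate far exceeds the sum of the single-factor bumps.
```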

0:31:29.880 --> 0:31:32.560
<v Speaker 1>That phrase, it's not just additive, is important to me, Kara, because it connects

0:31:32.560 --> 0:31:35.280
<v Speaker 1>the dots between what Siddhartha is saying about predicting

0:31:35.320 --> 0:31:38.160
<v Speaker 1>time of death and what Andy Schwartz was saying about

0:31:38.200 --> 0:31:42.040
<v Speaker 1>getting better at decoding the human brain. They're both about understanding

0:31:42.080 --> 0:31:46.000
<v Speaker 1>systems where one plus one doesn't necessarily equal two, where

0:31:46.080 --> 0:31:50.160
<v Speaker 1>unexpected results emerge from complex systems. Thinking about this makes

0:31:50.160 --> 0:31:54.440
<v Speaker 1>me physically ill. Um, he's literally talking about predicting when

0:31:54.440 --> 0:31:57.959
<v Speaker 1>we will die, and it's mind-blowing. You know what, if

0:31:58.240 --> 0:32:01.040
<v Speaker 1>you could know when you will die, it could change how

0:32:01.080 --> 0:32:03.240
<v Speaker 1>we choose to live our daily lives. It's one of

0:32:03.280 --> 0:32:06.200
<v Speaker 1>the things I find most disturbing in this whole series. So

0:32:06.320 --> 0:32:08.400
<v Speaker 1>much of how we live and how we aspire and

0:32:08.440 --> 0:32:11.520
<v Speaker 1>what we hope for is connected to our uncertainty about

0:32:11.520 --> 0:32:13.760
<v Speaker 1>when we're going to die. That AI can change

0:32:13.760 --> 0:32:16.840
<v Speaker 1>all of that. It's part of this more global line

0:32:16.840 --> 0:32:19.320
<v Speaker 1>of thinking, which is that AI kind of takes the

0:32:19.360 --> 0:32:22.000
<v Speaker 1>fun out of things. Well, I mean, and also

0:32:22.360 --> 0:32:25.200
<v Speaker 1>a big part of Western culture is the Bible and

0:32:25.440 --> 0:32:29.160
<v Speaker 1>the fruit of that forbidden tree. And in Paradise Lost,

0:32:29.200 --> 0:32:31.920
<v Speaker 1>there's this warning to know, to know no more, and

0:32:31.960 --> 0:32:34.400
<v Speaker 1>there's always been this idea in history that there's something

0:32:34.440 --> 0:32:39.040
<v Speaker 1>magic about not knowing. Yeah, you know, Milton aside. Okay, okay, okay,

0:32:39.120 --> 0:32:42.520
<v Speaker 1>now you go. We can start to think about what

0:32:42.640 --> 0:32:46.200
<v Speaker 1>this might mean more practically. Lifespan data is directly linked

0:32:46.200 --> 0:32:49.280
<v Speaker 1>to life insurance policies. You know, this would significantly change

0:32:49.320 --> 0:32:52.320
<v Speaker 1>how much people pay. You also think about personal injury law,

0:32:52.360 --> 0:32:55.480
<v Speaker 1>which takes into account how long somebody is going to live,

0:32:55.880 --> 0:32:58.120
<v Speaker 1>so you can determine how much money they should get

0:32:58.160 --> 0:33:00.680
<v Speaker 1>for loss of quality of life. These are things that

0:33:00.720 --> 0:33:04.600
<v Speaker 1>would be greatly impacted by people knowing when they're going

0:33:04.640 --> 0:33:08.320
<v Speaker 1>to die. Right.
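
To make that concrete, here's a minimal sketch of the kind of arithmetic a lifespan prediction would plug into. Everything here is invented for illustration: the annual figure, the discount rate, and the idea of an "AI lifespan model" are hypothetical stand-ins, not anything described in the episode.

```python
# Hypothetical sketch: how a predicted remaining lifespan might feed a
# personal-injury award. All figures are invented for illustration.

def present_value_of_lost_years(annual_value, years_remaining, discount_rate=0.03):
    """Sum the discounted value of each remaining year of life."""
    return sum(annual_value / (1 + discount_rate) ** t
               for t in range(1, years_remaining + 1))

# A standard life table might assume ~30 years remaining; a hypothetical
# AI model might predict only 12 for the same person.
for years in (30, 12):
    award = present_value_of_lost_years(annual_value=50_000, years_remaining=years)
    print(f"{years} years remaining -> award of about ${award:,.0f}")
```

The same prediction cuts both ways: it would reprice an insurance premium just as it would reprice a damages award.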

0:33:08.440 --> 0:33:10.959
<v Speaker 1>So, Siddhartha calls death the ultimate black box, which I think is actually the perfect description of a black box.

0:33:11.520 --> 0:33:14.400
<v Speaker 1>We know we will die, we just don't know how,

0:33:14.440 --> 0:33:17.160
<v Speaker 1>and we don't know when. And with AI it's similar.

0:33:17.480 --> 0:33:20.360
<v Speaker 1>We know we'll get a result, an endpoint, but we

0:33:20.400 --> 0:33:23.120
<v Speaker 1>don't know exactly how our input factors are combined to

0:33:23.160 --> 0:33:26.280
<v Speaker 1>get there. Ironically, although AI itself is a black box,

0:33:26.360 --> 0:33:29.720
<v Speaker 1>it's helping us unpack other black boxes, death being the

0:33:29.800 --> 0:33:33.360
<v Speaker 1>ultimate one, the brain being another. And then there's the

0:33:33.440 --> 0:33:36.560
<v Speaker 1>human genome, the unique pattern of our DNA that makes

0:33:36.600 --> 0:33:39.520
<v Speaker 1>each of us us. And we've mapped the genome, but

0:33:39.560 --> 0:33:42.040
<v Speaker 1>there are a lot of concerns about decoding it, that

0:33:42.080 --> 0:33:44.880
<v Speaker 1>it might be a sort of Pandora's box. Well, there's

0:33:44.880 --> 0:33:47.520
<v Speaker 1>that story about the scientist in China who edited the

0:33:47.560 --> 0:33:51.080
<v Speaker 1>genes of two babies using CRISPR, and all the ethical

0:33:51.360 --> 0:33:55.080
<v Speaker 1>concerns of creating gene-edited babies. Exactly. So I spoke with

0:33:55.080 --> 0:33:58.480
<v Speaker 1>Andy Schwartz about actual progress being made in decoding the

0:33:58.560 --> 0:34:02.920
<v Speaker 1>human genome. If you go back to the late nineteen nineties,

0:34:03.000 --> 0:34:05.960
<v Speaker 1>and the race was on to discover the human genome,

0:34:06.640 --> 0:34:09.520
<v Speaker 1>and the sound bites were, as soon as we understand

0:34:09.680 --> 0:34:13.160
<v Speaker 1>all the genes, we can cure disease. So, for instance,

0:34:13.440 --> 0:34:16.000
<v Speaker 1>there was a breast cancer gene, and there was an

0:34:16.000 --> 0:34:19.200
<v Speaker 1>Alzheimer's gene, and if we just knew what those genes were,

0:34:19.680 --> 0:34:23.200
<v Speaker 1>we'd be able to eradicate these diseases. Well, it's been

0:34:23.440 --> 0:34:29.959
<v Speaker 1>twenty years now and we're just beginning perhaps to get

0:34:30.080 --> 0:34:33.799
<v Speaker 1>some sort of genome based therapies that might address some

0:34:33.880 --> 0:34:36.840
<v Speaker 1>of these. And what we've found is that there's no

0:34:36.960 --> 0:34:40.640
<v Speaker 1>simple cause and effect. Very rarely are there simple gene

0:34:40.640 --> 0:34:46.400
<v Speaker 1>defects correlated with disease. Rather, these diseases have hundreds of genetic

0:34:46.760 --> 0:34:50.960
<v Speaker 1>bases, and each of those is relatively weak. But combined together,

0:34:51.040 --> 0:34:55.200
<v Speaker 1>they generate these diseases. And so it becomes a computational

0:34:55.280 --> 0:34:58.040
<v Speaker 1>problem and we start looking again at this as a

0:34:58.080 --> 0:35:02.520
<v Speaker 1>complex system where causality is no longer clear. And these

0:35:02.560 --> 0:35:05.520
<v Speaker 1>are the complex computational problems we're getting better and better

0:35:05.600 --> 0:35:09.600
<v Speaker 1>at solving.
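
As a rough sketch of the simplest version of what Andy describes, here's an additive polygenic risk score: hundreds of variants, each with a tiny effect, summed into one number. The effect sizes and genotypes below are random stand-ins, not real GWAS data.

```python
import numpy as np

rng = np.random.default_rng(0)

n_variants = 300
effect_sizes = rng.normal(0.0, 0.02, n_variants)  # each effect is weak on its own
genotypes = rng.integers(0, 3, n_variants)        # 0, 1, or 2 copies of each risk allele

# Combined, the weak effects add up to one risk score per person.
risk_score = float(genotypes @ effect_sizes)
print(f"additive polygenic risk score: {risk_score:+.3f}")
```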

0:35:09.800 --> 0:35:13.560
<v Speaker 1>Siddhartha himself is interested in exactly this area, the confluence of genetics and computation. In fact, in twenty

0:35:13.680 --> 0:35:17.000
<v Speaker 1>eighteen he gave a talk at Vanderbilt University called From

0:35:17.120 --> 0:35:21.759
<v Speaker 1>Artificial Intelligence to Genomic Intelligence, and it's an area where

0:35:21.760 --> 0:35:25.840
<v Speaker 1>we're making rapid progress. The first papers are just starting

0:35:25.840 --> 0:35:29.040
<v Speaker 1>to appear; they're in preprint. One of them is extraordinarily interesting.

0:35:29.600 --> 0:35:33.080
<v Speaker 1>It appears to be able to predict height based on

0:35:33.239 --> 0:35:36.680
<v Speaker 1>an algorithm and genomic information. Tall parents tend to

0:35:36.719 --> 0:35:39.120
<v Speaker 1>produce tall children; short parents tend to produce short children.

0:35:39.239 --> 0:35:41.840
<v Speaker 1>But we did not have ways to predict based on

0:35:41.880 --> 0:35:44.440
<v Speaker 1>genetic information what your actual height was going to be.

0:35:44.960 --> 0:35:47.160
<v Speaker 1>The question becomes, well, how do you take the genome

0:35:47.200 --> 0:35:49.919
<v Speaker 1>and out pops height from it? If you can do that,

0:35:49.920 --> 0:35:52.480
<v Speaker 1>that means you can take a fetal genome in utero

0:35:52.840 --> 0:35:55.440
<v Speaker 1>and predict this person's future height. You know, based on

0:35:55.520 --> 0:35:58.200
<v Speaker 1>these first few papers that I read about this arena,

0:35:58.480 --> 0:36:01.440
<v Speaker 1>you require deep learning to do this. I wanted to

0:36:01.520 --> 0:36:05.960
<v Speaker 1>understand why this prediction of height requires deep learning algorithms,

0:36:06.360 --> 0:36:09.800
<v Speaker 1>not simple ones like Andy used to interpret Jan's brain waves.

0:36:10.360 --> 0:36:13.040
<v Speaker 1>It's not just additive, where you can just add up across

0:36:13.239 --> 0:36:16.360
<v Speaker 1>multiple variations in the genome and arrive at a risk score.

0:36:16.760 --> 0:36:19.360
<v Speaker 1>It's that there are interactions between genes that have to

0:36:19.360 --> 0:36:23.239
<v Speaker 1>be captured.
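
A toy illustration of that point, with invented numbers: suppose two variants are harmless on their own but risky in combination. No weighted sum of the two can reproduce that pattern, while adding the interaction term, the kind of structure a deep network can discover without being told, fits it exactly.

```python
import numpy as np

# Two variants; risk appears only when both are present (pure epistasis).
genotypes = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
risk = np.array([0.0, 0.0, 0.0, 1.0])

# Best purely additive model, risk ~ w1*g1 + w2*g2 + b, fit by least squares.
X_add = np.column_stack([genotypes, np.ones(4)])
w_add, *_ = np.linalg.lstsq(X_add, risk, rcond=None)
print("additive fit, worst error:", np.abs(X_add @ w_add - risk).max())  # 0.25

# Add the interaction term g1*g2 and the fit becomes exact.
X_int = np.column_stack([genotypes, genotypes.prod(axis=1), np.ones(4)])
w_int, *_ = np.linalg.lstsq(X_int, risk, rcond=None)
print("with interaction, worst error:", np.abs(X_int @ w_int - risk).max())  # ~0
```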

0:36:23.560 --> 0:36:26.120
<v Speaker 1>Again, these are early days for artificial intelligence being unleashed on genomes, but it seems to me that

0:36:26.160 --> 0:36:30.479
<v Speaker 1>complex problems of genetic architecture will soon be predictable using

0:36:30.520 --> 0:36:34.920
<v Speaker 1>these kinds of algorithms, and that ability to predict raises

0:36:35.040 --> 0:36:37.680
<v Speaker 1>huge questions for all of us. If you want to

0:36:37.680 --> 0:36:39.799
<v Speaker 1>know the height of your unborn child, or you want

0:36:39.800 --> 0:36:42.640
<v Speaker 1>to know the risk of dyslexia, those questions are almost

0:36:42.640 --> 0:36:47.160
<v Speaker 1>certain to lead to extraordinarily acrimonious public conversations about

0:36:47.200 --> 0:36:49.440
<v Speaker 1>what should be done and what shouldn't be done in

0:36:49.560 --> 0:36:51.839
<v Speaker 1>terms of accessing the data, who's to store the data,

0:36:52.000 --> 0:36:54.200
<v Speaker 1>how much privacy we should have about it, and how

0:36:54.280 --> 0:36:56.840
<v Speaker 1>much it will distort human culture to have these pieces

0:36:56.840 --> 0:37:00.160
<v Speaker 1>of knowledge. Um, so, if you think that knowing clinically

0:37:00.200 --> 0:37:03.160
<v Speaker 1>when you're going to die is going to distort culture,

0:37:03.680 --> 0:37:06.000
<v Speaker 1>then knowing how tall your child is going to be

0:37:06.080 --> 0:37:08.840
<v Speaker 1>in the future will also distort human culture. We haven't

0:37:08.920 --> 0:37:11.279
<v Speaker 1>ever lived in a place or a space or a

0:37:11.280 --> 0:37:14.040
<v Speaker 1>time when that knowledge has been predictable from a fetus.

0:37:14.760 --> 0:37:18.520
<v Speaker 1>Artificial intelligence is giving us incredible power to see into

0:37:18.560 --> 0:37:22.319
<v Speaker 1>the future, to ask and answer questions about the generations

0:37:22.360 --> 0:37:25.600
<v Speaker 1>to come. But it is up to us, our generation,

0:37:25.920 --> 0:37:28.600
<v Speaker 1>to decide how we want to use this awesome power.

0:37:29.320 --> 0:37:32.359
<v Speaker 1>But one thing that artificial neural networks can't do is

0:37:32.520 --> 0:37:38.279
<v Speaker 1>define principles. They can only work on classifying things that

0:37:38.320 --> 0:37:41.120
<v Speaker 1>we tell them to classify. There is still a human

0:37:41.200 --> 0:37:44.319
<v Speaker 1>telling the artificial neural network what it should be doing.
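
As a minimal sketch of that point, here a simple off-the-shelf classifier stands in for the neural network; the label set, the labeled examples, and the features are all made up, and all supplied by a human.

```python
from sklearn.linear_model import LogisticRegression

LABELS = ["benign", "malignant"]  # a human decided these are the only categories

X_train = [[0.2, 0.1], [0.9, 0.8], [0.3, 0.2], [0.8, 0.9]]  # invented features
y_train = [0, 1, 0, 1]                                      # human-supplied labels

clf = LogisticRegression().fit(X_train, y_train)

# The model can only sort new cases into the boxes we defined above.
print(LABELS[clf.predict([[0.85, 0.7]])[0]])
```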

0:37:45.000 --> 0:37:49.000
<v Speaker 1>There is something very fundamental about the human brain, a

0:37:49.080 --> 0:37:52.600
<v Speaker 1>scientist's brain, a doctor's brain, an artist's brain that asks

0:37:52.719 --> 0:37:55.759
<v Speaker 1>questions in a fundamentally different manner, the why question. Why

0:37:55.800 --> 0:37:59.120
<v Speaker 1>did this happen in this person in this time? Why

0:37:59.160 --> 0:38:01.120
<v Speaker 1>does the melanoma appear in the first place? What is

0:38:01.120 --> 0:38:04.440
<v Speaker 1>the molecular basis of that appearance? The most interesting mysteries

0:38:04.440 --> 0:38:07.160
<v Speaker 1>of medicine remain mysteries that have to do with the why,

0:38:07.600 --> 0:38:11.040
<v Speaker 1>and despite being at the absolute cutting edge of medical research,

0:38:11.440 --> 0:38:15.600
<v Speaker 1>Siddhartha's most important guiding principle was written in ancient Greek

0:38:15.960 --> 0:38:20.080
<v Speaker 1>over two thousand years ago. Remember, the Hippocratic oath begins, first,

0:38:20.160 --> 0:38:23.200
<v Speaker 1>do no harm. It's maybe the single profession where the

0:38:23.239 --> 0:38:25.480
<v Speaker 1>oath of the profession is in the negative. And this

0:38:25.560 --> 0:38:28.840
<v Speaker 1>is for a reason. It's for a profound reason in medicine,

0:38:28.920 --> 0:38:32.040
<v Speaker 1>because we're intervening on bodies, because we're intervening on homeostasis,

0:38:32.040 --> 0:38:36.000
<v Speaker 1>because we're intervening on cultures. Effectively, the capacity to do

0:38:36.160 --> 0:38:39.799
<v Speaker 1>harm arises very quickly, and so the first do no

0:38:39.880 --> 0:38:42.760
<v Speaker 1>harm injunction in the Hippocratic oath is an important

0:38:42.760 --> 0:38:45.600
<v Speaker 1>thing to keep in my own mind. Um, you know,

0:38:45.680 --> 0:38:48.120
<v Speaker 1>what are the harms that arise if I were to

0:38:48.160 --> 0:38:50.960
<v Speaker 1>start knowing my risks of future disease, not just what

0:38:51.040 --> 0:38:54.440
<v Speaker 1>advantages would I get in society. And this battle is

0:38:54.480 --> 0:38:56.560
<v Speaker 1>happening in my mind, I assume in the minds of

0:38:56.600 --> 0:38:59.480
<v Speaker 1>virtually every doctor. As we move forward into this uh

0:39:00.400 --> 0:39:05.880
<v Speaker 1>beautiful and perilous future, in a world where our choices

0:39:05.920 --> 0:39:08.839
<v Speaker 1>can create new beauty but also new peril, it's

0:39:08.880 --> 0:39:11.359
<v Speaker 1>important that we move into that future with real care.

0:39:11.880 --> 0:39:15.240
<v Speaker 1>And it's something Siddhartha has thought a lot about personally, because,

0:39:15.400 --> 0:39:19.160
<v Speaker 1>like Sebastian Thrun, he has a family history of heritable conditions.

0:39:19.920 --> 0:39:24.439
<v Speaker 1>The risk is of schizophrenia and bipolar disorder, and

0:39:24.600 --> 0:39:28.160
<v Speaker 1>right now the algorithms to predict this still don't exist.

0:39:29.080 --> 0:39:32.000
<v Speaker 1>As the project of sequencing lots of genomes and asking

0:39:32.040 --> 0:39:35.239
<v Speaker 1>what diseases people have matures, this data set will become

0:39:35.280 --> 0:39:38.839
<v Speaker 1>available maybe five to ten years from now. I will be

0:39:39.320 --> 0:39:42.120
<v Speaker 1>past the period I suppose where that will make a difference.

0:39:42.160 --> 0:39:44.600
<v Speaker 1>But to my children and my grandchildren, it might make

0:39:44.600 --> 0:39:47.480
<v Speaker 1>a difference, and they'll have to make that decision. I

0:39:47.480 --> 0:39:50.719
<v Speaker 1>will advise them individually, and it will depend on humanistic

0:39:50.800 --> 0:39:54.680
<v Speaker 1>understanding of what an individual's desire to understand their own

0:39:54.760 --> 0:40:00.480
<v Speaker 1>risk is. There's no algorithm that predicts that understanding. As

0:40:00.520 --> 0:40:03.400
<v Speaker 1>AI advances, we're being faced with more and more urgent

0:40:03.520 --> 0:40:06.960
<v Speaker 1>ethical choices. This, in turn, may put a new emphasis

0:40:06.960 --> 0:40:10.719
<v Speaker 1>on the humanities, or, as Kai Fuey suggested, place a

0:40:10.760 --> 0:40:15.520
<v Speaker 1>new premium on personal attention, human interaction, and emotional care.

0:40:17.320 --> 0:40:21.560
<v Speaker 1>Once we give up some of the diagnostic pattern recognition

0:40:21.600 --> 0:40:24.279
<v Speaker 1>material to machines, it will be time to play. It

0:40:24.320 --> 0:40:26.719
<v Speaker 1>will be the time to play in the arena of

0:40:27.000 --> 0:40:30.840
<v Speaker 1>human therapeutics, human biology, the complexity of the human interaction,

0:40:30.880 --> 0:40:34.239
<v Speaker 1>the art of medicine. My hope is that medicine, being

0:40:34.239 --> 0:40:38.560
<v Speaker 1>more playful, will become more compassionate, more able to take

0:40:38.600 --> 0:40:43.680
<v Speaker 1>into account individuals and their individual destinies rather than bucketing

0:40:43.719 --> 0:40:47.200
<v Speaker 1>people in big categories. It means having more time to

0:40:47.280 --> 0:40:51.080
<v Speaker 1>spend with humans. You know, we are so constrained by

0:40:51.160 --> 0:40:55.520
<v Speaker 1>time that even compassion gets three minutes. We won't become

0:40:56.000 --> 0:40:59.800
<v Speaker 1>more robotic, we'll become less robotic as the robots come

0:41:00.080 --> 0:41:04.240
<v Speaker 1>into their own. What Siddhartha describes is the holy grail

0:41:04.400 --> 0:41:07.560
<v Speaker 1>of the AI revolution. Could it allow us to be

0:41:07.680 --> 0:41:12.200
<v Speaker 1>more human, to be better doctors, more fulfilled workers, and

0:41:12.360 --> 0:41:15.759
<v Speaker 1>greater artists? Could it take routine work out of our

0:41:15.800 --> 0:41:18.560
<v Speaker 1>hands and allow us to take better care of each other?

0:41:20.040 --> 0:41:23.040
<v Speaker 1>It's a compelling vision, but as always, it has a

0:41:23.160 --> 0:41:26.520
<v Speaker 1>dark side. While most doctors are guided by their Hippocratic

0:41:26.560 --> 0:41:30.319
<v Speaker 1>oath, do no harm, there's no guarantee that new technologies

0:41:30.360 --> 0:41:33.360
<v Speaker 1>will stay in the right hands. The line between healing

0:41:33.480 --> 0:41:37.400
<v Speaker 1>and upgrading our bodies is thin and contested, and as

0:41:37.440 --> 0:41:41.359
<v Speaker 1>AI improves, we can begin to translate desires directly from

0:41:41.400 --> 0:41:45.040
<v Speaker 1>brain activity, modify the physical traits of our children through

0:41:45.120 --> 0:41:49.960
<v Speaker 1>gene editing, and accurately predict when we will die. In

0:41:50.000 --> 0:41:52.400
<v Speaker 1>the next episode, we ask what does all of this

0:41:52.600 --> 0:41:56.239
<v Speaker 1>mean for our future as a species. We speak to

0:41:56.239 --> 0:41:59.400
<v Speaker 1>the world's leading thinker on these questions, Yuval Noah

0:41:59.640 --> 0:42:04.520
<v Speaker 1>Harari, author of Sapiens and Homo Deus. I'm Oz Woloshyn. See

0:42:04.560 --> 0:42:19.839
<v Speaker 1>you next time. Sleepwalkers is a production of I Heart

0:42:19.920 --> 0:42:24.440
<v Speaker 1>Radio and Unusual Productions. For the latest AI news, live interviews,

0:42:24.480 --> 0:42:27.520
<v Speaker 1>and behind the scenes footage, find us on Instagram, at

0:42:27.520 --> 0:42:34.160
<v Speaker 1>Sleepwalkers Podcast or at sleepwalkerspodcast dot com. Sleepwalkers is

0:42:34.160 --> 0:42:37.520
<v Speaker 1>hosted by me, Oz Woloshyn, and co-hosted by me, Kara Price,

0:42:37.680 --> 0:42:40.600
<v Speaker 1>produced by Julian Weller, with help from Jacopo Penzo

0:42:40.760 --> 0:42:44.320
<v Speaker 1>and Taylor Chicoine, mixing by Tristan McNeil and Julian Weller.

0:42:44.600 --> 0:42:48.320
<v Speaker 1>Our story editor is Matthew Riddle. Recording assistance this episode

0:42:48.360 --> 0:42:51.240
<v Speaker 1>from Joe and Luna, Sabrina Boden, and Joseph Friedman.

0:42:51.440 --> 0:42:55.600
<v Speaker 1>Sleepwalkers is executive produced by me, Oz Woloshyn, and Mangesh Hattikudur.

0:42:55.760 --> 0:42:57.799
<v Speaker 1>For more podcasts from I Heart Radio, visit the I

0:42:57.880 --> 0:43:00.799
<v Speaker 1>Heart Radio app, Apple Podcasts, or wherever you listen to

0:43:00.840 --> 0:43:01.680
<v Speaker 1>your favorite shows.