Sleepwalkers is a production of iHeartRadio and Unusual Productions.

Hi, I'm Oz. And I'm Kara Price. Welcome to a special bonus episode of Sleepwalkers.

Well, Kara, it's very good to be back in the office. It is good to be back. And I just want to apologize to our listeners: we don't have an algorithm that's going to create season two, so things have taken a little longer. Yes, unfortunately, AI can't do everything yet. But we are hard at work on season two of Sleepwalkers, and we're focusing on stories that really contextualize the implications of AI: what it's doing, what the future is, and how it's affecting us. You know, we had such a good time in season one of Sleepwalkers wrapping our heads around the meaning of artificial intelligence. It comes down basically to a principle of statistics, prediction: how we're using data to inform our decisions, and how that's becoming ingrained in products and services and everything we do, really. It's true, the Pandora's box of AI has been opened, but we still have the black box problem. That is true. Explainable AI: we can't tell what neural networks are doing yet, but people are working on it, and that's a story we're going to be covering closely in season two. But in this bonus episode of Sleepwalkers, we're going to take a look back at some of the most poignant stories and interesting applications of AI that we talked about in the first season. And later in this episode, I'm going to give you a preview of a fascinating new company that's using AI to predict very specific consumer tastes, as in preferences, not like tasting clothing. Before we get to that, though, Oz, when you look back at season one, what stands out to you the most? One of the things that stands out to me most is the story you did about Lyrebird. Thank you. But seriously, the way they use an algorithm to create a deepfake of your voice.
But in that particular piece, the questions it raised about mortality, and whether you would want to hear your father's voice from beyond the grave, have stuck with me, and it was one of the most powerful moments in the whole podcast. Yeah. So, Jose Sotelo, one of the founders of Lyrebird, had me sit down at a microphone for an hour and just speak, which was a personal dream of mine. And then, using that... that's me interrupting... using that voice data, Lyrebird's algorithms created a version of my voice that could turn any written text into something that sounded like me. But Jose actually explained it better, so can you hit the clip? I'm not an AI scientist, but we do have the sophistication to roll tape.

I know it might sound a bit like magic, but in reality, the way that our algorithms work is basically that they are just pattern-matching algorithms. They're trying to figure out how to identify the patterns in your voice by comparing it against thousands of other voices, actually tens of thousands of other voices, and trying to figure out what it is that makes your voice unique.

Once Jose's algorithms identified what was unique about my voice, they had the building blocks they needed to make a fake. Then we sent Jose a set of sentences we wanted robot Kara to say, and he used another set of algorithms to turn the text into what we heard. The way they do this is they use what's called a generative adversarial network, a GAN, which is a system where one neural net tries to trick another one a thousand times per second. So each time the second network detects a fake, the first one tries again. It basically learns from its mistakes, and once it tricks its adversary, it's ready to show its results. In our case, Lyrebird pitted my fake voice against my real voice until it sounded like this: [clip of Kara's synthetic voice].
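For listeners who want the mechanics, here is a minimal sketch of the adversarial loop Jose describes, written in Python with PyTorch. It is only an illustration, not Lyrebird's actual system: the network sizes, the stand-in "voice features," and the training details are all invented for this example.

```python
# A toy generative adversarial network in the spirit of what Jose describes:
# a generator tries to fool a discriminator, and each time the discriminator
# catches a fake, the generator adjusts and tries again.
# Purely illustrative; not Lyrebird's actual system.
import torch
import torch.nn as nn

DIM = 64  # stand-in for a frame of voice features

generator = nn.Sequential(nn.Linear(16, 128), nn.ReLU(), nn.Linear(128, DIM))
discriminator = nn.Sequential(nn.Linear(DIM, 128), nn.ReLU(), nn.Linear(128, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

def real_voice_batch(n=32):
    # Placeholder for features extracted from real recordings.
    return torch.randn(n, DIM) + 2.0

for step in range(1000):
    real = real_voice_batch()
    fake = generator(torch.randn(real.size(0), 16))

    # Discriminator: learn to tell real speech features from the generator's fakes.
    d_loss = loss_fn(discriminator(real), torch.ones(real.size(0), 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(real.size(0), 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: learn from its mistakes until the discriminator is fooled.
    g_loss = loss_fn(discriminator(fake), torch.ones(real.size(0), 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

The core of the idea is the two optimizers pulling in opposite directions: the discriminator gets better at catching fakes, which forces the generator to produce ever more convincing ones.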
One of the reasons why this story has stuck with me is because it feels like we're just at the beginning of tapping the potential, and the potential for harm, of deepfakes. 2019 may be remembered as the year of the first significant deepfake crime: an employee at a UK-based energy firm believed he was on the phone with his boss and followed instructions to transfer two hundred thousand pounds to a scammer's bank account. That certainly won't be the last deepfake fraud we hear of, and it raises questions about responsibility and accountability. Who's liable in a case like this? Facebook has actually gone so far as to create a deepfake detection challenge to get the best minds thinking about deepfakes and how we might solve the problem, and it's offering, like, a million-dollar prize. A rich prize, which to Facebook is like a dollar, but it also shows how important the issue is, especially when a company like Facebook gets behind it. There's another side to deepfake technology that actually highlights this dichotomy in technology right now, which is that it can be used for menacing purposes but also for really powerful and beautiful applications. Jose goes on to talk about how Lyrebird can be used with ALS patients and give them back the ability to speak when they've lost it, when it could give them the opportunity to speak to their children again in a version of their own voice, which is quite profound. One area where I think technology is a powerhouse for change is in medicine. Technologists and doctors alike are looking at AI to predict, treat, and diagnose, you know, everything from depression to cancer, and that's a very wide spectrum. And it reminds me of one of my favorite interviews that you did, which was your interview with Siddhartha Mukherjee.
I was just so fascinated by this article he'd written for The New Yorker, called "A.I. Versus M.D.," which laid out all of the potential benefits and applications of using AI in medicine, including some of the downsides, such as the black box problem of AI that you mentioned, not knowing why an algorithm has made a recommendation, and also another problem, which is that if we rely too much on technology, it can erode human skills. There is a fear that AI could move us into a very black and white way of thinking: the computer says you have cancer, or the computer says you have to have your liver removed. Siddhartha, who is one of the world's foremost oncologists and a Pulitzer Prize-winning author, provided a different perspective.

There is something very fundamental about the human brain, a scientist's brain, a doctor's brain, an artist's brain, that asks questions in a fundamentally different manner: the why question. Why did this happen in this person at this time? Why does the melanoma appear in the first place? What is the molecular basis of that appearance? The most interesting mysteries of medicine remain mysteries that have to do with the why. Once we give up some of the diagnostic pattern recognition material to machines, it will be the time to play in the arena of human therapeutics, human biology, the complexity of the human interaction, the art of medicine. My hope is that medicine, in being more playful, will become more compassionate, more able to take into account individuals and their individual destinies rather than bucketing people into big categories. It means having more time to spend with humans. You know, we are so constrained by time that even compassion gets three minutes. We won't become more robotic; we'll become less robotic as the robots enter our world.

Siddhartha's point is that these tools could make doctors more efficient so that they can provide better care.
It sort of takes the grunt work out of medicine and puts the patient care back in the doctor's hands. This idea that technology can actually allow us to be more human, make us more empathetic, is fascinating, and it also raises questions about new types of skills that may need to be developed in an age of AI. Yeah, and Regina Barzilay from MIT spoke a lot about this: how doctors now have to equip themselves with new ways of translating data to patients.

We still do not communicate it to the patient, because I think now there is work to be done, not on the computer science or AI part, but really on the clinical side. What is the best way to communicate it to the patient, and what is, um, you know, the path that you're going to give them? It is not just enough to say, you know, you are high risk. You need to propose some suggestion and solution. So currently the clinical staff is thinking and looking at the ways of effective, you know, clinical engagement with a patient.

You know, speaking of data, Yuval Noah Harari was another person who made you and me think about humans as reducible to data. I think he's mostly known as a historian and for his book Sapiens, but you spoke with him about the data we produce as humans and how that influences our relationship with technology. That's right, which is the topic of his book Homo Deus. And he has this phrase, Dataism, to describe how we've kind of come to worship the data we create and our own technological creations. So what happens when, based on all of our past behavior, AI starts to know us better than we know ourselves? Here's a clip from Yuval talking about exactly that.

When we talk about AI, we tend to greatly exaggerate its potential abilities, but at the same time we also tend to exaggerate the abilities of humans. People say that AI is not going to take over our lives because it's very imperfect and it won't be able to know us perfectly.
But what people forget is that humans often have a very poor understanding of themselves, of their desires, of their emotions, of their mental states. For AI to take over your life, it doesn't need to know you perfectly. It just needs to know you better than you know yourself. And that's not very difficult, because we often don't know the most important things about ourselves.

So let's say you could turn back the clock to being fifteen. Would you have wanted to live in a world where there were sufficiently good sensors to monitor your eyes, your eye movements, your breathing, you know, while you're going about your daily life, and then to interpret that and say to you: Yuval, more likely than not, you're gay? That's a very good question, which will become a very practical question in a few years. Given the way that I grew up and developed, it would have been a very bad idea. I wouldn't like to receive this kind of insight from a machine. I'm not sure how I would have dealt with it when I was fifteen, you know, in Israel in the nineteen eighties, and maybe partly it was, you know, a defense mechanism. In the future, too, it depends where you live. Brunei has instituted the death penalty for gay people, at least for people engaged in homosexual sex. So if I'm a teenager in Brunei, I don't want to be told by the computer that I'm gay, because the computer will then be able to tell that to the police and to the authorities as well.

So the apps we use, the products we buy, the number of steps we take, the delivery I ordered last night: that all becomes data, and that data can feed into neural networks to create statistical models of us and what we might do next, sometimes in order to diagnose a medical condition, and other times to sell us a product. Here's Yuval again, looking to the future, say ten or twenty years out.
The danger is, if I still don't know that I'm gay, but the government and Coca-Cola and Amazon and Google, they already know it, I'm at a huge disadvantage. It could be something as frightening as the secret police coming and taking me to a concentration camp. But it could also be something like Coca-Cola knowing that I'm gay: they want to sell me a new drink, and they choose the advertisement with the shirtless guy and not the advertisement with the girl in the bikini. And the next morning I go and I buy this soft drink and I don't even know why, and they have this huge advantage over me and can manipulate me in all kinds of ways.

Well, since Yuval brought up soda: I was not allowed to drink soda as a child. My parents tricked me into thinking that seltzer was soda. I later found out that soda is soda and seltzer is water. And somehow the seltzer of it all is the perfect segue, because AI is not only being used to sell a product, it's also being used to create products like seltzer in the R&D, research and development, phase. And for this bonus episode, Julian (remember Julian, our lovely producer?) and I went to meet the company behind the Gastrograph app, which is using consumer preference data to make predictions about new flavors people might like. I think you hate nettle. Analytical Flavor Systems, or AFS, is tucked away down a side street in Chinatown, up in a third-story walk-up. It is, actually. And when we were there, the office was still waiting for furniture. You know, it had this we're-going-to-disrupt-the-industry vibe. We moved in like a month or two ago, but it turns out you can't just buy office furniture. We met the founder of the company, Jason Cohen. We believe that in the future, in order to be competitive, you have to be targeted. That there won't be any billion-dollar brands, right, in ten years. If you don't have an MBA,
here's what Jason's talking about. Think about the coffee you drank this morning: is it third wave? Did you get it from Starbucks or an indie roaster? Food and beverage companies are moving more and more towards niche markets. The problem is that they have very old-school ways of developing new products, but AFS is offering another way to reach those customers: by allowing companies to formulate more specific products using AI. Here's Jason again.

Usually the way that things are done today is you get some conceptual brief. You might say, we want to develop a new fruit-flavored beverage for Japanese millennial women, right? You would look at what other fruit-flavored beverages are out there. You'd look at your own product lineup and say, well, we already have a lemon flavor. And we're going to send out these briefs, and we're gonna send these out to these flavor companies, we're gonna see what other fruits we can get, and you're gonna wind up with very mainstream things. You're gonna wind up with peach and mango and strawberry and grapefruit, right? And maybe you'll wind up with something interesting like loquat or yuzu or dragon fruit, right? And then you're gonna have your own consumer tasting panel internally, hopefully, and you're gonna have them taste it, and they're gonna have to like some of those more than your current offering or more than a competitor's. After you've done all of this work, which sometimes costs in the tens of thousands of dollars to have the samples developed, have the samples sent to you, recruit the consumers, put the product in front of the consumers, that data is only ever usable once, right? All you get from that is a binary yes or no: sixty percent of the population liked it more than the competitor's. And so what we're doing is entirely different.

Jason's team wants to take product development out of the yes/no binary.
Instead of just asking Coke or Pepsi, they can calculate which parts of each soft drink people liked and disliked, and then AFS can make entirely new flavors based on what a person kind of liked about Pepsi and kind of liked about Coke. And finally, they can transfer those preferences to entirely different demographics. So what might someone in Mexico want to taste in their cola, compared to what a Japanese millennial woman might want to taste in her cola? This is what Jason believes is truly disruptive: being able to say to a brand, if you want to launch in Mexico, you should tweak your flavors in this way, because we're actually able to collect this data, develop a data set, and make predictions.

So we could take data from, say, white, twenty- to thirty-year-old, college-educated males, and use that to predict what every other population in the United States is going to perceive in that product. And so we're taking an industry that has never seen predictions of any kind before and finally being able to actually predict things: predict who's gonna like it, how much they're gonna like it, right, and what we can do to optimize it so that they like it more. We can actually create products that no one would have ever thought of, and no one ever would have thought that a segment of the population would have liked. And this is something that we now do with the companies that we work with. We did talk quite a bit about developing a pine-flavored beverage in Japan. When we first showed these results to a company there, they said, nan desu ka? Do you mean pineapple? Because it was just so out of the, yeah, out of the ordinary.

The way Jason and his team are able to get such nuanced data is with an app they developed called Gastrograph. Gastrograph looks a lot like the flavor wheels they use in coffee and wine tasting to help people map out what they taste when they try a new product.
We think of every flavor, aroma, and texture as a signature. You have the five basic flavors, bitter, sweet, salty, sour, umami, and then underneath that you're gonna have categorical flavors like fruity, earthy, herbaceous, nuts and seeds, roasted. (I should mention that he's a professional taster.) And then underneath that you can have subcategorical, like citrus; or specific, like lemon; or very specific, like Meyer lemon; or very, very specific, like Meyer lemon zest, right? So all of those signatures exist in some infinite-dimensional space: flavor space.

So, Kara, since you're such a seltzer fiend, we demoed Gastrograph and got a feel for the app by doing a seltzer tasting. Yeah, we tasted five seltzers that are already on the market, that you'd buy at, like, 7-Eleven, and we rated different components of them. So if I tasted fruit, I'd rate that from zero to five, and I could add adjectives like strawberry or mango. You know, I'm not into, like, seltzer 2.0, with, like, kumquat-flavored sparkling seltzer. You know me: plain, vintage. But AFS's resident chemist, Ryan, agreed to formulate a seltzer based on just our extremely small data set.

It'd probably be a pretty fast process. So we have tons of seltzer data. What we would want to do is run through a couple of different flavors to get an idea of the types of things that you like, build a model specifically around that, run an optimization, predict a new flavor that you've never had before, and then have you try it again.

So after we did all of that, we went home, we sat on our hands, and then we went back to the office, which actually had a little bit more furniture when we got back, to see if they actually could create a seltzer that both Julian and I would like. So it was a blind tasting. Here we go. All right, drink one: pear. I don't know why I have such a thing against pears. Number two: berries. That's delish. Let's see what we got here.
I wouldn't know huckleberry if it hit me in the face, honestly, but whatever. Number three: honestly, curry. Do you not taste curry? Like, there's spice? It's so interesting. Number one? Oh, I love it, like hops. It just tastes like grass to me. What are you? We're done.

So the first thing you guys should tell us is: which is your favorite? For me, it's number three, just because I like the complexity of flavor. Three was a beverage, too, that I'm used to and I've probably had before. Maybe three. I really enjoyed one. I did not like... I don't like those flavors. We have to reveal. We're almost at the reveal. But to clarify: your job as a company is to predict future products that people will enjoy and come back to. Yes. So in that regard, three was a winner. All right. Oh my god, tell us. So we had to do one product that was going to be optimal for both of you, and we got it: it was number three. They nailed it. You could say that we got flavor-hacked. Up here, this blue graph, this is saying that there's a, you know, a seventy percent chance that you would give this a six. Did I give it a six? I think I did. You did? You both gave it a six. So we were pretty confident on this, but we didn't have that level of confidence until we saw this. What I tasted is not something I ever predicted I would have liked, but it's absolutely something I will continue to think about. It was such a unique flavor, and it's actually something I would buy. It's just that I had never tasted anything like it before, so when I first tasted it, I was like, this is strange.

In the best-case scenario, Gastrograph's AI can help companies create foods that satisfy more specific tastes and even bring people more joy, and that's good for business. Instead of making huge bets and trying to market a product to an entire country, AFS has created a way to make more specialized bets and help companies tap into those niches.
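As an aside for the technically curious: AFS's models are proprietary, but the rate-then-model-then-optimize loop Ryan describes can be sketched in a few lines of Python with scikit-learn. Everything here, the feature names, the ratings, and the choice of a random forest, is an invented stand-in for illustration, not AFS's actual system.

```python
# A minimal sketch of the loop Ryan describes: rate a few flavor
# signatures, fit a preference model, then search flavor space for the
# formulation with the highest predicted liking. All data are made up.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Each flavor is a point in "flavor space": intensities 0-5 for a few
# of the dimensions a Gastrograph-style wheel uses.
FEATURES = ["sweet", "sour", "bitter", "fruity", "herbaceous", "roasted"]

tasted = np.array([
    [1, 3, 0, 4, 0, 0],   # pear-ish seltzer
    [2, 2, 0, 5, 0, 0],   # berry seltzer
    [1, 1, 1, 1, 3, 2],   # the curry-adjacent oddball
])
liking = np.array([4.0, 5.0, 6.0])  # one taster's 0-7 ratings

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(tasted, liking)

# Crude optimization: score many random candidate formulations and keep
# the one the model predicts this taster will like most.
rng = np.random.default_rng(0)
candidates = rng.uniform(0, 5, size=(10_000, len(FEATURES)))
best = candidates[model.predict(candidates).argmax()]
print(dict(zip(FEATURES, best.round(1))))
```

The real system works in a far richer flavor space, with preference data from many tasters, but the shape of the loop is the same: fit a model of one palate, then search for the point in flavor space the model predicts will score highest.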
And this isn't about AI reducing our experiences to data; AI is being used to change how we experience the world. More Sleepwalkers after the break.

So, Kara, I would have been quite nervous to stand in the shoes of Jason and Ryan and the Gastrograph team, because I know that you're such a connoisseur of seltzer. How did they do? They did really well, actually, and I think it's important to mention that they weren't trying to create something they thought I would already like (like, I love cranberry, right?). They were trying to create something that I hadn't tasted before and also liked. So it was really difficult, and I think it also shows that there's a bit of a reversal in the way that we do things. Companies have always used market research to predict consumer preference, but it's often based on things like focus groups or survey research. What we have now is massive amounts of data being funneled through an algorithm to deliver the perfect product for a very specific type of person. Exactly: age, demographic, socioeconomic status, race. They can target it to all these particular categories, and I think this is cool and also a little unsettling. I think as humans, we like to be in control, you know. I like to think that my preferences are just that, my own preferences, and this sort of upends that notion. You know, using pre-existing data, I can kind of be read. They're reading me, and that makes me feel a little less special. I do think it's cool that companies are trying to service niche markets, and I think that's a trend I would definitely get on board with, as far as AI being used to make predictions. And the reason this Gastrograph piece is interesting is because it's a perfect demonstration that AI is not something which is going to happen in the future. It's here with us today. We can literally taste it.

AI is already in our lives. It's interpreting our data, analyzing our preferences, predicting our behaviors.
But we're just starting to respond to what that means culturally, and so there are a lot of new technologies and new issues that we're very excited to get our hands dirty with in season two. Absolutely, and one of the important issues we're going to explore is bias in technology. It's easy to think that algorithms are neutral, but the reality is that technology is built by someone, and that person's bias can be built into a system. A Princeton professor named Ruha Benjamin has introduced a concept directly related to this, called the New Jim Code, which asks us to consider the inequities encoded in algorithms. Well, it's the algorithms and also the data they learn from, right? I mean, AI harnesses the power of processing huge amounts of data about things that have happened in the past in order to predict the future, and so we have to be very careful about what that data contains, or we might not like the future it spits out. I think it's particularly relevant in the area of medicine. We see huge, huge promise in harnessing AI for better medical outcomes, but we also need to be very careful about how the data is being used, who has access to it, and how you can prevent your data from being used against you. Well, we have to think about the potential for data to harm, but also to provide comfort and to drive innovation, sometimes extraordinary, very unexpected innovation. There are two stories I can't wait to dive into in season two. One is about a doctor using AI to record and optimize conversational strategies with very sick patients: what should they say, how, and when? Another is about using natural language processing to enable immersive conversations with holograms of people from history, everyone from astronauts to Holocaust survivors. In other words, using technology to bring history into the present and ensuring we never forget our past. It's wild. So we're obviously looking forward to seeing you in the next season. We have a lot of amazing stories lined up.
We'd love to hear from you about stories that you want to hear or subjects that you want to hear about. So tweet us at Sleepwalkers Pod on Twitter, obviously, and on Instagram we are Sleepwalkers Podcast. Yeah, thank you so much. We love your feedback, and we're really looking forward to seeing you for season two very soon.

Sleepwalkers is a production of iHeartRadio and Unusual Productions. For the latest AI news, live interviews, and behind-the-scenes footage, find us on Instagram at Sleepwalkers Podcast or at sleepwalkerspodcast.com. Sleepwalkers is hosted by me, Oz Woloshyn, and co-hosted by me, Kara Price. We're produced by Julian Weller with help from Jacopo Penzo and Taylor Chcogin, mixing by Tristan McNeil and Julian Weller. Our story editor is Matthew Riddle. Sleepwalkers is executive produced by me, Oz Woloshyn, and Mangesh Hattikudur. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.