Sleepwalkers is a production of iHeartRadio and Unusual Productions.

So I'm here for a surprise poetry reading. It's about to start. The silence is hardly final. Somewhere in the street, I can see the trees begin to rise and fall for the light of the dark thing above me. The dream is like a shiny black hair, and the sun is like a dream. I stand up and watch the sun shine on a single day, and the sun has a chance to accomplish from the springs of my own delight.

Kind of haunting. Abstract, yes, but beautiful too. And crucially, when I read this, I felt as you just did. I hope that it is beautiful. I found it evocative of experiences that I've had in the past. I found it nostalgic, and I'm like, oh my god, I'm having real human being emotions. That was filmmaker Oscar Sharp, and that poem wasn't written by him or by anyone else. It was written by a computer, a machine poet. We're more and more worried about robots coming to take our jobs, and though perhaps few would regret pink slips for poets shambling around Brooklyn, somehow machines in the creative world are especially uncanny, even frightening, because poetry and music and humor are supposed to be the things that define our humanity, aren't they? In this episode, we look at how AI is being used in the creative arts, and in doing so, we understand a lot more about how this often intimidating technology actually works. I'm Oz Woloshyn, and welcome to Sleepwalkers.

Hi. Hi, Karah. So, did Oscar's poem spring your delight? I would have preferred it in your British accent, I think. Well, I can't blame you for that. It did remind me of something: my friend is in a band called the X Ambassadors, and they partnered with this producer called Alex da Kid, who was asked by IBM to make a song using Watson. And the song is not bad. The song is not bad; it sounds good. But basically, the way in which they used Watson was that they crunched twenty-six thousand songs from the top one hundred charts.
From over what time period? Like over a few years, I presume? I don't know. But the point is that they used them to discover patterns in the songs. What makes a top one hundred song, basically. That's right, and then to reproduce it. Right, which is interesting, because I think anybody could kind of tell you what makes a top one hundred song, right? They're called earworms. But when you think about a data set of twenty-six thousand, no human being can listen to that many songs and do anything productive after hearing them. So in this episode, we're going to look at AI and art in different kinds of fields to really understand how computers crunch data to crack open this creative code. But first I want to go back to the algorithm that wrote the poem we heard at the beginning of the episode, because that algorithm also wrote a film. It's the Steven Spielberg of the machine learning world, and the algorithm is called Benjamin. So we're going to meet Benjamin, and a few people not named Benjamin.

I was a speechwriter for John Kerry, Tim Geithner, and Barack Obama, not in that order, and I'm essentially a ghostwriter and a photographer who learned to code: one giant Frankenstein monster. That's Ross Goodwin, and that Frankenstein monster goes by Benjamin. It's the work of Ross Goodwin and Oscar Sharp, who you'll remember from that poetry reading, but neither of them actually named it Benjamin. Well, it named itself, or rather, there was a piece of paper that came out of it that said "my name is Benjamin" on it. I read that in response to a question that was put to it, and a room full of people went, oh, a program that names itself is rather uncanny. And Oscar had been chasing that uncanniness ever since he was at NYU for graduate school. Whenever I met anyone who could program, I would grab them by the lapels and yell in their face: can you, can you build something that can write like people talk, in some way?
And 73 00:04:19,600 --> 00:04:22,720 Speaker 1: one day in class Oscar notices there's this sneakerhead and 74 00:04:22,720 --> 00:04:25,000 Speaker 1: he's sitting on his laptop and his laptop is writing 75 00:04:25,160 --> 00:04:30,080 Speaker 1: without him touching it. And I'm like, oh, so we go, 76 00:04:30,320 --> 00:04:33,360 Speaker 1: So we go for coffee, and that gut coffee was. 77 00:04:33,400 --> 00:04:35,560 Speaker 1: It was a lengthy coffee. We're still having that coffee. 78 00:04:35,560 --> 00:04:39,719 Speaker 1: We're still having such a cold coffee. By now, you 79 00:04:39,800 --> 00:04:41,960 Speaker 1: might not believe it if it happened in a film, 80 00:04:42,080 --> 00:04:44,800 Speaker 1: but Oscar had stumbled on exactly the person he was 81 00:04:44,839 --> 00:04:47,680 Speaker 1: looking for. Oscar came to me and he said, I 82 00:04:47,720 --> 00:04:50,239 Speaker 1: want to make a movie from a computer generated screenplay. 83 00:04:50,360 --> 00:04:52,120 Speaker 1: And I said, you know, of course that sounds amazing. 84 00:04:52,200 --> 00:04:54,280 Speaker 1: Let's do it. But let's figure out how we're going 85 00:04:54,279 --> 00:04:57,480 Speaker 1: to generate the screenplay, because that's a nuanced process with 86 00:04:57,520 --> 00:04:59,679 Speaker 1: lots of stabs, and we need to consider like every 87 00:04:59,680 --> 00:05:03,760 Speaker 1: part it. So Oscar volunteered himself to teach me all 88 00:05:03,880 --> 00:05:07,800 Speaker 1: the things about storytelling and narrative and filmmaking. He turned 89 00:05:07,800 --> 00:05:11,360 Speaker 1: me onto like Vladimir prop Joseph campbell Um, all these 90 00:05:11,400 --> 00:05:16,000 Speaker 1: theories of storytelling, and so they begin to experiment. I 91 00:05:16,040 --> 00:05:19,400 Speaker 1: tried a bunch of prototypes that used like various structures 92 00:05:19,440 --> 00:05:23,159 Speaker 1: that had been postulated by these theorists over time, and 93 00:05:23,360 --> 00:05:27,680 Speaker 1: the output was not interesting. Despite following the rules laid 94 00:05:27,680 --> 00:05:31,680 Speaker 1: out by narrative theorists, Ross couldn't get anything good just 95 00:05:31,760 --> 00:05:35,599 Speaker 1: telling his programs what a story should contain. So a 96 00:05:35,680 --> 00:05:39,000 Speaker 1: year passes, Oscar moves to l A and when I 97 00:05:39,040 --> 00:05:42,320 Speaker 1: get this email from Ross and it's the results in 98 00:05:42,360 --> 00:05:43,960 Speaker 1: one of those experiments that he wants me to read, 99 00:05:44,200 --> 00:05:47,240 Speaker 1: and read it he did. Rossity mailed the poem from 100 00:05:47,240 --> 00:05:50,480 Speaker 1: the beginning of the episode, the room is blown away 101 00:05:50,480 --> 00:05:53,240 Speaker 1: from the door and the stones are beginning to shine. 102 00:05:53,760 --> 00:05:55,839 Speaker 1: I immediately was like, oh my god, I don't know 103 00:05:55,839 --> 00:05:57,400 Speaker 1: how he's doing this. But he said, I don't know 104 00:05:57,400 --> 00:06:00,120 Speaker 1: what technology you're using right now, but can we it 105 00:06:00,200 --> 00:06:03,680 Speaker 1: for screenplay? And so they did, and not just a screenplay, 106 00:06:03,720 --> 00:06:06,680 Speaker 1: they actually produced a short film called Sunspring and they 107 00:06:06,720 --> 00:06:09,839 Speaker 1: even got Thomas middle Ditch, the lead on Hbos Silaken Valley, 108 00:06:10,000 --> 00:06:15,599 Speaker 1: to star in it. 
The principle is completely constructed of the same time. It's all about you to be true. You didn't even watch the movie with the rest of the base. I don't know, I don't care. I know it's a consequence. Whatever you need to know about the presence of the story. I'm a little bit of a boy on the floor.

So what do you think, Karah? It kind of reminds me of when my parents used to take me to, like, a bad production of Macbeth or As You Like It. Traumatic. You're there, and you're seven or eight, and you want to understand what's going on, and so you kind of pay as close attention as you possibly can to what the actors are doing, because you have no idea what the dialogue means. Yeah. I mean, that to me is quite impressive, because a machine can create something which has enough of the elements in common with a film that we can talk about it as a real film. You can't say it's not a film. Absolutely, of course. What's different is it didn't take Benjamin very long at all to make it. Once you press the button? A fraction of a second, a couple of seconds per page, maybe. Maybe a couple of seconds total, actually a fraction of a second per page. That's right. After months of agonizing over centuries of storytelling theory, the final output only took a couple of seconds. So what was Ross's breakthrough? To understand, we turned to one of the most famous AI scientists in the world, Sebastian Thrun. Something magical happened recently. What the field discovered is called machine learning. With AI, computers can now find their own rules. They are called neural networks. They're comprised of hundreds of millions of little, very simple processing units, and those units are modeled after what neurons do in our physical brains. You just give them examples, very much like the way we raise children. We don't give our children rules for every contingency in life in the first eight years of education.
We let them learn. They experience the world, and lo and behold, they make their own rules. And we are now in the world where computers can do the same thing. And this means machine learning can be used in all kinds of different fields. Sebastian himself applied the technology at Google, where he led the initial development of their self-driving car. If you wanted to write a book, a rule book on what the car should do in every situation, that rule book is really complicated, and I can promise you, no matter how many years you spent writing it, it's not going to work. But when you give the machine the ability to learn its own rules, it is actually able to surpass how people can drive. We'll hear more from Sebastian later, but machine learning, ML, is the engine that drives almost all of the excitement about AI today, from identifying targets on the battlefield to understanding genetic diseases. And it's also what allowed Ross and Oscar to create a usable movie script. Rather than laying down storytelling rules, they simply showed Benjamin hundreds of examples, and the algorithm found patterns and learned for itself. More Sleepwalkers after the break.

You're like, oh, did we essentially teach this algorithm anything else about screenplays other than just putting in a bunch of screenplays? Right. And that's the way that machine learning works. What is happening in a deep learning algorithm of this kind is it's building an extraordinarily complicated mathematical formula by reading all of this stuff over and over again. Like the autocomplete on your phone, the neural net is actually sampling from a probability distribution of which letters, spaces, or punctuation come next. So the script for Sunspring was essentially the most mathematically probable sci-fi script, except Ross and Oscar did have one important lever of creative control. Of the parameters that you're probably wondering about, there's one called the temperature. It's the riskiness of those next-letter predictions.
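To make that concrete, here is a minimal Python sketch of temperature sampling in the spirit of what Ross describes. It is purely illustrative: the function name, the toy probability numbers, and the idea of a "model" handing us a ready-made distribution are all invented for the example, not Benjamin's actual code.

    import math
    import random

    def sample_next_char(char_probs, temperature=1.0):
        """Pick the next character from a model's probability distribution.

        temperature < 1.0: the likeliest character dominates (repetitive output);
        temperature > 1.0: the distribution flattens out (riskier, "drunker" output).
        """
        chars = list(char_probs)
        # Temperature-scale the log-probabilities; random.choices renormalizes the weights.
        weights = [math.exp(math.log(char_probs[c]) / temperature) for c in chars]
        return random.choices(chars, weights=weights, k=1)[0]

    # Toy next-character distribution from a hypothetical trained model.
    probs = {"e": 0.6, "a": 0.2, " ": 0.15, "z": 0.05}
    print(sample_next_char(probs, temperature=0.2))  # almost always "e"
    print(sample_next_char(probs, temperature=2.0))  # far more likely to surprise you

Run repeatedly, the low-temperature call keeps picking the safe choice, while the high-temperature call starts taking the second- and third-best guesses, which is exactly the dial Ross and Oscar are about to describe.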
What Ross is describing is almost like a dial for creativity. Turn it up to a really high temperature, and the neural net is going to be extra creative and start making up words, babbling. At a very high temperature, it's essentially drunk. Low temperature, it's going to be very repetitive and possibly even begin to plagiarize its source material. So it'll be very repetitive. It'll be like "the streets and the streets and the streets and the streets." It's essentially gone to work for network television. Yeah, exactly. So we wanted it to be sort of in the middle. In the middle is where we found the best output, and the most, I think, usable output. And Sunspring was born. So Benjamin, Ross, and Oscar write together now. They write poetry and movies, and sometimes what Benjamin spits out is good. Often they have to sift through it to find the best stuff. But he's prolific, and he never, ever suffers from writer's block.

So Karah was telling us earlier about Alex da Kid and using AI to make music, and that's something I want to understand a bit more about. So Julian went on a little bit of an expedition. Yes, I did. I've been seeing a lot of articles lately about AI and the arts, and I've been pretty curious about music specifically. We might take it for granted, but music is this primal, emotional thing that's been with us forever. It might even predate language. But now Warner Music Group has made history, in April 2019, as the first major label to sign an AI to a record deal. Yeah, they signed this bot called Endel, which makes ambient noises based on where you are and what the weather is and what time of day it is. When I think of this kind of music, I think of those Spotify playlists like Peaceful Piano and Blissed Out Dinner Party, which have become extremely popular. It's not the same thing as Beyoncé. No, definitely not. But Warner Music Group signed Endel to generate twenty albums of ambient music.
And now that we live in a world where AIs can get record deals, what does this mean for artists? What does this mean for even just music as we know it? Well, in my quest to find out, I visited this company called Amper. My name is Drew Silverstein. I am the co-founder and CEO of Amper Music. Amper is an AI music company that Drew says will enable anyone to create music. In fact, the only things you need to know are the genre of music you want to create, the mood you'd like to convey, and the length of your piece of music. That's all, and you can create a brand new, unique piece of music in a matter of seconds. So the big question is: should musicians worry about computers taking their jobs? Well, let's try it and see. So what do you want to do? Cinematic, documentary, folk? Cinematic. Cinematic. Minimal percussion or quirky percussion? It's rendering a song right now. And here we go, we've got something. We're deep in the forest of Nicaragua, there's a breed of jaguar, you might have heard of it, it's called the take-a-killer panther. Look, here's the thing. I don't know what the difference between that and music is. I really don't. Yeah. So you're wowed. Yeah. All right, so there's that. And this isn't the only AI music app out there. Another major player is called Magenta, and, big surprise, they're at Google. Magenta are using AI to create a ton of new tools, from a Piano Genie that makes it impossible to play bad notes, to something that can generate drum loops, or something that can even play piano duets with you. You can even translate raw audio to a piano score. Raw audio, like if I play just something raw on the piano? Raw audio. Like, oh, literal raw audio. Literal raw audio. And Magenta has also trained a neural network, just like Ross and Oscar, only instead of sci-fi scripts, they trained on over four hundred performances by skilled pianists.
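For a rough sense of what those performances look like as training data, here is a small Python sketch modeled loosely on MIDI-style note events. The field names, the two-note fragment, and the tokenization scheme are invented for illustration; this is not Magenta's actual data format or pipeline.

    from dataclasses import dataclass

    @dataclass
    class NoteEvent:
        pitch: int       # MIDI note number, e.g. 60 is middle C
        velocity: int    # how hard the key was struck (0-127)
        start: float     # seconds from the start of the performance
        duration: float  # how long the key was held, in seconds

    # A tiny two-note fragment, as a digital keyboard might capture it.
    performance = [
        NoteEvent(pitch=60, velocity=80, start=0.00, duration=0.45),
        NoteEvent(pitch=64, velocity=64, start=0.50, duration=0.40),
    ]

    def to_training_tokens(events):
        """Flatten note events into a token sequence a neural network can model,
        much as Benjamin models characters in a screenplay."""
        tokens = []
        for e in events:
            tokens += [f"NOTE_{e.pitch}", f"VEL_{e.velocity}", f"DUR_{e.duration:.2f}"]
        return tokens

    print(to_training_tokens(performance))

The point is simply that a performance, like a screenplay, can be turned into a sequence of symbols, and a sequence of symbols is something a neural network can learn patterns from.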
They fed it into the neural network. And let me play one of the piano excerpts first; this is a real piano player. All right. Nice, right? Yeah. Okay, ready for the AI? What? That's a computer? That was all a computer? I didn't ever play a human one; that's a computer that was trained by a human playing piano. And then how do you make a computer come up with that? Right. So even though it's not a screenplay, it's still data that you can feed a neural network to find patterns. And in this case, Magenta used a data set from the Yamaha e-Piano competition. So human pianists played on these digital keyboards, which recorded the nuances of their performance, like how long they hit notes, and it recorded all that information into a digital score that a computer could interpret. And we've actually had that technology for a while now; it's called MIDI. But training a neural network on the data is new. See, the thing that I come back to is that a computer doesn't know it's playing music. So much of watching a musical performance is knowing that this is coming from someone who is emoting. Right. Yeah, there's actually an emotional communication happening, right? That's right. I do think, though, the future is not rejecting this. It's better to imagine what Stravinsky would have done with this kind of technology, because Stravinsky is still a musical genius. Right. Yeah, definitely. It's cool to listen to those musical examples of machine learning, because you can really hear how the algorithm is reinterpreting existing material. Of course, listening to the output is one thing. Tasting it is quite another. The problem was that somebody had told me that they had made the recipe and that it was good, and what it was was a recipe called chocolate baked and serves. That's Janelle Shane. She's a research scientist and the author of a blog called AI Weirdness. She's talking about a recipe written by AI that she actually cooked and ate.
It starts out as a perfectly ordinary flourless chocolate brownie, all the way until the very last ingredient, which is a cup of horseradish. I knew I was in trouble when I opened the oven door and my eyes just started watering. It was, yeah, it was terrible. On her blog, Janelle experiments with putting AI to a range of tasks, from writing new pickup lines to naming Halloween costumes, and often her experiments with machine learning are pretty revealing about us. It plays into this thought experiment: what would an alien think of our world? It takes something that's very ordinary and mixes it up into this thing that sounds like the original, but the meaning has been completely changed. Chopped whipping cream may be an ingredient, and "fold water and roll it into cubes" or "spread the butter in the refrigerator" are other directions it came up with. Remember Ross and Oscar playing with the creativity setting for their scripts? Janelle plays with her bots' temperature too. So I can turn it up, and the neural net may choose its second-best or third-best guess as to what letter comes next. And if I turn the creativity all the way down, then everything may be something like "the the the," or recipes may be just, you know, one teaspoon of vanilla over and over and over again, because that's just a very likely ingredient. It's really interesting with the recipes to turn down the creativity and see what it comes up with as the most quintessential recipes. At the lowest setting, you may not get horseradish brownies, but you do get a clear picture of what we eat and who we are. I look at what kinds of recipe titles it comes up with. There are things like chocolate chicken chicken cake, and another one that's chocolate chocolate chocolate chocolate cake. And there was a lot of cheese in these recipes too, so it's kind of revealing about what sorts of things we cook with. So we like chocolate, cheese, and chicken, apparently.
But then I did the same experiment with recipes from Bon Appétit, and the most common ingredients it kept using were cilantro and pomegranate juice. So these algorithms essentially hold up a mirror to the data sets that we give them. They do, yeah. They reflect the data sets back to us in really weird ways, and they can absolutely pick up whatever bias there is in an input data set. And I think what we're discovering is just how prevalent that bias is, and how easy it is for neural networks to latch onto that bias and copy it as a handy tool for copying whatever the humans are doing. They say that the way to a person's heart is through their stomach, but Janelle didn't stop at chocolate baked and serves. She's also turned AI onto some more direct routes. I really liked the pickup lines. There are all these puns and all this wordplay that it didn't have any way to grab hold of and figure out how to use. But I think what it produced has this sort of charming surrealism, and it's kind of garbled, nonsensical. I think it's an improvement on every single one of the originals. My very favorite one is "you look like a thing and I love you." "You are so beautiful that you make me feel better to see you." Or "you must be a tringle, because you're the only thing here." "Are you a camera? Because I want to see the most beautiful than you." Yeah, I'll definitely lie with you. No one's ever used a real pickup line on me. Used one on you? No. Do I look like someone who would receive a pickup line? Well, here's one of them: "I don't know you." That's good; a lot of girls are into that. "Are you a candle? Because you're so hard of the looks with you." So, in effect, what the algorithm is doing is highlighting patterns in the data. I mean, they sound structurally like pickup lines, but the words themselves don't make any sense.
The machines are reflecting their creators and spitting back something which resembles pickup lines, and it makes us think a little bit more carefully about what a pickup line is. And while training Benjamin, Ross and Oscar found the same thing: as the algorithm learned patterns, it revealed bias present in our cinema. When you train an algorithm like Benjamin on millions and millions, in this case, of movie synopses from the internet, the synopses that come out have certain patterns in them. For example, they mention men far more often than they mention women. But you learn other things about it than that. You learn that the most common phrase in the output is "a young man in a small town." So what does a filmmaker like Oscar learn from this? I used to call this project the average movie project, and the reason I called it that is the theory was, for me, that if you could make the right kind of algorithm, the movie you would make, the theoretically perfect movie, would also be the most boring movie ever made. It would be, by definition, all of the things that were the most cliché, because that's what cliché means: the thing that you can rely on to work. And why do that? Because the thing I'm most interested in is doing the thing that we haven't done yet. I want to move the form forward. Seeing all of these biases and assumptions that are baked into our movies and our snacks doesn't mean we're doomed to repeat them. In fact, the awareness can be liberating. That's what's helped me, I think: seeing Benjamin's capacity to show me more directly what our patterns are, our habits, and then I can ask more easily how to move forward from that. We'll get there after the break.
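As a footnote to that point about bias: surfacing these patterns doesn't require anything exotic. Simple counting over the training set already reveals them. Here is a toy Python sketch; the three-synopsis mini-corpus and the word lists are made up for the example and are not Ross and Oscar's actual data or method.

    import re
    from collections import Counter

    # Invented mini-corpus standing in for millions of real synopses.
    synopses = [
        "A young man in a small town dreams of leaving for the big city.",
        "A man and his brother fight to save the family farm.",
        "A woman uncovers a conspiracy while a man takes the credit.",
    ]

    MALE_WORDS = {"man", "men", "he", "his", "him", "boy", "brother"}
    FEMALE_WORDS = {"woman", "women", "she", "her", "girl", "sister"}

    words = re.findall(r"[a-z']+", " ".join(synopses).lower())
    counts = Counter(words)

    male = sum(counts[w] for w in MALE_WORDS)
    female = sum(counts[w] for w in FEMALE_WORDS)
    print(f"male-coded mentions: {male}, female-coded mentions: {female}")

    # Frequent phrases are just as revealing as single words.
    trigrams = Counter(" ".join(t) for t in zip(words, words[1:], words[2:]))
    print(trigrams.most_common(3))

Even on this tiny made-up sample, the counts skew toward male-coded words, which is the mirror-to-the-data-set effect Janelle and Oscar are describing, just at toy scale.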
So we've heard about Janelle Shane using AI to reveal bias, and Ross and Oscar using it to help them think more creatively about filmmaking, as well as how it can be applied to music. And that's the great promise of AI. We may worry about it replacing jobs, but it can augment our lives in so many ways. At least that's how Sebastian Thrun sees it. I would say the term AI is a bit deceptive, because it sets up computers to be on equal power with people. I see it to be stronger where we are weak, and weaker where we are strong. It's not a technology that will replace us so much as one that really empowers us. But what might that empowerment look like, beyond bias detection and piano playing? Well, in 2017, Sebastian published a paper in Nature on using AI to diagnose skin cancer using just an iPhone. So in medicine, you can think of your iPhone that can find skin cancer as turning regular physicians, or anybody in the world, into an expert on day one, because now they have the superpower to be able to distinguish something that previously would have taken tens of years to learn. The same is true for the self-driving car. Now children can drive, and blind people can drive; we've had blind people drive around in self-driving cars. So for me, the real opportunity is to use AI to extract the knowledge from human experts who are well trained, and transpose this knowledge to the brains of people who are not so well trained. This is what I personally find so fascinating about AI, and a big reason I wanted to do Sleepwalkers. The same technology underlies Ross and Oscar's films, Janelle's recipes, and self-driving cars and cancer diagnostics and so much more. The training for skin cancer detection, or cancer detection in radiology, and the training for the self-driving car are amazingly similar. In both cases, what you do is you compile the data set, typically hundreds of thousands up to hundreds of millions of images. In skin cancer, we used biopsies.
We had a database of a hundred and twenty-nine thousand images that a lab had biopsied and provided. In the self-driving car, it could be as easy as having a human driver provide inputs with their steering wheel and the brake as to what the right thing is to do, and then the network mimics human behavior. It mimics the diagnostics of a physician, or it mimics the style of a driver. The underlying algorithms are amazingly similar. What makes this moment all the more interesting is that AI is in the process of being consumerized. Like Sebastian said, your iPhone can diagnose skin cancer. Self-driving cars are already on the roads. And as these tools become more and more accessible, society will start to change. These technologies become closer and closer to us. The fact that you carry a cell phone is a big deal. You might not see it this way, but what it does is put the computer seamlessly into your life. Your texting app, your SMS, is so close to you that you can now talk to people thousands of miles away on a button press or into a microphone. That makes you effectively superhuman, without the actual physical implant. But when it comes to AI, for now, leading-edge algorithms are off limits to those of us who can't code or who don't have the means to learn. Oscar wouldn't have been able to make Sunspring without Ross's technical expertise. But that's all starting to change. Julian, you spoke with somebody making AI more accessible. Yeah, while we were looking into AI and music, we came across Runway ML. They're a lab based in Bushwick, and they feel strongly about letting more people in to work with AI creatively. I spoke with Cristóbal Valenzuela, the co-founder of Runway, which is basically like the Adobe Creative Suite for AI. So think of a program that looks like Photoshop.
They're adding tons of different AI models to the Runway app, where instead of having to know how to code, you can just manipulate some sliders and dials and still have AI generate something. If we really think this is the game-changing technology that will impact us for, like, years to come, we need to have more people from different backgrounds and disciplines jumping into that discussion and proposing ways of looking at algorithms that those researchers and those scientists are not thinking of. This is gonna impact us, and it's going to change the way we see not just the world, but ourselves; not just ourselves, but how we think about our creativity. And the list of things that Runway can help you do is, frankly, crazy. We'll post some on our Instagram at Sleepwalkers Podcast, but just one example: you can take video of anyone and have their body copy the poses that you make in your webcam. So Cristóbal actually tweeted one of him controlling Stephen Colbert's body on The Late Show, just with his webcam. He moves his arms, Stephen moves his arms. It's nuts, right? And imagine that kind of technology in the hands of artists. Start thinking about these tools not as something that's gonna destroy our creativity or replace writers and artists or whatever. This is gonna be a typewriter, this is gonna be a paintbrush, and people will start building and using it to understand their own creativity in a new way. Of course, as always, it's up to us to make sure we use these new tools for good. If I build a shovel, okay, and you decide to go to the beach and dig in sand, you're biased: you're digging in sand, and guess what, your shovel will only dig up sand. The same is true for AI. If you give it a certain type of data set, I can promise you, whatever you get out reflects the data you put in. It's up to us, the people, to make responsible decisions.
And as we want to create equal opportunity and eradicate certain biases in society that exist today, it's up to us to do it. And I promise you, if we work hard on this, technologies will reflect that. But even Sebastian, one of Silicon Valley's great optimists, recognizes the risks. All technologies can harm people. In fact, technologies can be abused to harm people. Like my kitchen knife, which serves me a great purpose every time I have guests over and chop my produce: it can also be abused to harm people. In the next episode of Sleepwalkers, we dive deep into the ability of algorithms to cause harm. We travel from China and the social credit system to a parole board in New York, and we speak with people building technology they believe will make us safer. I'm Oz Woloshyn. See you next time.

Sleepwalkers is a production of iHeartRadio and Unusual Productions. There's so much we don't have time for in our episodes, but that we'd love to share with you. So for the latest AI news, live interviews, and behind-the-scenes footage, find us on Instagram at Sleepwalkers Podcast or at sleepwalkerspodcast.com. Special thanks on this episode to Paw Suris, who introduced us to Oscar, Ross, and Benjamin, and to artificial intelligence, which composed over half of the music in this episode. Could you tell which was which? Sleepwalkers is hosted by me, Oz Woloshyn, and co-hosted by me, Karah Preiss. We're produced by Julian Weller, with help from Jacopo Penzo and Taylor Chacog. Mixing by Tristan McNeil and Julian Weller. Our story editor is Matthew Riddle. Recording assistance this episode from Joanne DeLuna. Sleepwalkers is executive produced by me, Oz Woloshyn, and Mangesh Hattikudur. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.