Speaker 1: Get in touch with technology with TechStuff, from HowStuffWorks.com. Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm a senior writer with HowStuffWorks.com. I talk about all things tech, and today we're going to get a little musical and get a little help from our buddy Noel. Noel is the producer extraordinaire, the head of podcast production here at HowStuffWorks, and also one of the co-hosts of Stuff They Don't Want You to Know. Noel went to Moogfest and got the chance to talk to a whole bunch of really cool people, including Alexander Lerch, and we'll hear more about that a little bit later in this podcast. Moogfest ostensibly is about music and technology, but it actually involves lots of other stuff too, not just those two already broad fields but other ones as well, including elements of philosophy and even particle physics. We'll have an episode in the near future that will include some elements from interviews we had with folks from the Large Hadron Collider.
Speaker 1: So Moogfest has all sorts of really smart, talented people getting together and having these incredible symposia and performances. And so Noel was able to go and talk with someone about some really cool stuff, and that kind of ties into what I wanted to chat about today. You know, once upon a time here at HowStuffWorks we had a show called Stuff from the B-Side, and this was a podcast all about music. Episodes focused on everything musical, including elements that are more general concepts or philosophical ideas. And music and technology are two things that really are closely tied together. After all, almost every musical instrument is some form of technology, ranging from the relatively primitive versions of certain percussive instruments all the way up to high-tech digital rigs. So I thought it might be cool to revisit music and tech and look at a particular subset of it: musical analysis and music generation.
Speaker 1: Now, music analysis and technology are also related in that we now have various automated recommendation engines that will suggest music for us to listen to based upon what we've already said we enjoy. These engines look for new pieces of music that in some way match criteria we seem to find appealing. We have indicated to the service that we like a particular type of music, so it starts to try and find matches that follow along the same lines. As these engines become more adept at figuring out what qualities we really enjoy, they can home in on songs that appeal to us, perhaps even changing things up based upon other criteria, such as the time of day or an activity we're doing. So, for example, Google Music, and this show is not sponsored by Google Music or anything of that nature, but it will detect if I'm on my way somewhere, and it might suggest music that would be conducive to a trip. Or if it knows that I'm at the gym, it may suggest music that's good for keeping my heart rate up, stuff like that. So we'll just imagine a hypothetical situation.
Speaker 1: I've just woken up, and the recommendation engine might find some peppy music to get me on my way. So Google Music is saying, hey, it's Monday morning, you need all the help you can get, here's a radio station based off the song "Walking on Sunshine" by Katrina and the Waves. And then my phone detects that I'm going to the gym, so the music engine switches to songs meant to keep me moving at a particular pace while I desperately try to find the exit to the gym. I'm sorry, I mean, to actually work out. So in that case, it's probably, you know, something with a nice driving beat, a good tempo to it. These are basic things that music engines can do now, but the reason they can do them at all is because of music analysis. This isn't always done in an automated fashion. In fact, automating music analysis is pretty tricky. Sometimes it relies instead on just a lot of work, and that's work done by real, live human beings. So let's take the Music Genome Project, for example.
Speaker 1: This is the database that the internet radio service Pandora relies upon when it creates a radio station based off an artist or a song that you've submitted as the seed for a new channel. For more than ten years, Pandora's staff have analyzed and categorized music, breaking down songs into all their basic components, which they call genes. These are the elements that make songs what they are. And I find this approach both fascinating and a little odd, because in a way it seems a little weird to take a really awesome song, let's say it's Blue Öyster Cult's "Don't Fear the Reaper," one of the best songs ever written, and then have to sift it down to all those little basic components, those genes that make up the song. It also reinforces this notion that a song is more than just the sum of all its parts. If you were to look at those components and attempt to make a song that included all of them, I bet it wouldn't be half as awesome as "Don't Fear the Reaper."
Speaker 1: So you take a song and you identify all these different qualities of it, and that may involve things like the tempo of the song, the structure of it as far as verses and choruses are concerned, what kind of vocalists there are, what kind of instruments are used, all of these different individual, tiny components of the song, and you put them into, say, a spreadsheet, and that represents the collection of genes possessed by "Don't Fear the Reaper." You take that same collection, you give it to a musician and say, I want you to write me a song that has all of these components in it. Well, again, you're probably not going to get "Don't Fear the Reaper." You'll get something, and maybe it will be good. Maybe it'll even be better than "Don't Fear the Reaper." I doubt it. But yeah, there's something magical, or apparently magical, about music that transcends the quantitative elements we can list. Now, Pandora's Music Genome Project identifies four hundred fifty different musical attributes, or genes. They include lots of different types of data.
Speaker 1: Some of them are relatively straightforward, such as: does the song have a vocalist? If it does, is it a male vocalist or a female vocalist? Are there multiple vocalists? Then it starts getting way more granular. If a song has electric guitar, for example, there might be a subset of information about that, such as how much distortion is on that guitar. Does it have a lot of distortion in this song, or not a lot? And so you start to subdivide down the line. The same thing is true for other instruments as well. Now, not all songs have the same number of genes, meaning some genres of music are actually easier to describe with fewer terms than others. For example, rock songs have about one hundred fifty genes; you can break down your rock song into about a hundred fifty different little individual components. Rap songs are more like three hundred fifty. So that indicates there are gradations and variations between different songs within the same genre. So, to make a recommendation engine, you first have to put all the music within the library.
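To picture what that breakdown looks like as data, here is a minimal sketch in Python. The attribute names and values are invented for illustration; the Music Genome Project's actual gene list is proprietary.

```python
# Toy model of a song "genome": a plain mapping from gene (attribute)
# names to values. All names and values here are invented examples;
# the real Music Genome Project attribute list is proprietary.

def make_genome(**genes):
    """Build a genome as an attribute -> value mapping."""
    return dict(genes)

reaper = make_genome(
    has_vocalist=True,
    vocalist_gender="male",
    electric_guitar=True,
    guitar_distortion=0.3,       # hypothetical scale: 0.0 clean .. 1.0 heavy
    verse_chorus_structure=True,
    tempo_bpm=141,               # rough figure, for illustration only
)

# Genomes are uneven across genres: a rock song might carry ~150 genes,
# a rap song ~350, so different songs record different attribute sets.
print(len(reaper))
```

A real catalog would hold one such record per song, which is exactly the spreadsheet-style layout described above.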
Speaker 1: Through this process, you need to identify the important qualities that make the music what it is. You could use something like a spreadsheet and lay it all out, and then, when someone wants to make a new radio station off of a song, you can use that song's genome, all the genes listed for that specific song, to guide a decision engine to pick other songs that are similar to the first one within a certain degree. So you could set this dynamically in your search engine, right? Let's say that you are the one designing the new, latest and greatest version of Pandora, and you've got this enormous database of music that's all been analyzed by professionals. We're talking about actual musicians and musicologists who have listened to the music, broken it down into its basic elements, and identified all of them. And someone has joined your service, and they say, I'm going to make a radio station based off the song "The Statue Got Me High" by They Might Be Giants.
Speaker 1: You would end up accessing the database, pulling the record for "The Statue Got Me High," looking at all the genes associated with it, and then you would look for a certain percentage of similarity with other songs. Are there other songs that have the same genes as this song does? If so, serve one up and see if the person likes it. You might set the threshold higher or lower. If it's a song that's particularly avant-garde, there may not be a lot of other songs that strongly resemble your original, so you have to kind of play fast and loose with this. Now, an important component of this kind of service is user feedback. Services like Pandora nearly always include a method for users to indicate whether they like or don't like a particular song. The recommendation engine uses that data to fine-tune its selections.
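That matching step can be sketched as a small program. This is a toy model, unweighted gene overlap with a tunable threshold, not Pandora's actual algorithm; the song names and gene pairs are invented.

```python
# Sketch: pick candidate songs whose genomes overlap the seed's genome
# above a threshold. Jaccard overlap on gene sets is a simplification;
# a real engine would weight genes and use richer distance measures.

def similarity(genes_a, genes_b):
    """Fraction of shared genes (Jaccard index) between two songs."""
    if not genes_a and not genes_b:
        return 1.0
    return len(genes_a & genes_b) / len(genes_a | genes_b)

def recommend(seed, library, threshold=0.5):
    """Return songs whose gene overlap with the seed clears the threshold."""
    return [title for title, genes in library.items()
            if title != seed and similarity(library[seed], genes) >= threshold]

library = {
    "seed_song":  {("vocals", "male"), ("guitar", "distorted"), ("tempo", "fast")},
    "candidate1": {("vocals", "male"), ("guitar", "distorted"), ("tempo", "slow")},
    "candidate2": {("vocals", "none"), ("strings", "lush"), ("tempo", "slow")},
}

print(recommend("seed_song", library))  # only candidate1 clears 0.5
```

For an avant-garde seed with few close neighbors, you would lower `threshold`, which is the "play fast and loose" knob mentioned above.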
Speaker 1: No two songs are going to be exactly alike, so it may be that the ways the new song deviated from your seed song's format were the parts that made you detest it. So it could have been that the engine said, well, this song resembles the seed song, the original tune, so let's serve it up. And you listen to it for like three seconds and say, no, this is not what I want, and you give it a thumbs down. The algorithm might say, all right, I'm going to keep note of where it was the same as and where it was different from that original song. Meanwhile, I'll serve up this next song that has similarity. And if you say, yeah, that's a good song, I really like it, and you give it a thumbs up, then the recommendation engine starts looking at the differences between the song you said no to and the song you said yes to, and it starts to identify stuff that you might not even be aware you don't like. It might be certain elements of songs, and the recommendation engine has figured it out.
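That back-and-forth can be pictured with a toy model (an invented illustration, not Pandora's actual update rule): keep a running score per gene, raise it for every gene in a thumbs-up song, and lower it for every gene in a thumbs-down song.

```python
from collections import defaultdict

# Toy feedback loop: each gene carries a running score; a thumbs-up raises
# the score of every gene in that song, a thumbs-down lowers it. Gene names
# here are invented, and the update rule is a deliberate simplification.

class FeedbackModel:
    def __init__(self, step=1.0):
        self.step = step
        self.gene_score = defaultdict(float)

    def rate(self, genes, thumbs_up):
        delta = self.step if thumbs_up else -self.step
        for gene in genes:
            self.gene_score[gene] += delta

    def score(self, genes):
        """Higher means the listener is more likely to enjoy the song."""
        return sum(self.gene_score[g] for g in genes)

model = FeedbackModel()
model.rate({"clarinet", "fast_tempo"}, thumbs_up=False)  # rejected song
model.rate({"guitar", "fast_tempo"}, thumbs_up=True)     # liked song

# "clarinet" ends up negative even though the listener never named it:
print(model.score({"clarinet", "slow_tempo"}))  # -1.0
```

Notice that `fast_tempo` cancels out to zero: it appeared in both a liked and a disliked song, so the model blames the gene the two ratings disagree on.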
Speaker 1: Maybe it's figured out, oh, Jonathan really doesn't like it when there's a clarinet in the song for no reason, but he isn't able to vocalize that. He's not aware of it consciously, but every time it pops up he's saying no to that song, so we're going to put the kibosh on the clarinet from here on out. That was just a random example; I don't have a hatred of the clarinet. But it is a way for the engine to work with the user in order to get a better understanding of the type of songs it should serve up to you. Now, there are plenty of other ways to analyze and describe music besides this genetic approach. There are entire courses dedicated to this. Musicology is a rich and interesting field, and some of these approaches go beyond the components that are directly perceptible. These analytic methods try to capture the essence of the feel of music. For example, if you take a bunch of components individually, you might quantitatively describe the music with accuracy, but you can't capture how they collectively create a particular effect.
Speaker 1: Perceptual analysis attempts to bring human perception and emotional reaction into account with everything else. But why is the Music Genome Project powered by humans? Why is Pandora using actual human beings to listen to music and then write out all these genes? Couldn't you find some easier way? Well, listening to music and being able to describe its structure beyond some relatively simple angles is a particularly tricky computational problem. It's something that's easy for humans and hard for machines. In 2005, Wei Chai of MIT wrote a paper titled "Automated Analysis of Musical Structure," in which she laid out the challenges of creating an automatic approach to analyzing music. Her paper is ninety-six pages long, and that kind of gives you an idea of how complicated a problem we're talking about here. Chai's team relied on music cognition, machine learning, and signal processing to segment and analyze pieces of music, with the goal of isolating and analyzing the recurrent structures of a piece.
Speaker 1: You know, the whole verse, chorus, verse (all my fellow Pixies fans out there), the chord progressions or key changes that are present in music, identifying parts of a piece that make it representative of the whole. In other words, finding that hook, or finding that element of a song that makes it stand out. Chai's team had to figure out how to make a machine do stuff that we tend to do naturally, even without the benefit of formal musical training. So, for example, I have never taken any class beyond music appreciation, which is about as 101 as you get, and yet I am able to vocalize certain things about music easily. I can recognize these differences, things that a computer cannot natively do; it requires a whole lot of work. The whole paper is available to read online. It's really interesting, and I recommend checking it out. There's a PDF you can just download for free and read over, and it's fascinating. It delves into not just the programming challenge of creating this analysis software, but also the peculiarities of music itself.
Speaker 1: For example, what makes one piece of music more memorable than another piece? What role does repetition play when it comes to making a masterpiece? What's the relationship between music, which, when you get down to it, really is just math in motion, and human perception? I could do an entire episode on Chai's work, what her team developed, and how they set out to design this automated system to analyze music, but that's going to have to wait for a later episode. For now, it's just important to understand that music is something we're able to experience on a level that machines just cannot. Now, when we come back from the break, we're going to listen in on an interview that Noel Brown had with Alexander Lerch and learn more about musical analysis and music generation. But first let's take a quick break to thank our sponsor.
Speaker 1: Now, like I said at the top of the show, earlier this year producer extraordinaire Noel Brown took a trip to Moogfest, which is a conference about music and technology and science and lots of other awesome stuff, and he got to speak with a music analysis expert, Alexander Lerch. What follows is their conversation.

Noel Brown: So, as a bit of a layman, I interpret a lot of what you do as being in the field of, like, generative music. Is that kind of along the right lines?

Alexander Lerch: So, I would say my work may kind of lead to generative music, but what I'm actually currently focusing on is more analyzing music, so figuring out what's going on in the music. It might start with: you just have an audio signal and you want to know, okay, what is the tempo, what is the key, what is the hook line, what is the bass doing, what is the mood of this piece of music? And that is where we try to apply artificial intelligence and signal processing methods to get this information, to extract this information from the signal.
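Tempo is the simplest of the quantities Lerch lists. As a toy sketch (assuming note onset times have already been detected, which in reality is the hard signal-processing part), tempo can be estimated from the intervals between onsets:

```python
# Naive tempo estimate from detected onset times, in seconds. Real systems
# start from the raw audio, using onset detection and autocorrelation; here
# the onsets are simply given, which is an invented simplification.

def estimate_tempo_bpm(onsets):
    """Median inter-onset interval converted to beats per minute."""
    intervals = sorted(b - a for a, b in zip(onsets, onsets[1:]))
    median = intervals[len(intervals) // 2]
    return 60.0 / median

# Onsets half a second apart correspond to 120 beats per minute.
print(estimate_tempo_bpm([0.0, 0.5, 1.0, 1.5, 2.0]))  # 120.0
```

Using the median rather than the mean makes the estimate robust to a few off-beat ornaments, one small example of why even "simple" attributes need care.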
Noel Brown: So is that something like what the hit factories in Sweden would be all about? You know what they're all about? It seems that they take a very analytical approach to writing pop songs, where they've got people that are experts in hooks, they have people that are experts in verses, and they have all these kind of human algorithms for how long everything needs to play in order to elicit the proper response. Is it sort of along those lines as well?

Alexander Lerch: Yes. So you want to find out what kind of makes the songs successful, and this might have really many, many different factors impacting it, right? There's the structure, of course, but there are so many other dimensions here that it's really hard to nail it down. So, using the computer to analyze this, we try to find out more about what's going on, and maybe identify these little things that might make something popular or might give you goosebumps.

Noel Brown: Or something that maybe one wouldn't expect might accomplish something like that, or just, like, an element that maybe isn't so obvious to the average listener?

Alexander Lerch: Okay, let me think. It's hard to come up with a very good example that would be surprising to everybody. But it's definitely the combination of tiny things, like maybe intonation that is somehow a little bit off. Or timing is a very obvious thing: whether something grooves or not. It might have the same rhythm, but it might really impact you on a completely different level, right? So these are examples that are maybe not surprising, but they still point in the right direction.

Noel Brown: Yeah, is it maybe an element of human interaction? Like, if things are too quantized, it's maybe less emotional, whereas when people enter the notes by hand they're a little bit imperfect. Or, for example, the singer Adele: there was an article about how she sort of slides into her notes, and that gives you goosebumps because it's got this human quality where you sense that raw human emotion. In the same way, maybe someone who does electronic music makes mistakes and leaves them in, and that's what kind of makes it more approachable.

Alexander Lerch: Absolutely. I mean, one thing you have to keep in mind is that it's all genre and artist dependent as well, right? There will definitely never be a formula along the lines of: if you want goosebumps, just do this and it works. You can always analyze in retrospect, okay, this artist has this specific thing that he or she does, and that makes things so fascinating, or that makes you hooked on it. But that might not work for a different genre or for a new song, right? Especially because it's also about expectation and what you already know.
So I can maybe let 322 00:19:22,560 --> 00:19:26,040 Speaker 1: a computer compose something in Mozart style, right, and it 323 00:19:26,160 --> 00:19:29,119 Speaker 1: might be a really good Mozart piece, but that doesn't 324 00:19:29,119 --> 00:19:34,280 Speaker 1: mean it really gets you as a listener, because you 325 00:19:34,320 --> 00:19:37,240 Speaker 1: have heard so many Mozart pieces and the original will 326 00:19:37,400 --> 00:19:41,200 Speaker 1: still be better. It's always an imitation, right? So 327 00:19:41,200 --> 00:19:44,399 Speaker 1: it might actually miss something there, right? Even 328 00:19:44,440 --> 00:19:48,920 Speaker 1: if the composition itself is very much like Mozart did it. Well, 329 00:19:49,320 --> 00:19:52,160 Speaker 1: so is the end product of your research to make 330 00:19:52,200 --> 00:19:55,360 Speaker 1: computers better at doing this, or are you just interested 331 00:19:55,359 --> 00:19:58,679 Speaker 1: in kind of, you know, breaking down pieces of music 332 00:19:58,720 --> 00:20:01,760 Speaker 1: into their base elements? So at the moment, I'm 333 00:20:01,800 --> 00:20:04,480 Speaker 1: doing exactly that, I'm breaking it down. I want 334 00:20:04,520 --> 00:20:08,200 Speaker 1: to be able to let a computer transcribe what's going 335 00:20:08,240 --> 00:20:11,199 Speaker 1: on in the music. I want to understand, maybe on 336 00:20:11,200 --> 00:20:14,679 Speaker 1: a perceptual level, what impact the parameters that you 337 00:20:14,680 --> 00:20:18,840 Speaker 1: can objectively extract from the audio signal might 338 00:20:19,080 --> 00:20:22,080 Speaker 1: have on the listener. Right? So how 339 00:20:22,119 --> 00:20:28,359 Speaker 1: does the listener react to certain specific characteristics of 340 00:20:28,560 --> 00:20:34,680 Speaker 1: the music?
But this knowledge can then also most 341 00:20:34,720 --> 00:20:39,960 Speaker 1: definitely be used to actually generate new music, following 342 00:20:40,000 --> 00:20:44,320 Speaker 1: specific rules that you have extracted from the music, and 343 00:20:44,359 --> 00:20:47,640 Speaker 1: then create something new. And this is what my colleague 344 00:20:48,160 --> 00:20:52,720 Speaker 1: Gil Weinberg works a lot on with his robots that 345 00:20:52,760 --> 00:20:55,680 Speaker 1: make music. Okay, tell me more about that; that's 346 00:20:55,720 --> 00:20:57,960 Speaker 1: what I was interested in, right? Yeah. 347 00:20:58,000 --> 00:21:00,600 Speaker 1: So he has a robot called Shimon. 348 00:21:01,240 --> 00:21:08,320 Speaker 1: She's a marimba-playing robot. Also, 349 00:21:08,480 --> 00:21:11,400 Speaker 1: my colleague is a lot into jazz, so Shimon 350 00:21:11,520 --> 00:21:14,280 Speaker 1: plays also a lot of jazz. So there's a 351 00:21:14,320 --> 00:21:18,760 Speaker 1: lot of interaction on the stage with the live musicians, 352 00:21:18,880 --> 00:21:22,800 Speaker 1: and question-and-answer games between what Shimon plays 353 00:21:22,840 --> 00:21:26,520 Speaker 1: on the marimba and what the musician then plays, and 354 00:21:26,800 --> 00:21:31,159 Speaker 1: so it's constantly analyzed what's being played, 355 00:21:31,200 --> 00:21:35,920 Speaker 1: and then the robot improvises or tries to give 356 00:21:36,000 --> 00:21:38,600 Speaker 1: some answers to that. Jazz, I mean, you have to listen, 357 00:21:38,640 --> 00:21:40,439 Speaker 1: you have to be able to follow the leads that 358 00:21:40,520 --> 00:21:43,600 Speaker 1: your, you know, fellow musicians are putting out there, otherwise 359 00:21:43,640 --> 00:21:46,680 Speaker 1: you're not any good. Exactly.
This whole interaction thing 360 00:21:46,680 --> 00:21:49,120 Speaker 1: is part of the research, obviously, and it's 361 00:21:49,119 --> 00:21:52,639 Speaker 1: not only the music, right, it's 362 00:21:52,640 --> 00:21:55,400 Speaker 1: also eye contact and so on. So that's why 363 00:21:55,440 --> 00:21:58,680 Speaker 1: this robot, even if it doesn't make any sound, has 364 00:21:58,720 --> 00:22:03,520 Speaker 1: actually a head, where she can look at specific musicians 365 00:22:04,080 --> 00:22:07,480 Speaker 1: and nod her head and so on. So, you see, 366 00:22:07,720 --> 00:22:10,400 Speaker 1: you kind of can interact with the robot. So 367 00:22:10,400 --> 00:22:14,080 Speaker 1: this human-robot interaction is part of the research as well. Fascinating. 368 00:22:14,520 --> 00:22:18,000 Speaker 1: Can you describe the difference between an algorithm that 369 00:22:18,520 --> 00:22:20,960 Speaker 1: does what you're talking about and analyzes music and one 370 00:22:21,040 --> 00:22:23,840 Speaker 1: that might create generative music? It seems like there's sort 371 00:22:23,840 --> 00:22:25,920 Speaker 1: of a crossover between the two, and I 372 00:22:26,000 --> 00:22:27,840 Speaker 1: was hoping you could kind of spell that 373 00:22:27,840 --> 00:22:31,800 Speaker 1: out a little bit for us. So, in essence, 374 00:22:32,040 --> 00:22:36,240 Speaker 1: the information you gain from the 375 00:22:36,280 --> 00:22:39,359 Speaker 1: algorithm that analyzes music has to feed the 376 00:22:39,440 --> 00:22:43,720 Speaker 1: generative algorithm. So, for example, you cannot compose something in 377 00:22:43,760 --> 00:22:46,919 Speaker 1: classical style if you don't know classical style, right? So 378 00:22:47,000 --> 00:22:49,040 Speaker 1: you have to learn it from data.
That is the 379 00:22:49,040 --> 00:22:55,040 Speaker 1: analysis part, and then you try to infer models from that. Right? 380 00:22:55,080 --> 00:22:58,840 Speaker 1: So you have all this data: 381 00:22:58,880 --> 00:23:01,960 Speaker 1: you have structural data, you have voice leading, you have 382 00:23:02,440 --> 00:23:05,680 Speaker 1: maybe intonation if it's about performance. And then you try 383 00:23:05,720 --> 00:23:09,880 Speaker 1: to fit this data into rules, and these rules then 384 00:23:10,280 --> 00:23:15,600 Speaker 1: would generate music, for example jazz improvisation or something like that. 385 00:23:16,280 --> 00:23:18,960 Speaker 1: So Brian Eno has been kind of delving 386 00:23:18,960 --> 00:23:22,240 Speaker 1: into generative music lately, and it's actually really interesting. There's 387 00:23:22,240 --> 00:23:24,760 Speaker 1: a BBC documentary of him kind of showing his methods, 388 00:23:24,760 --> 00:23:27,359 Speaker 1: and he's just using Logic, and he has these little 389 00:23:27,440 --> 00:23:29,000 Speaker 1: kind of nodes, I guess you could call them, 390 00:23:29,040 --> 00:23:32,480 Speaker 1: scripts or whatever, that can set rules for like a 391 00:23:32,560 --> 00:23:34,800 Speaker 1: drum part or something like that, where it will say, 392 00:23:34,840 --> 00:23:38,080 Speaker 1: subdivide every other whatever, like any number of things that 393 00:23:38,119 --> 00:23:42,359 Speaker 1: you could input like that. I guess, are we 394 00:23:42,440 --> 00:23:44,679 Speaker 1: at a place where that's still just kind of a 395 00:23:44,720 --> 00:23:47,600 Speaker 1: gimmick, or are we really trying to recreate 396 00:23:48,680 --> 00:23:51,480 Speaker 1: a human mind creating music, or is it just kind 397 00:23:51,480 --> 00:23:54,520 Speaker 1: of a different animal altogether, you know what I mean?
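The analysis-feeds-generation loop described above, learn models from data and then let the extracted rules produce new material, can be sketched with a toy first-order Markov model. The corpus, note names, and function names here are illustrative assumptions, not anything from the interview.

```python
import random

def learn_transitions(notes):
    """Analysis step: record which note follows which in the
    training data (a first-order Markov model of the 'style')."""
    model = {}
    for a, b in zip(notes, notes[1:]):
        model.setdefault(a, []).append(b)
    return model

def generate(model, start, length, seed=0):
    """Generation step: walk the learned transitions to emit a
    new sequence shaped by the training data."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = model.get(out[-1])
        if not choices:        # dead end: restart from the opening note
            choices = [start]
        out.append(rng.choice(choices))
    return out

# Tiny hypothetical corpus: the opening notes of "Frere Jacques".
corpus = ["C", "D", "E", "C", "C", "D", "E", "C", "E", "F", "G"]
model = learn_transitions(corpus)
melody = generate(model, "C", 8)
print(melody)
```

Swap in a larger corpus and a higher-order model and the same two-step shape, analyze then generate, still applies.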
Like, 398 00:23:54,600 --> 00:23:57,119 Speaker 1: I'm wondering, are we really trying to have AI that 399 00:23:57,200 --> 00:24:01,320 Speaker 1: can compose Mozart, or that can replace a producer or 400 00:24:01,320 --> 00:24:03,720 Speaker 1: replace a songwriter, or is it just sort of like 401 00:24:03,800 --> 00:24:07,480 Speaker 1: its own thing that's fascinating in and of itself? So 402 00:24:07,560 --> 00:24:12,359 Speaker 1: I don't think that the goal here is to replace musicians, 403 00:24:12,520 --> 00:24:16,480 Speaker 1: but I think, from a research perspective, 404 00:24:16,520 --> 00:24:20,800 Speaker 1: giving a machine creativity is a really fascinating topic, right? 405 00:24:20,840 --> 00:24:27,120 Speaker 1: So is it possible, if you just have something algorithm driven, 406 00:24:27,160 --> 00:24:30,880 Speaker 1: that it actually creates something new that it hasn't seen before? Right? 407 00:24:31,440 --> 00:24:37,919 Speaker 1: So I wouldn't be worried about being replaced, 408 00:24:38,440 --> 00:24:41,719 Speaker 1: although I mean I could see, in the future, 409 00:24:41,800 --> 00:24:47,399 Speaker 1: for example, generating elevator music, right? That I 410 00:24:47,440 --> 00:24:51,600 Speaker 1: can easily see being automatically generated in the future. 411 00:24:51,640 --> 00:24:55,879 Speaker 1: And there, yes, the AI would actually 412 00:24:55,880 --> 00:24:59,600 Speaker 1: replace the human composer in that area. But 413 00:25:00,040 --> 00:25:04,960 Speaker 1: I don't think that... I think the 414 00:25:05,160 --> 00:25:09,520 Speaker 1: phenomenon of creativity is still not completely understood, 415 00:25:09,560 --> 00:25:13,240 Speaker 1: and with current technologies, I think it's really 416 00:25:13,280 --> 00:25:16,200 Speaker 1: hard to get there.
I mean, we do use some 417 00:25:16,520 --> 00:25:20,040 Speaker 1: randomization and so on, so it generates something that 418 00:25:20,080 --> 00:25:24,639 Speaker 1: you haven't heard before, but, well, it's random, right? So 419 00:25:24,720 --> 00:25:29,239 Speaker 1: it's not necessarily an act of creativity here. So 420 00:25:29,359 --> 00:25:31,520 Speaker 1: we're trying to get there, but I think it's still 421 00:25:31,520 --> 00:25:34,600 Speaker 1: a long way to create something that is 422 00:25:34,640 --> 00:25:37,920 Speaker 1: really creative. Creativity seems to be 423 00:25:38,000 --> 00:25:40,520 Speaker 1: sort of subjective in and of itself. It's like, does 424 00:25:40,640 --> 00:25:43,760 Speaker 1: creativity mean that it was created by a human? You know, 425 00:25:43,840 --> 00:25:47,199 Speaker 1: like, is that exclusively what creativity is? And if we 426 00:25:47,280 --> 00:25:50,760 Speaker 1: have something that is somewhat sentient, can it be creative? 427 00:25:51,119 --> 00:25:54,840 Speaker 1: You know? I would say that the definition of creativity 428 00:25:54,920 --> 00:26:00,720 Speaker 1: is mostly subject based. So there's no godlike instance who says, okay, 429 00:26:00,720 --> 00:26:03,000 Speaker 1: this is creative and this is not creative. What 430 00:26:03,000 --> 00:26:10,080 Speaker 1: it depends on is what the listener thinks of this, right? 431 00:26:11,400 --> 00:26:14,720 Speaker 1: Which in a way makes it really difficult 432 00:26:14,760 --> 00:26:18,800 Speaker 1: to do research, because there's no clear definition of 433 00:26:18,800 --> 00:26:22,080 Speaker 1: what we're measuring. It's all subject driven.
434 00:26:22,400 --> 00:26:25,440 Speaker 1: It's really hard to say, okay, this is something where 435 00:26:25,440 --> 00:26:27,359 Speaker 1: it's going in the right direction and this is not 436 00:26:27,400 --> 00:26:31,040 Speaker 1: so much. Yeah. But I mean, the 437 00:26:31,119 --> 00:26:35,399 Speaker 1: problem is, with machine learning and artificial intelligence algorithms, 438 00:26:35,720 --> 00:26:39,720 Speaker 1: they all learn from data, and they 439 00:26:39,960 --> 00:26:43,280 Speaker 1: essentially always try to reproduce something that they learned from 440 00:26:43,320 --> 00:26:47,680 Speaker 1: the data, right? While real creativity is always thinking out 441 00:26:47,720 --> 00:26:51,560 Speaker 1: of the box. You want it to be unexpected. Like, 442 00:26:51,800 --> 00:26:54,879 Speaker 1: Eno uses these algorithms because he wants to surprise himself, but 443 00:26:54,920 --> 00:26:57,920 Speaker 1: he likes to set certain conditions that are appealing to him. 444 00:26:58,240 --> 00:27:00,240 Speaker 1: It's sort of like being the prime mover in the 445 00:27:00,280 --> 00:27:02,840 Speaker 1: situation and then just sort of letting the pieces fall 446 00:27:02,880 --> 00:27:04,399 Speaker 1: where they may at the end of the day. But 447 00:27:04,480 --> 00:27:07,200 Speaker 1: you are sort of still putting yourself into the equation, 448 00:27:07,440 --> 00:27:10,679 Speaker 1: and then you are hoping for unexpected results to surprise yourself. 449 00:27:11,040 --> 00:27:14,560 Speaker 1: And this is definitely one very good way of dealing 450 00:27:14,640 --> 00:27:17,440 Speaker 1: with that, right? Because you have some kind of 451 00:27:17,680 --> 00:27:21,280 Speaker 1: random component there, you don't trust everything that is 452 00:27:21,280 --> 00:27:24,920 Speaker 1: being output.
Right? But something might be good. 453 00:27:25,040 --> 00:27:27,879 Speaker 1: So you generate a lot of variations of what 454 00:27:28,000 --> 00:27:30,760 Speaker 1: you might want to achieve, and then you just pick 455 00:27:30,840 --> 00:27:34,320 Speaker 1: something that really works, and then you use 456 00:27:34,400 --> 00:27:36,399 Speaker 1: this as a starting point from where you want to 457 00:27:36,440 --> 00:27:39,520 Speaker 1: go. I mentioned elevator music, 458 00:27:39,560 --> 00:27:41,119 Speaker 1: and I get that for sure, but aren't they already 459 00:27:41,160 --> 00:27:43,320 Speaker 1: using generative music in video games, where they have to 460 00:27:43,400 --> 00:27:47,040 Speaker 1: have music constantly playing? And obviously it would take ages 461 00:27:47,119 --> 00:27:50,080 Speaker 1: for a single person to compose, you know, hundreds of 462 00:27:50,080 --> 00:27:52,439 Speaker 1: hours of music. And I know there are cues in 463 00:27:52,600 --> 00:27:55,000 Speaker 1: games that are composed, but then there are parts where 464 00:27:55,000 --> 00:27:58,000 Speaker 1: you're maybe wandering around in, like, you know, an RPG 465 00:27:58,160 --> 00:28:00,520 Speaker 1: type game, and it's sort of ambient music. It just 466 00:28:00,560 --> 00:28:03,840 Speaker 1: seems to morph and change, right? I mean, this is 467 00:28:03,880 --> 00:28:05,600 Speaker 1: rule based as far as I know.
468 00:28:05,640 --> 00:28:07,720 Speaker 1: I'm far from being an expert in what 469 00:28:08,240 --> 00:28:11,719 Speaker 1: really happens in these game engines, but my understanding is 470 00:28:12,160 --> 00:28:16,919 Speaker 1: that they define specific states, and then they have 471 00:28:17,080 --> 00:28:22,480 Speaker 1: certain rules for either looping specific loops or 472 00:28:22,640 --> 00:28:27,200 Speaker 1: just generating some more atmospheric background tones within a 473 00:28:27,320 --> 00:28:30,760 Speaker 1: palette, or within, like, a scale or something. 474 00:28:30,880 --> 00:28:33,640 Speaker 1: But I'm pretty sure that this is not 475 00:28:33,680 --> 00:28:37,120 Speaker 1: necessarily automatically generated. I mean, there might be randomness in there, 476 00:28:37,119 --> 00:28:40,120 Speaker 1: but I think it's basically rule based. So somebody during 477 00:28:40,160 --> 00:28:45,120 Speaker 1: the development specified, okay, in this state do something like this. 478 00:28:46,120 --> 00:28:49,120 Speaker 1: How do you think that technology will shape music over 479 00:28:49,160 --> 00:28:51,800 Speaker 1: the next ten, twenty years? I mean, obviously, we're at 480 00:28:51,800 --> 00:28:54,680 Speaker 1: a conference festival that is very much involved in the 481 00:28:55,680 --> 00:28:59,000 Speaker 1: connection between technology and music. I love it, I think 482 00:28:59,080 --> 00:29:01,040 Speaker 1: it's amazing. There are some people that are kind of 483 00:29:01,040 --> 00:29:02,800 Speaker 1: freaked out. But I wonder what you think about, like, 484 00:29:02,920 --> 00:29:06,880 Speaker 1: where's it going? Oh, well, that's obviously very hard 485 00:29:06,920 --> 00:29:12,240 Speaker 1: to answer.
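The state-plus-rules scheme sketched in that answer, each game state mapping to a note palette and a looping or ambient rule, might look something like this in miniature. The states, scales, and mode names are hypothetical, a minimal sketch rather than how any real game engine does it.

```python
import random

# Hypothetical rule table: each game state maps to a note palette
# and a rule for filling the background (fixed loop vs. ambient).
STATE_RULES = {
    "explore": {"scale": ["C", "D", "E", "G", "A"], "mode": "ambient"},
    "combat":  {"scale": ["E", "F", "G#", "B"], "mode": "loop"},
}

def background_notes(state, length, seed=0):
    """Pick notes for the current state: loop states cycle a fixed
    riff, ambient states draw random tones from the palette."""
    rule = STATE_RULES[state]
    if rule["mode"] == "loop":
        return [rule["scale"][i % len(rule["scale"])] for i in range(length)]
    rng = random.Random(seed)
    return [rng.choice(rule["scale"]) for _ in range(length)]

print(background_notes("combat", 6))   # cycles the riff: E, F, G#, B, E, F
print(background_notes("explore", 6))  # random tones from the pentatonic palette
```

Picking the pentatonic scale for the ambient state is one common trick: any combination of those tones avoids harsh clashes, so randomness stays safe.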
I mean, so, okay, let me 486 00:29:12,280 --> 00:29:15,520 Speaker 1: start historically, right? So technology and music have 487 00:29:15,680 --> 00:29:20,360 Speaker 1: always interacted very closely, right? So there are actually genres 488 00:29:20,400 --> 00:29:25,680 Speaker 1: which would not be there without the technology. 489 00:29:26,800 --> 00:29:30,360 Speaker 1: Exactly, so the electric guitar: rock and roll wouldn't 490 00:29:30,360 --> 00:29:32,640 Speaker 1: have happened without the electric guitar, and the electric 491 00:29:32,680 --> 00:29:37,320 Speaker 1: guitar was in essence an engineering effort, right? The synthesizer, obviously; 492 00:29:37,360 --> 00:29:40,920 Speaker 1: we are here at Moogfest. So there 493 00:29:41,000 --> 00:29:47,800 Speaker 1: was always close interaction between technology and music. The 494 00:29:48,440 --> 00:29:50,840 Speaker 1: trends that I currently see, and they are not 495 00:29:50,920 --> 00:29:55,800 Speaker 1: really surprising, I guess: I think that the 496 00:29:55,840 --> 00:29:59,480 Speaker 1: interaction of the performer with any kind of sound generation 497 00:29:59,480 --> 00:30:06,840 Speaker 1: or music generation will grow more cohesive. So 498 00:30:07,240 --> 00:30:11,560 Speaker 1: any kind of controller will be easier to use, and 499 00:30:11,560 --> 00:30:14,600 Speaker 1: it will also be easier for 500 00:30:14,640 --> 00:30:17,600 Speaker 1: everybody to create music. And this is definitely a trend 501 00:30:17,640 --> 00:30:19,640 Speaker 1: you already see with DJ apps and so on, 502 00:30:20,320 --> 00:30:23,400 Speaker 1: where they automatically create mashups for you and all 503 00:30:23,480 --> 00:30:27,400 Speaker 1: this stuff.
This is definitely going 504 00:30:27,440 --> 00:30:31,600 Speaker 1: to happen: the user, even if they 505 00:30:31,600 --> 00:30:36,160 Speaker 1: have no music background, will be able to create music 506 00:30:37,080 --> 00:30:39,960 Speaker 1: in a way that makes sense. It might only 507 00:30:39,960 --> 00:30:44,240 Speaker 1: be loop based for now, but there's a lot of 508 00:30:44,280 --> 00:30:48,960 Speaker 1: possibilities here. I also see possibilities in more 509 00:30:49,000 --> 00:30:53,840 Speaker 1: crowd-based approaches to this, right? So what happens 510 00:30:53,880 --> 00:30:56,280 Speaker 1: if you put a hundred people into a room and 511 00:30:56,320 --> 00:30:58,840 Speaker 1: give them, I don't know, an app or something that 512 00:30:58,880 --> 00:31:01,760 Speaker 1: they can control, and then they make music together? Neural 513 00:31:01,880 --> 00:31:07,959 Speaker 1: network music. Exactly. And in this context 514 00:31:08,040 --> 00:31:11,360 Speaker 1: there are also new forms of how artists can communicate with their fans, right? 515 00:31:11,440 --> 00:31:14,560 Speaker 1: So you could release something that is actually interactive. 516 00:31:14,920 --> 00:31:18,360 Speaker 1: So fans could, in the easiest form, vote on something, 517 00:31:18,400 --> 00:31:21,400 Speaker 1: but maybe some more complex input would shape the music 518 00:31:21,440 --> 00:31:24,160 Speaker 1: and the outcome there. So I think these are very 519 00:31:24,200 --> 00:31:27,840 Speaker 1: interesting forms where you already see the seeds in what's 520 00:31:27,880 --> 00:31:31,600 Speaker 1: currently happening, and I think this will definitely evolve.
521 00:31:32,080 --> 00:31:35,200 Speaker 1: Noel and Lerch make some great points about the subtleties 522 00:31:35,240 --> 00:31:38,040 Speaker 1: of music analysis as well as the potential for 523 00:31:38,120 --> 00:31:40,920 Speaker 1: its future. And when we come back, we'll talk more 524 00:31:40,960 --> 00:31:52,280 Speaker 1: about generating music from a computational standpoint. Generating music, like 525 00:31:52,640 --> 00:31:55,480 Speaker 1: musical analysis, is a non-trivial task. How do you 526 00:31:55,520 --> 00:31:59,640 Speaker 1: program a computer so that it might dynamically create aesthetically 527 00:32:00,080 --> 00:32:03,520 Speaker 1: pleasing measures of music without becoming too repetitive or boring, 528 00:32:04,040 --> 00:32:06,600 Speaker 1: or straying too far away from a melody line to 529 00:32:06,680 --> 00:32:10,200 Speaker 1: sound like anything other than just a random series of notes? Now, 530 00:32:10,400 --> 00:32:14,360 Speaker 1: some music, maybe even a lot of music, is written 531 00:32:14,520 --> 00:32:19,360 Speaker 1: very deliberately. You know, you're painstakingly sitting down and figuring 532 00:32:19,400 --> 00:32:22,480 Speaker 1: out what chord comes next, when should you put in 533 00:32:22,520 --> 00:32:26,040 Speaker 1: the key change, how many times should you repeat the chorus. 534 00:32:26,560 --> 00:32:28,880 Speaker 1: It's not as if some mythical muse has reached down 535 00:32:28,960 --> 00:32:32,160 Speaker 1: to touch the musician's brain and create the song fully formed. 536 00:32:32,560 --> 00:32:35,080 Speaker 1: But there have been attempts by humans to create music 537 00:32:35,120 --> 00:32:39,640 Speaker 1: from an almost engineering perspective, so that it 538 00:32:39,680 --> 00:32:42,040 Speaker 1: almost feels like you're taking the artistry out. That's not 539 00:32:42,200 --> 00:32:45,240 Speaker 1: entirely fair.
I don't really believe that is so. But 540 00:32:45,640 --> 00:32:48,720 Speaker 1: there are some songs out there that were 541 00:32:48,760 --> 00:32:52,240 Speaker 1: created by committee, and you could argue that some of 542 00:32:52,280 --> 00:32:58,880 Speaker 1: them perhaps seem to have less merit to them than others. Now, 543 00:32:59,160 --> 00:33:04,600 Speaker 1: there's some committee-designed music that is amazing for 544 00:33:04,760 --> 00:33:07,360 Speaker 1: reasons that are difficult to put into words. For example, 545 00:33:07,400 --> 00:33:11,960 Speaker 1: in nineteen ninety-seven, Dave Soldier, a composer, worked with two artists, 546 00:33:12,720 --> 00:33:17,040 Speaker 1: Komar and Melamid, to create what they titled the Most 547 00:33:17,280 --> 00:33:21,360 Speaker 1: Unwanted Song. They conducted a public survey to find out 548 00:33:21,400 --> 00:33:24,720 Speaker 1: what people most liked and hated in music, and then 549 00:33:24,760 --> 00:33:28,880 Speaker 1: they created two different songs that incorporated many of those elements. 550 00:33:28,960 --> 00:33:33,600 Speaker 1: The lowest-scoring elements became part 551 00:33:33,680 --> 00:33:36,560 Speaker 1: of the Most Unwanted Song. And it's a song that 552 00:33:36,640 --> 00:33:41,440 Speaker 1: lasts about twenty minutes. It's incredibly long. It's a song 553 00:33:41,520 --> 00:33:47,160 Speaker 1: that includes accordion, bagpipes, children's voices, and an opera singer rapping, 554 00:33:47,400 --> 00:33:53,840 Speaker 1: and it also incorporated advertising. It's gloriously awful, and it sounds 555 00:33:53,880 --> 00:34:26,480 Speaker 1: like this. Now, they also did the Most Wanted Song, 556 00:34:26,840 --> 00:34:29,160 Speaker 1: and they created a song that incorporated the elements that 557 00:34:29,200 --> 00:34:32,719 Speaker 1: the survey takers identified as being the most pleasant components 558 00:34:33,040 --> 00:34:35,839 Speaker 1: of music.
The result is something that would likely put 559 00:34:35,920 --> 00:34:39,799 Speaker 1: Kenny G into a coma. It's listening so easy, you 560 00:34:39,840 --> 00:34:43,279 Speaker 1: don't even know you're listening. That's a shout-out to 561 00:34:43,320 --> 00:34:45,920 Speaker 1: Peter Schickele right there. I actually think that this 562 00:34:45,960 --> 00:34:49,239 Speaker 1: song is worse than the Most Unwanted Song, but take 563 00:34:49,239 --> 00:35:18,920 Speaker 1: a listen. Both examples illustrate the power of 564 00:35:19,000 --> 00:35:22,200 Speaker 1: music analysis, as well as how it can easily be 565 00:35:22,320 --> 00:35:25,560 Speaker 1: misinterpreted or misused, which can create, I think we can 566 00:35:25,600 --> 00:35:30,359 Speaker 1: all agree, horrific results. But neither of those pieces was 567 00:35:30,440 --> 00:35:33,440 Speaker 1: actually generated by computers. That was all the work of 568 00:35:33,520 --> 00:35:37,080 Speaker 1: human beings. Human beings with a wonky sense of humor, 569 00:35:37,200 --> 00:35:39,439 Speaker 1: but still human. And you might think that the first 570 00:35:39,440 --> 00:35:42,319 Speaker 1: computer generated music must have come a decade or so later. 571 00:35:42,400 --> 00:35:45,359 Speaker 1: I mean, the Unwanted Song and Wanted Song both came 572 00:35:45,360 --> 00:35:50,799 Speaker 1: out in nineteen ninety-seven, but that was late for computer generated music. 573 00:35:50,880 --> 00:35:54,560 Speaker 1: The first actual piece written by computer was the Illiac 574 00:35:54,800 --> 00:35:59,440 Speaker 1: Suite for String Quartet, created in nineteen fifty-seven. This 575 00:35:59,520 --> 00:36:03,319 Speaker 1: was the work of Lejaren Hiller, a composer, and Leonard Isaacson, 576 00:36:03,480 --> 00:36:07,440 Speaker 1: a mathematician, and their approach was fairly straightforward.
They created 577 00:36:07,440 --> 00:36:11,440 Speaker 1: a program that would generate pseudo-random integers, which in 578 00:36:11,480 --> 00:36:15,319 Speaker 1: turn would represent important information with regard to musical composition, 579 00:36:15,480 --> 00:36:21,000 Speaker 1: such as pitch, rhythm, dynamics, and other factors. This 580 00:36:21,080 --> 00:36:24,000 Speaker 1: information would then go through a filter, 581 00:36:24,200 --> 00:36:26,840 Speaker 1: and that filter would force the data to follow rules 582 00:36:26,840 --> 00:36:31,040 Speaker 1: of composition: it sorted out anything that went outside 583 00:36:31,040 --> 00:36:34,279 Speaker 1: the rules of composition, and anything that was 584 00:36:34,320 --> 00:36:37,520 Speaker 1: within the rules would get a pass. And the resulting 585 00:36:37,560 --> 00:36:41,640 Speaker 1: piece of music for string quartet sounds a bit experimental, 586 00:36:41,920 --> 00:36:45,239 Speaker 1: but it doesn't exactly sound mechanical. It sounds kind of 587 00:36:45,280 --> 00:37:02,640 Speaker 1: like this. Other experiments in music generation followed, but they 588 00:37:02,680 --> 00:37:07,120 Speaker 1: all depended pretty heavily on computers working within relatively strict 589 00:37:07,239 --> 00:37:10,200 Speaker 1: sets of rules, with a good deal of human guidance 590 00:37:10,200 --> 00:37:12,759 Speaker 1: along the way. And of course the computers had no 591 00:37:12,800 --> 00:37:17,080 Speaker 1: actual understanding of music. You could program in rules for 592 00:37:17,120 --> 00:37:20,560 Speaker 1: different musical genres, and computers can do that. That's what 593 00:37:20,640 --> 00:37:23,680 Speaker 1: computers do.
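That generate-then-filter loop can be sketched in a few lines. The two rules below, no repeated notes and no leap wider than a fifth, are a toy stand-in; Hiller and Isaacson's actual counterpoint rules were far more elaborate.

```python
import random

def passes_rules(melody, candidate):
    """Toy composition filter: reject immediate repeats and any
    leap wider than a fifth (seven semitones)."""
    if not melody:
        return True
    leap = abs(candidate - melody[-1])
    return 0 < leap <= 7

def compose(length, low=60, high=72, seed=1):
    """Generate pseudo-random MIDI pitches and keep only the ones
    the rule filter lets through: the generate-and-test loop."""
    rng = random.Random(seed)
    melody = []
    while len(melody) < length:
        candidate = rng.randint(low, high)
        if passes_rules(melody, candidate):
            melody.append(candidate)
    return melody

line = compose(8)
print(line)
```

Everything musical about the output lives in the filter, which is exactly why the result follows the rules without the program understanding them.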
They're really good at following rules, but the 594 00:37:23,719 --> 00:37:26,520 Speaker 1: machines have no way of knowing why those rules exist 595 00:37:26,680 --> 00:37:29,680 Speaker 1: or what sort of effect those rules have on the 596 00:37:29,760 --> 00:37:34,279 Speaker 1: music itself. Computer scientists have created some interesting experiments to 597 00:37:34,360 --> 00:37:38,800 Speaker 1: build music generators. For example, Matt Vitelli and Aran Nayebi 598 00:37:39,160 --> 00:37:42,360 Speaker 1: built software that analyzed a piece of music by Madeon, 599 00:37:42,960 --> 00:37:47,120 Speaker 1: a French DJ. Mah-day-on, I suppose; I 600 00:37:47,160 --> 00:37:52,759 Speaker 1: apologize, my French is not very good. The software 601 00:37:52,840 --> 00:37:56,759 Speaker 1: analyzed Madeon's work and then attempted to replicate it. It 602 00:37:56,960 --> 00:38:00,400 Speaker 1: used recurrent neural networks in an attempt to capture the essence 603 00:38:00,440 --> 00:38:03,680 Speaker 1: of the music and make something similar. The neural network 604 00:38:03,800 --> 00:38:07,279 Speaker 1: learned with every iteration, learning how 605 00:38:07,320 --> 00:38:10,800 Speaker 1: to more closely mimic the style. So when it first 606 00:38:10,840 --> 00:38:18,680 Speaker 1: started, it sounded like pure noise. It took two thousand 607 00:38:18,760 --> 00:38:22,240 Speaker 1: iterations before it generated something that resembled a song more 608 00:38:22,600 --> 00:38:31,359 Speaker 1: than noise. But it shows that these learning algorithms are 609 00:38:31,400 --> 00:38:35,719 Speaker 1: able to start focusing on which elements 610 00:38:35,800 --> 00:38:42,720 Speaker 1: represent meaningful information versus meaningless information.
So would this eventually 611 00:38:42,760 --> 00:38:45,120 Speaker 1: be able to create its own music if you were 612 00:38:45,239 --> 00:38:48,840 Speaker 1: to, say, set it to listening to a radio station 613 00:38:48,880 --> 00:38:52,600 Speaker 1: for long enough? Who's to say? Over at Google, the 614 00:38:52,680 --> 00:38:55,240 Speaker 1: Brain team is working on a ton of different projects 615 00:38:55,239 --> 00:38:59,799 Speaker 1: related to machine learning and artificial intelligence, including exploring opportunities 616 00:38:59,800 --> 00:39:03,880 Speaker 1: for computer generated music. This falls under something called the 617 00:39:03,880 --> 00:39:07,640 Speaker 1: Magenta Project, and the project has two purposes. The first 618 00:39:07,719 --> 00:39:11,239 Speaker 1: is to experiment with machines creating different forms of art automatically, 619 00:39:11,400 --> 00:39:15,640 Speaker 1: including music. The second purpose is to foster a community 620 00:39:15,719 --> 00:39:18,560 Speaker 1: of artists and programmers to find new and interesting ways 621 00:39:18,600 --> 00:39:23,160 Speaker 1: to use this technology that Google has created. On the 622 00:39:23,200 --> 00:39:27,360 Speaker 1: official page for Magenta, Douglas Eck points out that artists 623 00:39:27,360 --> 00:39:30,239 Speaker 1: have always found innovative ways to put technology to use 624 00:39:30,320 --> 00:39:33,319 Speaker 1: beyond what the creators had in mind, and that's where 625 00:39:33,360 --> 00:39:36,760 Speaker 1: true innovation lies. So, in other words, when you create 626 00:39:36,800 --> 00:39:39,920 Speaker 1: an electric guitar for the first time, you're probably not 627 00:39:40,040 --> 00:39:43,040 Speaker 1: anticipating the way Jimi Hendrix is going to play it 628 00:39:43,400 --> 00:39:46,959 Speaker 1: decades later.
So artists have been able to take things 629 00:39:46,960 --> 00:39:51,120 Speaker 1: that people have created and move them beyond even the 630 00:39:51,120 --> 00:39:53,960 Speaker 1: creator's expectations. That's kind of what they're hoping for over at 631 00:39:53,960 --> 00:39:57,840 Speaker 1: the Magenta Project. Eck goes on to point out that 632 00:39:57,880 --> 00:40:00,759 Speaker 1: short form machine generated music can be quite effective, and 633 00:40:00,800 --> 00:40:03,839 Speaker 1: it's been around for a while. There are generators out 634 00:40:03,840 --> 00:40:08,000 Speaker 1: there that can make short songs, essentially, or short pieces 635 00:40:08,000 --> 00:40:13,120 Speaker 1: of music. But if you increase the duration requirement, if 636 00:40:13,120 --> 00:40:16,880 Speaker 1: you require the music to last longer, you start running 637 00:40:16,920 --> 00:40:19,399 Speaker 1: into the limitations of the technology. They start to become 638 00:40:19,440 --> 00:40:22,360 Speaker 1: more apparent, and it becomes clear that machines aren't really 639 00:40:22,400 --> 00:40:25,840 Speaker 1: good at sustaining a long-term narrative in any format. 640 00:40:26,560 --> 00:40:30,200 Speaker 1: The Magenta Project isn't just a single approach. It's not 641 00:40:30,280 --> 00:40:33,040 Speaker 1: like a group of folks who are just working on 642 00:40:33,040 --> 00:40:36,400 Speaker 1: one set of algorithms. Think of it more like a 643 00:40:36,440 --> 00:40:41,480 Speaker 1: platform, or a list of assets, a list of available 644 00:40:42,000 --> 00:40:46,759 Speaker 1: bits and pieces other people can use, and programmers 645 00:40:46,760 --> 00:40:49,680 Speaker 1: and musicians can build tools out of those pieces for 646 00:40:49,719 --> 00:40:52,960 Speaker 1: generating music. Now, some of those tools may end up 647 00:40:52,960 --> 00:40:56,919 Speaker 1: being way more effective than other tools.
Just figuring out 648 00:40:56,920 --> 00:41:00,480 Speaker 1: how to evaluate the abilities of the software could end 649 00:41:00,520 --> 00:41:02,960 Speaker 1: up becoming a challenge. How can you tell if one 650 00:41:03,000 --> 00:41:07,080 Speaker 1: autonomous music generator is quote unquote better than another one? 651 00:41:07,719 --> 00:41:10,879 Speaker 1: Music is pretty subjective, and what I might like might 652 00:41:10,920 --> 00:41:13,960 Speaker 1: not be what you like, and there are some qualitative 653 00:41:13,960 --> 00:41:17,840 Speaker 1: elements that are pretty difficult 654 00:41:17,880 --> 00:41:20,719 Speaker 1: to get a conversation going about, because if you have 655 00:41:20,800 --> 00:41:24,400 Speaker 1: a very different set of pros and cons, or 656 00:41:24,440 --> 00:41:27,719 Speaker 1: set of preferences I should say, about music than 657 00:41:27,760 --> 00:41:31,000 Speaker 1: I do, then we might hit a wall. But there are 658 00:41:31,040 --> 00:41:34,360 Speaker 1: some quantitative elements, such as the amount of variation in 659 00:41:34,400 --> 00:41:38,160 Speaker 1: a piece and whether the music generated fits whatever genre 660 00:41:38,239 --> 00:41:41,440 Speaker 1: you're aiming for, that you can use. That's a 661 00:41:41,480 --> 00:41:44,719 Speaker 1: little bit easier because it's a quantitative, or more or 662 00:41:44,800 --> 00:41:47,399 Speaker 1: less quantitative, element. But pretty soon you get into 663 00:41:47,440 --> 00:41:49,680 Speaker 1: more subjective territory, and that's where it all breaks down. 664 00:41:50,200 --> 00:41:53,480 Speaker 1: At the moment, machines are better at interpreting and combining 665 00:41:53,560 --> 00:41:57,480 Speaker 1: musical pieces than they are at creating something entirely new.
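[Editor's note: the "amount of variation in a piece" mentioned above can in fact be measured mechanically. The sketch below is a minimal illustration, not any metric actually used by Magenta: it treats a melody as a list of MIDI pitch numbers (a hypothetical representation chosen for simplicity) and scores variation as the Shannon entropy of the intervals between successive notes.]

```python
from collections import Counter
from math import log2

def pitch_variation(melody):
    """Shannon entropy (in bits) of the intervals between successive
    notes -- a crude, purely quantitative proxy for how much a melody
    varies. Repetitive melodies score low; varied ones score higher."""
    intervals = [b - a for a, b in zip(melody, melody[1:])]
    counts = Counter(intervals)
    total = len(intervals)
    return -sum((c / total) * log2(c / total) for c in counts.values())

# A two-note trill only ever uses intervals +2 and -2, so its
# interval distribution carries exactly one bit of entropy.
print(pitch_variation([60, 62, 60, 62, 60, 62, 60]))  # → 1.0
# A melody with many distinct leaps scores higher.
print(pitch_variation([60, 64, 62, 67, 65, 59, 72]))
```

A metric like this captures only one quantitative axis; as the episode notes, it says nothing about whether the result is pleasant or fits a genre, which is where evaluation turns subjective.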
For example, 666 00:41:57,840 --> 00:42:00,239 Speaker 1: David Cope, who is a professor emeritus at the 667 00:42:00,320 --> 00:42:04,000 Speaker 1: University of California, Santa Cruz, and also a composer, launched 668 00:42:04,000 --> 00:42:07,520 Speaker 1: a project called Experiments in Musical Intelligence many years ago 669 00:42:07,920 --> 00:42:11,480 Speaker 1: and used a computer program to analyze various classical composers' 670 00:42:11,560 --> 00:42:15,920 Speaker 1: musical works. Then the program would construct new pieces using 671 00:42:16,160 --> 00:42:19,920 Speaker 1: the elements it had analyzed as building blocks for that piece. 672 00:42:20,040 --> 00:42:23,799 Speaker 1: So the program wasn't really writing something entirely new, but 673 00:42:23,920 --> 00:42:28,480 Speaker 1: rather combining found elements in new ways. Now, perhaps in 674 00:42:28,520 --> 00:42:31,000 Speaker 1: the future machines will be able to make art on 675 00:42:31,040 --> 00:42:34,440 Speaker 1: their own with minimal human input, and if that happens, 676 00:42:34,440 --> 00:42:37,560 Speaker 1: we'll likely have to face some tough philosophical questions about 677 00:42:37,600 --> 00:42:40,440 Speaker 1: the nature of art. If a machine doesn't possess self 678 00:42:40,480 --> 00:42:44,359 Speaker 1: awareness or consciousness and really is just a complicated set 679 00:42:44,400 --> 00:42:48,520 Speaker 1: of equations that generate data according to some general rules, 680 00:42:49,280 --> 00:42:54,319 Speaker 1: is its production actually art? Is intent required for it 681 00:42:54,400 --> 00:42:56,680 Speaker 1: to be art? Does the artist have to intend something 682 00:42:56,719 --> 00:42:59,520 Speaker 1: in order for it to be art? If people enjoy 683 00:42:59,600 --> 00:43:04,080 Speaker 1: the work and find it intellectually or emotionally stimulating, does 684 00:43:04,160 --> 00:43:08,320 Speaker 1: that make it real music?
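[Editor's note: the analyze-then-recombine idea described above — learning building blocks from existing works and reassembling them — can be sketched with a first-order Markov chain. This is a deliberately tiny toy, not Cope's actual system, which is far more sophisticated: it tallies which note follows which in a corpus, then random-walks those transitions to "compose" something new out of found elements.]

```python
import random

def learn_transitions(pieces):
    """Analysis step: tally which note follows which across a corpus
    of melodies (lists of MIDI pitches). The corpus's own material
    becomes the building blocks."""
    table = {}
    for piece in pieces:
        for a, b in zip(piece, piece[1:]):
            table.setdefault(a, []).append(b)
    return table

def generate(table, start, length, seed=0):
    """Composition step: random-walk the learned transitions. Every
    step recombines pairs found in the corpus rather than inventing
    genuinely new material."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = table.get(out[-1])
        if not choices:
            break  # dead end: no observed successor for this note
        out.append(rng.choice(choices))
    return out

corpus = [[60, 62, 64, 62, 60], [64, 65, 67, 65, 64]]
table = learn_transitions(corpus)
print(generate(table, 60, 8))
```

By construction, every consecutive pair in the output appears somewhere in the corpus — a concrete sense in which such a program is "combining found elements in new ways" rather than writing something entirely new.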
If I like something 685 00:43:08,600 --> 00:43:10,920 Speaker 1: and I find out later on that a computer generated 686 00:43:10,920 --> 00:43:13,919 Speaker 1: it completely from start to finish, does that at all 687 00:43:14,040 --> 00:43:17,239 Speaker 1: lessen the value of that music? Or does the fact 688 00:43:17,280 --> 00:43:19,719 Speaker 1: that I like it mean that it's quote unquote real? 689 00:43:20,760 --> 00:43:23,080 Speaker 1: We're not at the stage right now where those questions 690 00:43:23,080 --> 00:43:25,719 Speaker 1: need urgent answers, but I do think they're really interesting, 691 00:43:25,960 --> 00:43:28,760 Speaker 1: and now it's time that we play our own music 692 00:43:29,000 --> 00:43:31,000 Speaker 1: and get the heck out of here. So if you 693 00:43:31,000 --> 00:43:34,520 Speaker 1: guys have any suggestions for future episodes of tech Stuff, 694 00:43:35,440 --> 00:43:37,919 Speaker 1: write to me. Let me know what you think. Our 695 00:43:38,040 --> 00:43:41,680 Speaker 1: email address is tech Stuff at how stuff works dot com, 696 00:43:41,960 --> 00:43:44,520 Speaker 1: or you can drop me a line on Twitter or Facebook. 697 00:43:44,840 --> 00:43:46,520 Speaker 1: The handle for the show at both of those is 698 00:43:46,560 --> 00:43:50,880 Speaker 1: tech Stuff hsw. On Wednesdays and Fridays, I record in 699 00:43:51,000 --> 00:43:54,080 Speaker 1: the studio and you can watch me live on twitch 700 00:43:54,160 --> 00:43:57,920 Speaker 1: dot tv slash tech Stuff. Watch as I struggle for 701 00:43:58,000 --> 00:44:03,120 Speaker 1: words and fail and then head desk and then tell 702 00:44:03,200 --> 00:44:05,520 Speaker 1: Dylan to pause the recording so I can come up 703 00:44:05,560 --> 00:44:07,680 Speaker 1: with something and then start the recording again.
You get 704 00:44:07,719 --> 00:44:10,040 Speaker 1: to see the whole thing, so all the stuff that 705 00:44:10,080 --> 00:44:12,440 Speaker 1: gets cut out of the podcast, you can watch it 706 00:44:12,520 --> 00:44:17,320 Speaker 1: happen live. Sometimes I dance. I hope to see you 707 00:44:17,400 --> 00:44:21,080 Speaker 1: Wednesdays and Fridays at twitch dot tv slash tech Stuff, 708 00:44:21,080 --> 00:44:30,440 Speaker 1: and I'll talk to you again really soon. For more 709 00:44:30,480 --> 00:44:32,799 Speaker 1: on this and thousands of other topics, visit how 710 00:44:32,800 --> 00:44:43,480 Speaker 1: stuff works dot com