Speaker 1: Get in touch with technology with TechStuff from HowStuffWorks.com. Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer and I love all things tech, and today I'm going to tackle a topic suggested by listener Jesse, so shout out to Jesse. Jesse wanted to know more about MIDI. What is MIDI? How does it work? Why is it nearly synonymous with computer music? And is it actually a type of music in and of itself? I'm going to tackle that last question first, because that was just an easy setup. While some people use MIDI as shorthand for computer music, the two are not exactly the same thing. MIDI stands for Musical Instrument Digital Interface, and it's a protocol, a set of rules that allows a synthesizer or MIDI controller to send data to a computer or other synthesizers in a meaningful way. And in fact, no sound is sent through MIDI at all, which might seem a little strange. To understand MIDI and how it works, it first behooves us to go into a little history on synthesizers in general, starting with analog synthesizers.
Speaker 1: Now, an analog synthesizer is an electronic musical instrument that makes use of various components to produce and shape sound. These components can be modular. In fact, the earliest analog synthesizers were entirely modular. You had to get a whole bunch of different components and patch them together with cables — this is what we call patches — in order to make any sort of meaningful sound at all, and certain modules are in charge of creating certain effects or sounds. Modules can include stuff like oscillators, filters, and voltage-controlled amplifiers. Typically, a synthesizer has, at minimum, three basic modules. The first is an oscillator. The oscillator's job is to create a base tone. This tone is what the rest of the modules can shape to create the different pitches and effects that change the shape of the sound. An oscillator causes energy to move between two states at a particular frequency. Now, this is easiest to imagine, I think, with a physical oscillator like a pendulum. If you push a pendulum, it will begin to swing, or oscillate, and one full swing is one full oscillation.
Speaker 1: At the height of its swing, all of the energy in the system is potential energy — right, it's not moving, it's at its highest point. That energy converts into kinetic energy as gravity takes hold and the pendulum swings downward. This swinging is the oscillation. But oscillators will eventually run out of energy due to loss in the system. That's the laws of thermodynamics. In this physical example, friction cuts down on the amount of energy within the system — it actually means you're losing energy out of the system due to heat. In circuits, oscillators lose energy due to electrical resistance, so it's very similar. The point being that unless you continue to pour energy back into the system, it will eventually run down, because it will lose enough energy that it doesn't perpetuate itself anymore. Now, to get into the physics of oscillators would take up a bit more time, so let's just leave it at the idea that there is a component within an analog synthesizer that generates a steady frequency that serves as the baseline for all other modules in the synthesizer.
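The episode describes the oscillator only in words, but the idea — a component that emits a steady waveform at a chosen frequency — can be sketched in a few lines of Python. The sample rate and the 440 Hz example tone are my illustrative choices, not values from the episode:

```python
import math

def sine_oscillator(freq_hz, duration_s, sample_rate=44100):
    """Generate samples of a steady sine tone -- the base waveform
    that a synthesizer's other modules would then shape."""
    n_samples = int(duration_s * sample_rate)
    # Each sample advances the phase by freq_hz / sample_rate of a full cycle.
    return [math.sin(2 * math.pi * freq_hz * n / sample_rate)
            for n in range(n_samples)]

# One second of a 440 Hz tone (concert A).
tone = sine_oscillator(440.0, 1.0)
```

A real analog oscillator is a continuous circuit rather than a list of samples, but the principle — a repeating cycle at a fixed frequency — is the same.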
Speaker 1: The second component in a typical synthesizer is the mechanism for controlling the oscillator, which is usually a keyboard similar to one on a piano. You could use other means to change the waveform, though. For example, theremins use fluctuations in the electromagnetic field to affect the baseline waveform, though precise, complete control of the signal isn't really possible with such an instrument, even under the control of a skilled player. The keyboard or pitch wheel or whatever can set the oscillator's frequency, which will affect the pitch. The frequency of a sound and how we perceive that sound are directly related: lower frequencies produce lower-pitched sounds. Human hearing ranges from about twenty hertz, or twenty cycles of a sound wave per second, up to twenty thousand hertz. As we get older — like me — those upper ranges start to get harder for us to perceive. This is the principle behind some anti-youth, anti-loitering strategies supposedly employed by certain convenience store owners, who have reportedly resorted to playing very high-pitched sounds that adults can't really hear, because they've lost that ability, they've lost that range of hearing.
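Since this episode is ultimately about MIDI, it's worth noting how the frequency-pitch relationship shows up there: by the standard convention, MIDI numbers notes 0 through 127, note 69 is A above middle C at 440 Hz, and each semitone step multiplies the frequency by the twelfth root of two. A sketch of that standard conversion:

```python
def midi_note_to_freq(note):
    """Convert a MIDI note number (0-127) to a frequency in Hz.
    Standard equal-temperament convention: note 69 = A440, and each
    semitone multiplies the frequency by 2 ** (1/12)."""
    return 440.0 * 2 ** ((note - 69) / 12)

a440 = midi_note_to_freq(69)       # 440.0 Hz
a_octave_down = midi_note_to_freq(57)  # 220.0 Hz -- one octave lower
```

Doubling the note number's distance by twelve semitones doubles the frequency, which is exactly the octave relationship our ears perceive.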
Speaker 1: But those lousy kids and that mangy mutt can totally hear it, and it irritates the heck out of them, so they don't stick around your store for too long. The third component found in typical synthesizers would be the filters and effects you can apply to the sound's waveform to change the nature of the sound, the feel of it. These filters let you select which elements of the frequency can pass through to an amplifier so that it can hit a speaker and be heard. By gatekeeping elements of frequencies, you can change the shape or nature of a sound, which is why you can have a synthesizer take on many different sounds even though it's starting with the same basic waveform. Beyond frequency, or pitch, and amplitude, or volume, you can also manipulate the change in volume over the lifespan of a sound. So if you press down on a piano key quickly and firmly, you'll notice that the sound is initially loud and then fades off, and when you let up off the piano key, it will eventually stop.
Speaker 1: If you mess with the sustain pedals, you can push down that same key with the same force and hear it play out a little differently. And we describe this process with synthesizers by dividing it up into phases, and they're called attack, decay, sustain, and release, or ADSR. The attack describes the time it takes from the press of a key, or the null sound — zero volume — to reach the peak of that key's volume. The decay is the time it takes to go from the peak of the volume to a designated sustain level. The sustain is the volume of sound that should play until the respective key is released by the player. The release time is the amount of time it takes for the sustained volume to decay to null again. The various effects on synthesizers can change these elements, creating louder or softer sustains — you can even have a sustain that gets louder than the attack if you wanted to — and you could have longer or shorter decay times, and tons more effects. It helps create a more dynamic experience with a synthesized instrument.
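The four ADSR phases described above map naturally onto a piecewise function of time. Here's a minimal sketch in Python — the parameter names and the assumption that the key is held past the end of the decay phase are mine, not from the episode:

```python
def adsr_envelope(t, attack, decay, sustain_level, release, note_off_time):
    """Volume (0.0 to 1.0) at time t seconds after a key press.
    attack, decay, release are durations in seconds; sustain_level is the
    held volume. Assumes note_off_time >= attack + decay (key held that long)."""
    if t < attack:                       # attack: ramp from silence to peak
        return t / attack
    if t < attack + decay:               # decay: fall from peak to sustain level
        frac = (t - attack) / decay
        return 1.0 - frac * (1.0 - sustain_level)
    if t < note_off_time:                # sustain: hold while the key is down
        return sustain_level
    if t < note_off_time + release:      # release: fade back to silence
        frac = (t - note_off_time) / release
        return sustain_level * (1.0 - frac)
    return 0.0                           # note has fully ended
```

Multiplying the oscillator's raw samples by this envelope, sample by sample, is what gives a synthesized note its loud-then-fading piano-like shape.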
Speaker 1: After the synthesizer manipulates the basic waveform based on the keys pressed — or however you're controlling the pitch — and the various filters or effects that are in play, it sends the electrical signal to an amplifier. The amplifier's job is to control the volume of the played sound, typically by passing it through a series of what are called envelope controls. That goes back to that ADSR I was talking about. Envelope controls are essentially tables of data points that describe the nature of the sound generated when a key is pressed. Early synthesizers used actual, physically distinct modules to control all this, like I said before, and you would hook all these modules up to a keyboard with various patch wires, and you would manipulate various switches and knobs to coax the sound you wanted out of the synthesizer. And if you didn't get the sound you wanted, you might have to add additional components to change things up. Now, the history of synthesizers is somewhat debatable, and that's because people disagree over what actually counts as a synthesizer. Some say that the Telharmonium should count.
Speaker 1: The Telharmonium, also known as the Dynamophone — which I swear sounds like something Homer Simpson would say — was invented by Thaddeus Cahill in the eighteen nineties. It was an electric organ that could send music electronically across telephone networks. Now, his goal was to create an instrument capable of creating perfect tones consistently. Physical musical instruments need to be tuned, and they can change their tones based upon variables like temperature and humidity, not to mention the skill of the person playing. But the Telharmonium would harness electricity to create pitch-perfect tones over and over again — or so was the idea. But the Telharmonium didn't allow the player to put precise controls on the quality of a sound, something that some argue should be a basic trait of synthesizers. So they say, well, it shouldn't count. The theremin, which came out in nineteen nineteen, also fails in this regard. French inventors Édouard Coupleux and Armand Givelet created a piano with electronic components that comes closer to the definition many accept as canon for synthesizers.
Speaker 1: The first device to actually use the word synthesizer appears to have been the RCA Electronic Music Synthesizer Mark I, which debuted in nineteen fifty-six and used tuning forks to generate tones. It read music from a strip of paper tape that had holes punched into it, so it's sort of like a player piano. But if we're talking modern synthesizers, we've got to talk about Robert Moog, the genius behind the Moog synthesizer. I've done a full episode about Moog in the past, so I'm not going to dwell on it too much here. I'll just add that he created the first commercial synthesizer by modern standards in nineteen sixty-four, and it was the Moog 900 Series Modular Systems. One big limitation in most analog synthesizers is in the number of notes they can play simultaneously. Many analog synthesizers are monophonic, meaning they can only produce one tone at a time. If you held down two keys, you would not get two tones.
Speaker 1: If you want to create a polyphonic sound the way you could with, say, a piano, you'd have to either get a whole bunch of musicians together, each playing one section of a polyphonic piece on their own Moog synthesizer — or whichever analog synthesizer they're using — all in time with one another, or you'd have to record multiple tracks to fill in the tones. So each track would represent a different monophonic melody, and played together, you would get the polyphonic effect. Eventually, some analog synthesizers supported polyphonic tones at a limited level — for example, four notes played simultaneously — and they tended to be incredibly expensive. As for digital synthesizers, which are at their hearts computers working with bits — the good old zeros and ones of machine language — those trace their history back to research in the late fifties, but commercial digital synthesizers really got their start in the nineteen eighties, with that next craze, New Wave. Like analog synthesizers, they generate or modulate waveforms to create sounds.
Speaker 1: The process, from a very high level, is similar, but the details are different, and digital synthesizers can do some things that analog synthesizers either cannot do or cannot do very well. For example, one analog synthesizer might be monophonic or have limited polyphonic capabilities. A basic digital synthesizer could have a polyphony, if you like, of sixty-four notes being played simultaneously — although I should add that that depends also on how many voices you're playing on the synthesizer. With each voice, you reduce the number of notes that can be played simultaneously, because each voice gets a certain number of notes dedicated to it. That being said, there's no guarantee that a digital synthesizer will sound better than an analog one. It could, or it might not. It all depends upon the build quality of the two synthesizers. Sound quality relies on more than just the number of options you have when you're shaping sound. All right, so that's the basic info on synthesizers. Now let's talk about MIDI.
Speaker 1: In the early nineteen eighties, a man named Dave Smith saw the need for a universal standard that would allow synthesizers to send data to other instruments or to computers. This would give musicians unprecedented options when making music, including new ways to manipulate sound. Synthesizers were versatile, but no two models were exactly alike, particularly from different manufacturers. One model might have a really cool feature that other synthesizers lacked, but fall short on a completely different feature. A universal protocol could let a musician chain together multiple instruments or perform additional processes on sound at the computer level. Related to this problem is that of competing proprietary approaches to musical interfaces. Without a standard, each synthesizer manufacturer would be compelled to produce its own interface with other synthesizers and with computers. In fact, such standards did exist — well, they weren't really standards, they were proprietary approaches that were unique to specific manufacturers, like Roland, for example, or Yamaha.
Speaker 1: Then you would have a bunch of competing technologies on the market that, more likely than not, would be impossible to chain together, so you'd be locked into one ecosystem. You would have to be all in on Roland, or all in on Moog, or all in on Yamaha. You couldn't mix and match, because they wouldn't be able to talk to each other. It's sort of like the early days of computing before ARPANET came along and you had a set of protocols that would let computers talk to each other. The same basic problem existed in the early nineteen eighties. It was a huge mess for musicians and producers. So a universal standard would set a level playing field, give musicians and producers the greatest number of options when creating music, and avoid fragmentation of the market. Dave Smith first proposed such a standard in nineteen eighty-one at a meeting of the Audio Engineering Society, and he called his first approach the Universal Synthesizer Interface.
233 00:14:02,080 --> 00:14:05,680 Speaker 1: Smith recognized that while manufacturers were able to create systems 234 00:14:05,720 --> 00:14:08,400 Speaker 1: that would allow you to control multiple synthesizers made by 235 00:14:08,440 --> 00:14:11,560 Speaker 1: that manufacturer, there was still no standard that would allow 236 00:14:11,640 --> 00:14:15,880 Speaker 1: for interoperability, and manufacturers were concerned that this issue was 237 00:14:15,960 --> 00:14:20,640 Speaker 1: costing them customers by creating this frustrating environment. Two years later, 238 00:14:20,840 --> 00:14:23,760 Speaker 1: he would release the first version of the MIDI protocols. 239 00:14:23,800 --> 00:14:27,800 Speaker 1: This three, he didn't develop the protocol all by himself. 240 00:14:27,920 --> 00:14:32,320 Speaker 1: Major synthesizer companies like Roland, Yamaha, and several others were 241 00:14:32,360 --> 00:14:35,400 Speaker 1: all involved in designing the set of rules and standards. 242 00:14:35,720 --> 00:14:38,920 Speaker 1: It was a pretty remarkable display of competitors working together 243 00:14:39,000 --> 00:14:41,920 Speaker 1: to create a technology that would benefit the entire industry, 244 00:14:42,200 --> 00:14:45,840 Speaker 1: not just one company within it. The designers decided that 245 00:14:45,920 --> 00:14:49,040 Speaker 1: MIDI would send information as a list of events or 246 00:14:49,160 --> 00:14:52,880 Speaker 1: messages to instruct a device how to make a certain 247 00:14:52,960 --> 00:14:56,520 Speaker 1: type of sound. Now, again, this wasn't a music file 248 00:14:56,760 --> 00:15:00,240 Speaker 1: or any other form of music, but rather direct is 249 00:15:00,280 --> 00:15:04,560 Speaker 1: the recipient would follow to generate the appropriate sound. 
Speaker 1: I'll talk about some of the typical MIDI messages in the next section, but first let's take a quick break to thank our sponsor. Here are a few basic messages the MIDI protocol defined. Note On: this is a message that indicates a note has been initiated, which is pretty self-explanatory by the name. So on a keyboard, this would be when a key has been pressed, but other instruments can also have MIDI ports on them, so it could also mean a guitar string is strummed or a clarinet has produced a note. The instructions tell the device receiving this data which note has been played and the velocity of the note. Velocity equals how hard the note was played, so with a piano key it relates to volume. For example, if you press the key faster, it indicates a harder strike, which means that the note should be louder. Not every MIDI keyboard is capable of actually recording that information, but a lot of them are — they have velocity-sensitive keys, so you can actually record that info. Note Off is a similar message.
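To make this concrete: in the MIDI 1.0 spec, a Note On or Note Off is just three bytes — a status byte carrying the message type and channel, then the note number and velocity, each limited to 0-127. A sketch of building those messages (the helper function names are mine):

```python
def note_on(channel, note, velocity):
    """Build a 3-byte MIDI Note On message.
    Status byte 0x90 plus the channel (0-15), then the note number and
    velocity, each 0-127 (the high bit is reserved for status bytes)."""
    return bytes([0x90 | channel, note & 0x7F, velocity & 0x7F])

def note_off(channel, note, velocity=0):
    """Build a 3-byte MIDI Note Off message (status byte 0x80)."""
    return bytes([0x80 | channel, note & 0x7F, velocity & 0x7F])

msg = note_on(0, 60, 100)   # middle C on channel 1, a fairly hard strike
```

Three bytes per event is why MIDI data is so compact compared to recorded audio: it carries the gesture, not the sound.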
Speaker 1: It tells the receiving device that a played note has ended. So it might be when you have released a key, or when vibrations stop along a string, and that message says, all right, at this point, stop playing the note, because there's no longer a thing that's generating that sound. Polyphonic Key Pressure is another instruction that tells the receiving device how hard a key was pressed once it bottoms out in its lowest position. Some keyboards use this to add effects to notes, such as vibrato. Vibrato, by the way — that's a rapid variation in pitch. So you add a quick oscillation in pitch to create vibrato. It adds a richness to sound. Also, singers use it to cover up the fact that they can't hit a note. That's a little shade-throwing right there. Also, I do this too. So, Control Change is a message that indicates that some sort of controller has been activated to affect the quality of a sound. Controllers can take many forms — you could have pedals, you could have knobs.
286 00:17:14,200 --> 00:17:18,280 Speaker 1: Then control change message contains information that indicates which controller 287 00:17:18,359 --> 00:17:21,280 Speaker 1: was used and the signs of value from zero to one, 288 00:17:21,720 --> 00:17:25,639 Speaker 1: seven or one to eight, depending upon the implementation. To 289 00:17:25,760 --> 00:17:30,479 Speaker 1: describe the magnitude of this change, The pitch wheel change 290 00:17:30,680 --> 00:17:35,800 Speaker 1: message records instances of pitch wheel use. That's not useful 291 00:17:35,840 --> 00:17:38,280 Speaker 1: at all, is it. A pitch wheel is a control 292 00:17:38,320 --> 00:17:40,639 Speaker 1: that allows a musician to affect the pitch of a 293 00:17:40,720 --> 00:17:44,240 Speaker 1: played note, and they can control it dynamically. This creates 294 00:17:44,240 --> 00:17:47,920 Speaker 1: the effect of bending a musical note. So if you've 295 00:17:47,960 --> 00:17:51,760 Speaker 1: ever heard musical piece where a note is played and 296 00:17:51,800 --> 00:17:54,600 Speaker 1: then starts to shift to a different pitch without a 297 00:17:54,680 --> 00:17:57,280 Speaker 1: new note being played, that's kind of what a pitch 298 00:17:57,280 --> 00:17:59,679 Speaker 1: wheel is able to do. And then there are system 299 00:17:59,720 --> 00:18:04,440 Speaker 1: ex collusive or six X messages. This allows for custom 300 00:18:04,520 --> 00:18:08,560 Speaker 1: patches and effects. Manufacturers could use these messages to allow 301 00:18:08,640 --> 00:18:12,000 Speaker 1: a mini controller to take advantage of unique features of 302 00:18:12,359 --> 00:18:16,399 Speaker 1: their instruments. For example, so let's say you're a manufacturer 303 00:18:16,400 --> 00:18:18,600 Speaker 1: and you've got a synthesizer that has a new type 304 00:18:18,640 --> 00:18:22,800 Speaker 1: of effect, and no other synthesizers have this. It's proprietary. 
305 00:18:22,840 --> 00:18:25,040 Speaker 1: You've got this cool effect that no one else has 306 00:18:25,080 --> 00:18:27,920 Speaker 1: been able to replicate. The sis X feature would allow 307 00:18:27,960 --> 00:18:30,439 Speaker 1: you to designate a method for a mini controller to 308 00:18:30,520 --> 00:18:34,919 Speaker 1: engage that feature without sharing it to everybody else. Otherwise, 309 00:18:34,960 --> 00:18:37,720 Speaker 1: you'd have a keyboard that has a really cool ability, 310 00:18:37,760 --> 00:18:39,240 Speaker 1: but you never be able to use it through a 311 00:18:39,280 --> 00:18:44,960 Speaker 1: MIDI controller because there'd be no way to designate that command. Right, 312 00:18:45,119 --> 00:18:48,879 Speaker 1: you have your own proprietary effect. If you don't create 313 00:18:48,920 --> 00:18:52,440 Speaker 1: a command for it in MIDI, then the controller won't 314 00:18:52,480 --> 00:18:54,879 Speaker 1: have any instructions that can send to the synthesizer to 315 00:18:54,960 --> 00:18:57,520 Speaker 1: replicate it. Now, you could still get the effect by 316 00:18:57,560 --> 00:19:00,320 Speaker 1: working with the synthesizer directly, but you want be able 317 00:19:00,359 --> 00:19:03,680 Speaker 1: to send those MIDI instructions to any other device because 318 00:19:03,680 --> 00:19:06,800 Speaker 1: there wouldn't there wouldn't be language to take care of 319 00:19:06,840 --> 00:19:11,000 Speaker 1: that particular instance. Sis X messages allowed for these exceptions, 320 00:19:11,040 --> 00:19:15,359 Speaker 1: these custom patches. A MIDI file has the extension m 321 00:19:15,440 --> 00:19:18,640 Speaker 1: I D or mid. 
If you could read these files 322 00:19:18,680 --> 00:19:21,679 Speaker 1: in natural language, like if you were able to translate 323 00:19:21,760 --> 00:19:24,479 Speaker 1: this as a set of instructions, they would seem like 324 00:19:24,760 --> 00:19:29,320 Speaker 1: really detailed instructions on how to play a certain piece 325 00:19:29,359 --> 00:19:32,160 Speaker 1: of music, not just what the notes are, but how 326 00:19:32,200 --> 00:19:34,200 Speaker 1: to play those notes. It would be similar to reading 327 00:19:34,240 --> 00:19:36,760 Speaker 1: sheet music, but only if the sheet music contained all 328 00:19:36,800 --> 00:19:40,240 Speaker 1: sorts of minutiae about the performance of the piece. And 329 00:19:40,280 --> 00:19:43,360 Speaker 1: it's not just how that one piece should be performed, 330 00:19:43,359 --> 00:19:46,680 Speaker 1: it's how that piece actually was performed once upon a time. 331 00:19:47,000 --> 00:19:50,080 Speaker 1: So it means you're not just transcribing music. You are 332 00:19:50,200 --> 00:19:54,600 Speaker 1: actually recreating a performance of a musical piece. And 333 00:19:54,640 --> 00:19:56,679 Speaker 1: you could actually create a MIDI file by playing a 334 00:19:56,720 --> 00:20:00,359 Speaker 1: MIDI enabled musical instrument connected to a computer. It's 335 00:20:00,400 --> 00:20:05,240 Speaker 1: actually sequencing your playing as you play it, so you 336 00:20:05,320 --> 00:20:08,800 Speaker 1: are creating a precise record of how to play the 337 00:20:08,840 --> 00:20:11,200 Speaker 1: same piece of music the exact same way in the future.
338 00:20:11,640 --> 00:20:14,080 Speaker 1: So again, it's not a recording, it's a set of 339 00:20:14,080 --> 00:20:16,679 Speaker 1: instructions saying, if you want to play what I just 340 00:20:16,760 --> 00:20:20,879 Speaker 1: played the way I played it, follow these instructions exactly, 341 00:20:21,400 --> 00:20:23,679 Speaker 1: and it will be as if I were playing it 342 00:20:23,760 --> 00:20:26,560 Speaker 1: all over again, which is pretty cool all by itself. 343 00:20:26,560 --> 00:20:29,479 Speaker 1: But an additional benefit of the MIDI system is that 344 00:20:29,520 --> 00:20:34,280 Speaker 1: you can modify those instructions in different ways without affecting everything. So, 345 00:20:34,320 --> 00:20:37,040 Speaker 1: for example, if you record a piece of music in 346 00:20:37,119 --> 00:20:40,080 Speaker 1: some sort of conventional format and you then play it 347 00:20:40,119 --> 00:20:42,879 Speaker 1: at a faster speed, you're going to increase the pitch. 348 00:20:43,359 --> 00:20:46,199 Speaker 1: If I recorded a performance onto a physical medium like 349 00:20:46,240 --> 00:20:48,840 Speaker 1: a vinyl record, and then I played the record back 350 00:20:49,160 --> 00:20:51,280 Speaker 1: at a speed that was one and a half times 351 00:20:51,359 --> 00:20:56,760 Speaker 1: faster than normal playback, the pitch would go up along with it. But with MIDI files, 352 00:20:56,800 --> 00:20:59,920 Speaker 1: you can increase the speed of playback without affecting 353 00:21:00,320 --> 00:21:03,960 Speaker 1: the pitch. You aren't speeding up a recording of a performance, 354 00:21:04,080 --> 00:21:07,800 Speaker 1: but rather decreasing the amount of time between instructions, and 355 00:21:07,840 --> 00:21:11,240 Speaker 1: so you can change the tempo of the music easily without 356 00:21:11,320 --> 00:21:14,840 Speaker 1: also changing the pitch of the recording.
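That tempo trick boils down to simple arithmetic on the event list: shrink the time between events and leave the note numbers, and therefore the pitch, alone. A minimal sketch, using made-up (time, note) tuples for illustration; real MIDI files store delta times in ticks rather than seconds:

```python
# Sketch of why a MIDI tempo change doesn't affect pitch:
# speed lives in the timestamps, pitch lives in the note numbers.

def change_tempo(events, factor):
    """Play back `factor` times faster by shrinking every timestamp."""
    return [(t / factor, note) for t, note in events]

melody = [(0.0, 60), (0.5, 64), (1.0, 67)]  # C, E, G, half a second apart
faster = change_tempo(melody, 2.0)          # same note numbers, half the time
```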
Or if you 357 00:21:14,880 --> 00:21:17,200 Speaker 1: want to change the pitch, you could do that too. 358 00:21:17,280 --> 00:21:19,840 Speaker 1: You could take the instructions and apply a new instruction 359 00:21:20,160 --> 00:21:23,399 Speaker 1: to shift the playback into a different key of music. 360 00:21:23,680 --> 00:21:25,880 Speaker 1: The tempo would be the same, but the key would 361 00:21:25,880 --> 00:21:28,320 Speaker 1: be completely different. You could take music that was programmed 362 00:21:28,359 --> 00:21:29,840 Speaker 1: in a major key and you can flip it to 363 00:21:29,880 --> 00:21:32,239 Speaker 1: a minor key. Or you could take a song and 364 00:21:32,280 --> 00:21:35,440 Speaker 1: shift the pitch down or up to better suit someone's voice. 365 00:21:35,920 --> 00:21:39,159 Speaker 1: If you've ever gone to karaoke and the karaoke machine 366 00:21:39,200 --> 00:21:42,320 Speaker 1: had an option to change the pitch, to pitch something 367 00:21:42,440 --> 00:21:45,200 Speaker 1: up or down so it's closer to your vocal range, 368 00:21:45,520 --> 00:21:48,720 Speaker 1: you've experienced this. The karaoke machine was using MIDI files 369 00:21:48,720 --> 00:21:52,800 Speaker 1: to recreate a song, and then you can dynamically tell it, Hey, 370 00:21:52,800 --> 00:21:55,199 Speaker 1: I need this pitched up or pitched down so I 371 00:21:55,240 --> 00:21:58,120 Speaker 1: can actually rock out with Hit Me with Your Best 372 00:21:58,119 --> 00:22:01,520 Speaker 1: Shot and make sure it's in my vocal range. Another 373 00:22:01,560 --> 00:22:04,800 Speaker 1: big benefit of the MIDI file format is file size.
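That karaoke-style key change is the mirror image of the tempo change: shift every note number by the same amount and leave the timing alone. A sketch, again using made-up (time, note) tuples; in MIDI, one step in the note number is one semitone:

```python
# Sketch of a key change: transpose every MIDI note number by a
# fixed number of semitones without touching the timestamps.

def transpose(events, semitones):
    """Shift every note up (positive) or down (negative) by semitones."""
    return [(t, note + semitones) for t, note in events]

riff = [(0.0, 60), (0.5, 64)]  # C and E
down = transpose(riff, -2)     # same rhythm, a whole step lower
```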
374 00:22:05,200 --> 00:22:08,480 Speaker 1: Because there's no recorded media in the file, the file 375 00:22:08,520 --> 00:22:11,920 Speaker 1: sizes are relatively small. So a minute of compressed audio 376 00:22:12,119 --> 00:22:15,040 Speaker 1: like an MP three might end up being about ten 377 00:22:15,119 --> 00:22:17,920 Speaker 1: megabytes of data. But if you take a MIDI 378 00:22:17,960 --> 00:22:21,280 Speaker 1: file and it represents the exact same amount of sound, 379 00:22:22,000 --> 00:22:25,080 Speaker 1: a minute's worth of sound, although again, remember, there's 380 00:22:25,080 --> 00:22:28,320 Speaker 1: no recorded sound in a MIDI file, it just represents it, 381 00:22:28,320 --> 00:22:31,840 Speaker 1: that would only take up ten kilobytes of space. So 382 00:22:32,200 --> 00:22:35,400 Speaker 1: much smaller file sizes, and a lot of data gets 383 00:22:35,440 --> 00:22:38,919 Speaker 1: packed into those small files. The MIDI protocol supports a 384 00:22:38,960 --> 00:22:43,840 Speaker 1: total of one hundred twenty eight notes, ranging from the C 385 00:22:43,840 --> 00:22:46,679 Speaker 1: five octaves below middle C all the way up to the 386 00:22:46,760 --> 00:22:51,159 Speaker 1: G ten, more than five octaves above middle C. Up to sixteen separate 387 00:22:51,200 --> 00:22:54,080 Speaker 1: devices can be controlled through a single chain, or more 388 00:22:54,119 --> 00:22:56,960 Speaker 1: if you want multiple devices to produce the same response. 389 00:22:57,040 --> 00:22:59,560 Speaker 1: So for example, if you want both a piano and 390 00:22:59,560 --> 00:23:01,880 Speaker 1: a clarinet to play back the same melody line 391 00:23:01,920 --> 00:23:03,720 Speaker 1: in a piece of music, they could each follow the 392 00:23:03,720 --> 00:23:06,639 Speaker 1: exact same set of instructions and they would count as 393 00:23:06,680 --> 00:23:09,880 Speaker 1: just one channel of data rather than two channels of data.
394 00:23:09,920 --> 00:23:12,600 Speaker 1: You would just put those in serial with each other, 395 00:23:12,960 --> 00:23:14,800 Speaker 1: and you could still divide up the rest of the 396 00:23:14,880 --> 00:23:17,840 Speaker 1: channels among other instruments. In addition to this, you can 397 00:23:17,880 --> 00:23:20,720 Speaker 1: have up to one hundred twenty eight voice or effect settings 398 00:23:20,720 --> 00:23:24,080 Speaker 1: called programs. These are the various modifiers that can change 399 00:23:24,119 --> 00:23:27,120 Speaker 1: the shape of the sound in various ways. To keep 400 00:23:27,119 --> 00:23:31,240 Speaker 1: everything synchronized across multiple instruments, not to mention other elements 401 00:23:31,280 --> 00:23:34,159 Speaker 1: that I'll talk about later, MIDI has support for built 402 00:23:34,160 --> 00:23:37,400 Speaker 1: in clock pulses. The clock pulses make sure that each 403 00:23:37,440 --> 00:23:40,680 Speaker 1: component in the overall system is on the same starting point. 404 00:23:41,200 --> 00:23:43,719 Speaker 1: If the MIDI standard didn't have this, there'd be no 405 00:23:43,880 --> 00:23:48,199 Speaker 1: way to synchronize a controller to manipulate multiple devices and 406 00:23:48,280 --> 00:23:50,240 Speaker 1: have them work in harmony with each other. They would 407 00:23:50,280 --> 00:23:53,399 Speaker 1: all start to get off time with each other and 408 00:23:53,440 --> 00:23:55,199 Speaker 1: you would end up with a huge mess. So you 409 00:23:55,560 --> 00:23:57,840 Speaker 1: have to have this clock pulse feature to make sure 410 00:23:57,960 --> 00:24:02,000 Speaker 1: every single instrument in the system is in sync with the others.
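Those clock pulses can be reasoned about numerically. A sketch, assuming the standard MIDI convention of twenty four timing-clock pulses per quarter note: every device that counts the same pulses lands on the same tempo.

```python
# Sketch of MIDI clock math, assuming the standard 24 timing-clock
# pulses per quarter note shared by every device in the chain.

PULSES_PER_QUARTER = 24

def bpm_from_pulse_interval(seconds_between_pulses):
    """Recover the shared tempo from the spacing of clock pulses."""
    seconds_per_quarter = seconds_between_pulses * PULSES_PER_QUARTER
    return 60.0 / seconds_per_quarter

tempo = bpm_from_pulse_interval(1 / 48)  # pulses 1/48 s apart -> 120 BPM
```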
411 00:24:02,280 --> 00:24:05,000 Speaker 1: The way you generate a MIDI file is using either 412 00:24:05,160 --> 00:24:09,040 Speaker 1: a MIDI enabled synthesizer, which doesn't have to be a keyboard, 413 00:24:09,080 --> 00:24:12,520 Speaker 1: but more frequently than not it is a keyboard, or 414 00:24:12,560 --> 00:24:16,440 Speaker 1: a MIDI controller. So what's the difference? Well, synthesizers can 415 00:24:16,480 --> 00:24:20,679 Speaker 1: create sound while they simultaneously generate MIDI data. They have 416 00:24:21,119 --> 00:24:26,440 Speaker 1: a sound generator built into the device, and if they also include a sequencer, they become workstations. 417 00:24:27,000 --> 00:24:30,760 Speaker 1: A MIDI controller only generates the data. So many MIDI 418 00:24:30,840 --> 00:24:33,840 Speaker 1: controllers look like musical keyboards, but they do not generate 419 00:24:33,880 --> 00:24:37,120 Speaker 1: any music when you play them on their own. 420 00:24:37,160 --> 00:24:40,040 Speaker 1: So you're not like tickling the keys and hearing music 421 00:24:40,080 --> 00:24:42,200 Speaker 1: back unless you've already hooked it up to a computer 422 00:24:42,680 --> 00:24:44,920 Speaker 1: and the computer's sound card is able to generate the 423 00:24:45,000 --> 00:24:47,320 Speaker 1: music in real time back to you. So what's the 424 00:24:47,320 --> 00:24:50,280 Speaker 1: whole point? Well, imagine that you have your MIDI controller 425 00:24:50,400 --> 00:24:53,520 Speaker 1: keyboard in front of you, and you've used cables to 426 00:24:53,520 --> 00:24:56,600 Speaker 1: connect your controller to several other devices, such as 427 00:24:56,880 --> 00:25:02,800 Speaker 1: a MIDI enabled drum machine, a MIDI enabled synthesizer, and an electronic clarinet.
428 00:25:03,320 --> 00:25:06,600 Speaker 1: You've mapped your MIDI controller keyboard keys to each of 429 00:25:06,600 --> 00:25:09,760 Speaker 1: those connected components, so that when you play one section 430 00:25:09,800 --> 00:25:13,400 Speaker 1: of the keyboard, you're controlling one of them. Like, let's 431 00:25:13,400 --> 00:25:17,440 Speaker 1: say that you've got sixty four keys, and at the bottom 432 00:25:17,520 --> 00:25:20,680 Speaker 1: you've got maybe the bottom sixteen that control the 433 00:25:20,720 --> 00:25:24,800 Speaker 1: drum pad, and then the other two sections control the 434 00:25:24,800 --> 00:25:28,720 Speaker 1: synthesizer, and the top section controls the electronic clarinet. 435 00:25:29,359 --> 00:25:32,040 Speaker 1: That would be one way of doing this. So while 436 00:25:32,160 --> 00:25:36,959 Speaker 1: the controller itself doesn't generate sound, the instructions it sends 437 00:25:37,040 --> 00:25:42,119 Speaker 1: to each of those components make those components make the sound. 438 00:25:42,800 --> 00:25:45,240 Speaker 1: So you really just have a control system. It's really 439 00:25:45,240 --> 00:25:47,960 Speaker 1: no different from, like, a joystick or a mouse. It's 440 00:25:48,000 --> 00:25:51,480 Speaker 1: an input device, and your output devices happen to be 441 00:25:51,560 --> 00:25:54,440 Speaker 1: these other components. So think of it in that sense. 442 00:25:54,480 --> 00:25:58,000 Speaker 1: When you think of MIDI controllers and synthesizers as very 443 00:25:58,040 --> 00:26:01,560 Speaker 1: specialized computers, it starts to be a little easier to understand. 444 00:26:01,960 --> 00:26:03,959 Speaker 1: So you've just increased the number of instruments you can 445 00:26:03,960 --> 00:26:09,400 Speaker 1: simultaneously control using a single MIDI controller connected to them.
446 00:26:09,440 --> 00:26:12,280 Speaker 1: You could also use one of these silent controllers to 447 00:26:12,280 --> 00:26:14,879 Speaker 1: create a MIDI file on a computer, but unless you 448 00:26:14,920 --> 00:26:17,119 Speaker 1: were playing that music back on the computer, it would be 449 00:26:17,119 --> 00:26:20,000 Speaker 1: really hard to hear how it was turning out. It 450 00:26:20,000 --> 00:26:22,159 Speaker 1: would be difficult to see if, in fact, what you 451 00:26:22,200 --> 00:26:26,639 Speaker 1: were doing was what you wanted. So most MIDI sequencers, 452 00:26:26,800 --> 00:26:30,280 Speaker 1: which is what we call the programs that translate the 453 00:26:30,320 --> 00:26:34,399 Speaker 1: actions you take into the mathematical data that is a 454 00:26:34,440 --> 00:26:37,639 Speaker 1: MIDI file, have playback, so that 455 00:26:37,720 --> 00:26:41,200 Speaker 1: you can actually hear what's happening while you're playing. Otherwise 456 00:26:41,400 --> 00:26:43,760 Speaker 1: it would be really difficult to figure out if you 457 00:26:43,800 --> 00:26:46,600 Speaker 1: were doing things correctly. And as I say, the process 458 00:26:46,680 --> 00:26:50,240 Speaker 1: is called sequencing. So the sequencer is the tool that 459 00:26:50,359 --> 00:26:54,399 Speaker 1: records all those messages. The messages come through in 460 00:26:54,400 --> 00:26:57,399 Speaker 1: eight bit chunks; eight bits is 461 00:26:57,400 --> 00:27:00,000 Speaker 1: a byte, and each message is made up of one or more bytes. 462 00:27:00,440 --> 00:27:04,959 Speaker 1: The sequencer maps those instructions out against a timeline. 463 00:27:05,320 --> 00:27:08,439 Speaker 1: It records when a note is played and at what velocity, 464 00:27:08,720 --> 00:27:10,800 Speaker 1: what strength, as well as any effects that were on 465 00:27:10,840 --> 00:27:13,200 Speaker 1: the note at the time it was played.
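That recording step can be pictured as little more than timestamping incoming messages. A toy sketch: the note-on byte layout (status byte, note number, velocity) follows standard MIDI, but the recorder itself is invented here for illustration, not a real sequencer:

```python
# Toy sketch of what a sequencer records: each incoming note-on
# message gets stamped with the time it arrived, plus its note
# number and velocity (how hard the key was struck).

def record(timeline, seconds, message):
    """Append a note-on message to the timeline with its arrival time."""
    status, note, velocity = message
    if status & 0xF0 == 0x90:  # 0x9n = note on, channel n
        timeline.append({"time": seconds, "note": note, "velocity": velocity})
    return timeline

timeline = []
record(timeline, 0.00, (0x90, 60, 100))  # middle C, played firmly
record(timeline, 0.25, (0x90, 64, 40))   # E, played softly
```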
And 466 00:27:13,320 --> 00:27:16,520 Speaker 1: sequencers can be standalone programs. They can be built directly 467 00:27:16,520 --> 00:27:20,320 Speaker 1: into musical instruments. They can also be independent pieces of hardware. 468 00:27:20,359 --> 00:27:22,720 Speaker 1: So you could have a sequencer that is its own 469 00:27:23,359 --> 00:27:28,480 Speaker 1: individual electronic unit that you plug into, or a 470 00:27:28,480 --> 00:27:31,720 Speaker 1: sequencer could be built into a synthesizer, or a sequencer 471 00:27:31,760 --> 00:27:33,800 Speaker 1: could be a piece of software running on a computer. 472 00:27:33,960 --> 00:27:36,159 Speaker 1: You have lots of different options. So let's say you've 473 00:27:36,200 --> 00:27:39,760 Speaker 1: got a MIDI enabled synthesizer and you want to record 474 00:27:40,119 --> 00:27:43,560 Speaker 1: to a MIDI file. What else do you need? Well, 475 00:27:43,920 --> 00:27:46,320 Speaker 1: if it's a keyboard that also has a MIDI sequencer 476 00:27:46,640 --> 00:27:49,280 Speaker 1: in it, then you have a workstation. You've got everything 477 00:27:49,280 --> 00:27:51,600 Speaker 1: you need right there. You could just record it to 478 00:27:51,800 --> 00:27:55,000 Speaker 1: the device. If it's a synthesizer that 479 00:27:55,040 --> 00:27:57,080 Speaker 1: has a MIDI output but does not have a MIDI 480 00:27:57,160 --> 00:28:00,680 Speaker 1: sequencer itself, you could get a hardware sequencer. Those tend 481 00:28:00,760 --> 00:28:02,320 Speaker 1: to be a little expensive, but you get what you 482 00:28:02,359 --> 00:28:06,280 Speaker 1: pay for.
You can find low cost software sequencers on 483 00:28:06,480 --> 00:28:10,680 Speaker 1: a computer, and then you could hook up your synthesizer 484 00:28:10,960 --> 00:28:15,600 Speaker 1: via cable to the computer. Depending upon how old we're talking, 485 00:28:15,680 --> 00:28:19,360 Speaker 1: like if you're using an ancient synthesizer and an ancient computer, 486 00:28:19,720 --> 00:28:22,200 Speaker 1: you'll be using a specific MIDI cable for that. These 487 00:28:22,240 --> 00:28:25,040 Speaker 1: days we mostly use USB. I'll tell you more about 488 00:28:25,119 --> 00:28:28,239 Speaker 1: cables in just a minute. And you can look at 489 00:28:28,280 --> 00:28:32,040 Speaker 1: all sorts of options in between: expensive hardware, cheap software, 490 00:28:32,080 --> 00:28:34,040 Speaker 1: and then there's a whole bunch of different stuff 491 00:28:34,440 --> 00:28:37,720 Speaker 1: in between, including keyboards that have MIDI sequencers built 492 00:28:37,760 --> 00:28:41,640 Speaker 1: into them. Those can range from being fairly reasonably priced 493 00:28:41,640 --> 00:28:44,360 Speaker 1: to really expensive if you want something that is top 494 00:28:44,400 --> 00:28:47,200 Speaker 1: of the line. We're talking hundreds of dollars in those cases, 495 00:28:47,200 --> 00:28:49,760 Speaker 1: but that's the kind of stuff that professionals will use 496 00:28:50,040 --> 00:28:53,560 Speaker 1: if they are arranging music and they're trying to record stuff. 497 00:28:54,200 --> 00:28:57,400 Speaker 1: Some older MIDI keyboards could push MIDI data out through 498 00:28:57,400 --> 00:29:00,480 Speaker 1: a port, but wouldn't play sound while doing so, so 499 00:29:00,520 --> 00:29:02,240 Speaker 1: instead you'd have to listen to the music as it 500 00:29:02,240 --> 00:29:04,880 Speaker 1: plays on your computer's sound card. Sound cards used to 501 00:29:04,920 --> 00:29:07,480 Speaker 1: be a much bigger deal back in the nineties, back 502 00:29:07,480 --> 00:29:10,920 Speaker 1: when putting together a computer could become an enormous headache 503 00:29:11,240 --> 00:29:16,560 Speaker 1: because you had lots of choices in graphics cards, sound cards, CPUs. 504 00:29:16,720 --> 00:29:19,000 Speaker 1: Not all of them were compatible with each other, so 505 00:29:19,080 --> 00:29:21,560 Speaker 1: sometimes you would find that the build you had selected 506 00:29:22,080 --> 00:29:26,000 Speaker 1: didn't actually work because there were incompatibilities between the various components. 507 00:29:26,120 --> 00:29:30,880 Speaker 1: It was a nightmare. But those days are mostly behind us. 508 00:29:30,920 --> 00:29:33,280 Speaker 1: These days, it's a lot easier to build a machine, 509 00:29:33,720 --> 00:29:38,120 Speaker 1: and the need for a discrete sound card has 510 00:29:38,200 --> 00:29:41,720 Speaker 1: decreased because computers can handle a lot of this using 511 00:29:41,760 --> 00:29:44,719 Speaker 1: their standard hardware these days. But back in the nineties 512 00:29:44,760 --> 00:29:49,080 Speaker 1: you needed very specific types of hardware. Roland made an 513 00:29:49,080 --> 00:29:55,160 Speaker 1: amazing sound card. I had the 514 00:29:55,200 --> 00:29:57,800 Speaker 1: Sound Blaster sound card, but there were tons of different 515 00:29:57,800 --> 00:30:01,160 Speaker 1: sound cards that came out in that time period. 516 00:30:01,480 --> 00:30:05,920 Speaker 1: Good old Creative Labs. So these days, not so 517 00:30:06,000 --> 00:30:08,240 Speaker 1: much a big deal. But back in those days, those 518 00:30:08,240 --> 00:30:11,400 Speaker 1: sound cards had ports on them where you could plug 519 00:30:11,440 --> 00:30:16,360 Speaker 1: in a MIDI cable.
The connectors were, appropriately 520 00:30:16,440 --> 00:30:21,120 Speaker 1: enough, called MIDI cables. They were a five pin DIN 521 00:30:21,520 --> 00:30:24,680 Speaker 1: connector, D I N. So what does D I 522 00:30:24,880 --> 00:30:29,840 Speaker 1: N stand for? DIN stands for Deutsches Institut für Normung. 523 00:30:30,240 --> 00:30:33,840 Speaker 1: That, by the way, is the national standards 524 00:30:33,920 --> 00:30:37,400 Speaker 1: organization in Germany, and the one that defined this particular 525 00:30:37,440 --> 00:30:39,840 Speaker 1: standard for connectors. And there are a lot of different 526 00:30:39,880 --> 00:30:44,320 Speaker 1: orientations for D I N connectors, not just the MIDI style. 527 00:30:44,400 --> 00:30:48,880 Speaker 1: There are tons of different variations, including different layouts for five pin connectors, 528 00:30:49,080 --> 00:30:52,480 Speaker 1: but standard MIDI cables all have the same orientation because 529 00:30:52,480 --> 00:30:57,360 Speaker 1: it's a standard. These cables didn't send variable voltage signals. 530 00:30:57,400 --> 00:31:00,920 Speaker 1: With the old analog synthesizers, the way they generated music 531 00:31:01,120 --> 00:31:05,520 Speaker 1: was all through varying voltage. That was kind of the 532 00:31:06,280 --> 00:31:09,040 Speaker 1: secret sauce. If you want to get down to the 533 00:31:09,080 --> 00:31:12,120 Speaker 1: basic level of what's happening from an electronic standpoint, it's 534 00:31:12,120 --> 00:31:15,120 Speaker 1: all about varying voltage to get different effects and create 535 00:31:15,120 --> 00:31:20,640 Speaker 1: different sounds. But that's not how MIDI synthesizers communicate. All 536 00:31:20,680 --> 00:31:24,800 Speaker 1: the information that MIDI synthesizers send is in binary, that's 537 00:31:24,920 --> 00:31:28,360 Speaker 1: a zero or a one.
With such a basic system, 538 00:31:28,400 --> 00:31:30,800 Speaker 1: you don't have to vary voltage. You just have to 539 00:31:30,840 --> 00:31:33,840 Speaker 1: have either voltage applied, which would be like a one, 540 00:31:34,200 --> 00:31:38,040 Speaker 1: or no voltage applied, which would be a zero. MIDI messages, 541 00:31:38,240 --> 00:31:39,960 Speaker 1: like I said, are in the form of bytes, or 542 00:31:40,000 --> 00:31:42,640 Speaker 1: eight bits. Coding with MIDI tends to be done in 543 00:31:42,680 --> 00:31:47,000 Speaker 1: hexadecimal format, which represents nibbles, and a nibble is half of 544 00:31:47,000 --> 00:31:50,400 Speaker 1: a byte, so it's four bits. So with every four 545 00:31:50,400 --> 00:31:54,440 Speaker 1: bits, you can use that to create a hexadecimal figure. 546 00:31:54,440 --> 00:31:57,920 Speaker 1: Hexadecimal is base sixteen. I mentioned it in a 547 00:31:57,960 --> 00:32:02,959 Speaker 1: previous podcast, and the way you express base sixteen is, 548 00:32:03,000 --> 00:32:06,440 Speaker 1: after you get past the number nine, you typically start 549 00:32:06,520 --> 00:32:12,600 Speaker 1: using letters, like A, B, C, etcetera. So hexadecimal makes it 550 00:32:12,640 --> 00:32:17,960 Speaker 1: easier to understand what each of those nibbles happens to be.
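A sketch of that nibble idea, assuming the standard MIDI layout where a status byte's high nibble names the command and its low nibble names the channel, which is exactly why one byte reads cleanly as two hex digits:

```python
# Sketch of why hexadecimal suits MIDI: each hex digit is one
# nibble, and a status byte splits into a command nibble and a
# channel nibble.

def split_status(status):
    """Split a MIDI status byte into its command and channel nibbles."""
    command = status >> 4    # high nibble, e.g. 0x9 = note on
    channel = status & 0x0F  # low nibble, channel 0-15
    return command, channel

# 0x92 reads at a glance as "note on (9), channel 2"
command, channel = split_status(0x92)
```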
551 00:32:18,400 --> 00:32:22,240 Speaker 1: It's easier to understand that compared to just looking at 552 00:32:22,320 --> 00:32:25,200 Speaker 1: zeros and ones. As it turns out, eventually you 553 00:32:25,240 --> 00:32:28,400 Speaker 1: wouldn't have to worry about even working in hexadecimal, because 554 00:32:29,000 --> 00:32:31,920 Speaker 1: you would get MIDI editing software that would have a 555 00:32:31,960 --> 00:32:35,160 Speaker 1: graphical user interface, or GUI, so you no longer had 556 00:32:35,200 --> 00:32:39,960 Speaker 1: to worry about even looking at just lists of hexadecimal figures, 557 00:32:39,960 --> 00:32:44,280 Speaker 1: which would cause me to get a really severe headache 558 00:32:44,320 --> 00:32:48,080 Speaker 1: and cry. So I'm glad that that's not a thing anymore. 559 00:32:48,480 --> 00:32:51,640 Speaker 1: The MIDI protocol supports data rates of up to 560 00:32:51,920 --> 00:32:56,160 Speaker 1: thirty one thousand, two hundred fifty bits per second. Information on 561 00:32:56,200 --> 00:32:59,760 Speaker 1: a MIDI cable is strictly one way only, so if 562 00:32:59,760 --> 00:33:02,840 Speaker 1: you want to have two way communication between various components, 563 00:33:03,200 --> 00:33:06,240 Speaker 1: you would have to have two cables. MIDI equipment from 564 00:33:06,240 --> 00:33:09,120 Speaker 1: this early era tends to have multiple ports with labels 565 00:33:09,160 --> 00:33:13,440 Speaker 1: like in, out, and through. Now those labels tell you 566 00:33:13,440 --> 00:33:17,360 Speaker 1: which direction information will flow from that port through a 567 00:33:17,400 --> 00:33:21,680 Speaker 1: MIDI cable. So out means that data will move out 568 00:33:21,920 --> 00:33:24,320 Speaker 1: from that port.
If you connect a cable to that port, 569 00:33:24,760 --> 00:33:27,640 Speaker 1: information will go out over that cable, and then you 570 00:33:27,640 --> 00:33:30,040 Speaker 1: would connect the other end of that cable into the 571 00:33:30,160 --> 00:33:34,000 Speaker 1: in port of some other MIDI component. So let's say I've 572 00:33:34,040 --> 00:33:35,800 Speaker 1: got a controller and I want to hook it up 573 00:33:35,800 --> 00:33:38,520 Speaker 1: to a drum pad. On my controller, I would hook the 574 00:33:38,560 --> 00:33:42,520 Speaker 1: cable to the out port, and on the drum pad, I 575 00:33:42,560 --> 00:33:45,400 Speaker 1: would connect that to the in port. That way, all the 576 00:33:45,440 --> 00:33:49,040 Speaker 1: instructions I play on the controller will go out and 577 00:33:49,280 --> 00:33:52,720 Speaker 1: into the drum pad. The through port, by the way, 578 00:33:53,080 --> 00:33:57,880 Speaker 1: duplicates anything that's coming in through the in port. And 579 00:33:57,920 --> 00:34:01,840 Speaker 1: the reason for that is, if you want to hook 580 00:34:01,920 --> 00:34:04,400 Speaker 1: up a bunch of components in sequence, and you want 581 00:34:04,440 --> 00:34:08,000 Speaker 1: all of them to follow the exact same set of instructions, 582 00:34:08,560 --> 00:34:12,239 Speaker 1: then you hook up a cable to your MIDI controller's out port, 583 00:34:12,680 --> 00:34:17,520 Speaker 1: run it into the first device's in port, then 584 00:34:17,560 --> 00:34:20,600 Speaker 1: hook up a second cable from that device's through port 585 00:34:20,719 --> 00:34:24,799 Speaker 1: into a second device's in port, and then both device one 586 00:34:24,880 --> 00:34:27,600 Speaker 1: and two will follow the same set of instructions, because 587 00:34:28,160 --> 00:34:32,400 Speaker 1: the second device is copying the same set of instructions 588 00:34:32,400 --> 00:34:35,960 Speaker 1: that the first one is getting from your controller.
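That out-to-in-to-through wiring can be modeled as simple copying. A toy sketch, with the device names invented for illustration: the through port forwards an exact copy of whatever arrives at the in port, so every device down the chain sees the controller's same byte stream.

```python
# Toy sketch of a MIDI thru chain: each device in the daisy chain
# receives an exact copy of the bytes the controller sent, because
# every through port duplicates its in port.

def daisy_chain(message, devices):
    """Deliver `message` to each device in order, as a thru chain would."""
    delivered = []
    for device in devices:
        delivered.append((device, bytes(message)))  # thru = exact copy
    return delivered

# A note-on for middle C, heard identically by both devices:
chain = daisy_chain(b"\x90\x3c\x64", ["synth", "drum machine"])
```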
And 589 00:34:36,000 --> 00:34:39,480 Speaker 1: not every MIDI enabled device out there has 590 00:34:39,520 --> 00:34:42,439 Speaker 1: all of these ports. Some of them only have in ports, 591 00:34:42,480 --> 00:34:45,839 Speaker 1: some of them only have out ports. It just depends upon 592 00:34:45,960 --> 00:34:49,640 Speaker 1: what the device is and how expensive it is, because 593 00:34:49,880 --> 00:34:53,640 Speaker 1: the more features you add to these gadgets, typically the 594 00:34:53,680 --> 00:34:57,440 Speaker 1: more expensive they get, because you're adding extra components 595 00:34:57,480 --> 00:35:00,680 Speaker 1: into the electronic device. These days, you may encounter 596 00:35:00,719 --> 00:35:05,080 Speaker 1: MIDI controllers and synthesizers that use USB, or Universal Serial 597 00:35:05,120 --> 00:35:08,640 Speaker 1: Bus, connectors instead of those standard MIDI cables. Those MIDI 598 00:35:08,640 --> 00:35:10,440 Speaker 1: cables are more or less a thing of the 599 00:35:10,480 --> 00:35:15,279 Speaker 1: past these days, unless you're using antiquated equipment, and that's 600 00:35:15,320 --> 00:35:18,440 Speaker 1: because many of these devices have an integrated MIDI interface 601 00:35:18,520 --> 00:35:22,000 Speaker 1: that can accept the digital information directly without the need 602 00:35:22,080 --> 00:35:25,120 Speaker 1: for that special cable. The information itself remains the same; 603 00:35:25,400 --> 00:35:29,000 Speaker 1: only the delivery of the information has changed. So 604 00:35:29,400 --> 00:35:31,960 Speaker 1: the type of data hasn't changed at all, it's just 605 00:35:32,040 --> 00:35:33,959 Speaker 1: the way it gets from point A to point B.
606 00:35:35,160 --> 00:35:38,360 Speaker 1: The quality of a MIDI playback depends heavily upon the 607 00:35:38,400 --> 00:35:40,640 Speaker 1: equipment you're using to play the file. So if you 608 00:35:40,640 --> 00:35:44,160 Speaker 1: have a super sweet MIDI enabled keyboard, the quality should 609 00:35:44,160 --> 00:35:46,520 Speaker 1: be pretty darn good. If you're using a cheap piece 610 00:35:46,560 --> 00:35:50,399 Speaker 1: of technology with weak sound processing capability, it might 611 00:35:50,440 --> 00:35:53,239 Speaker 1: be less impressive. For a long time, the 612 00:35:53,280 --> 00:35:56,520 Speaker 1: MIDI file format was the preferred one for 613 00:35:56,600 --> 00:35:59,359 Speaker 1: cell phone ring tones. The files took up a small 614 00:35:59,400 --> 00:36:01,319 Speaker 1: amount of space and could allow a phone to play 615 00:36:01,360 --> 00:36:04,600 Speaker 1: all sorts of songs, including popular ones everyone knows, and 616 00:36:05,200 --> 00:36:07,600 Speaker 1: not just the dozen or so default ring tones that 617 00:36:07,640 --> 00:36:11,080 Speaker 1: seemed to come with every phone. These days, storage space 618 00:36:11,120 --> 00:36:13,640 Speaker 1: on phones is not as big a concern, and a 619 00:36:13,640 --> 00:36:17,360 Speaker 1: lot of ringtones use other file formats, including MP three files. 620 00:36:17,760 --> 00:36:21,160 Speaker 1: So MIDI is not as big a deal on phones anymore, 621 00:36:21,200 --> 00:36:24,799 Speaker 1: but it still has its place. Where? Well, I'll talk 622 00:36:24,800 --> 00:36:26,319 Speaker 1: a little bit more about that in a second, but 623 00:36:26,360 --> 00:36:29,560 Speaker 1: first let's take another quick break to thank our sponsor. 624 00:36:36,800 --> 00:36:40,000 Speaker 1: Why did the MIDI protocol become a standard?
Well, remember 625 00:36:40,000 --> 00:36:43,360 Speaker 1: that Dave Smith created the first protocol in nineteen eighty three, and 626 00:36:43,400 --> 00:36:46,840 Speaker 1: at that time computers and related equipment had a limited 627 00:36:46,880 --> 00:36:50,439 Speaker 1: ability to handle data transfers. This was an era 628 00:36:50,600 --> 00:36:54,399 Speaker 1: before broadband and high speed data transfer cables. The MIDI 629 00:36:54,440 --> 00:36:58,080 Speaker 1: protocol allowed musicians to create detailed instructions on how a 630 00:36:58,160 --> 00:37:01,960 Speaker 1: performance should be played and send them in manageable chunks 631 00:37:02,000 --> 00:37:05,799 Speaker 1: of data, either to a computer or to other musical instruments. 632 00:37:05,880 --> 00:37:09,280 Speaker 1: It was an elegant solution for a particularly tricky problem. 633 00:37:09,680 --> 00:37:15,240 Speaker 1: The Atari ST, which debuted in nineteen eighty five, featured a built 634 00:37:15,320 --> 00:37:19,279 Speaker 1: in MIDI port and supported MIDI sequencer software, bringing the 635 00:37:19,320 --> 00:37:23,360 Speaker 1: ability to record music into the home studio. This was 636 00:37:23,440 --> 00:37:26,319 Speaker 1: a huge shift from the norm, where you'd have to 637 00:37:26,360 --> 00:37:30,359 Speaker 1: rely upon a professional recording space to lay down tracks. Now, 638 00:37:30,360 --> 00:37:34,279 Speaker 1: with the right MIDI controllers and an Atari ST, you 639 00:37:34,320 --> 00:37:38,520 Speaker 1: could create your own musical masterpiece, and other computers followed suit, 640 00:37:38,920 --> 00:37:43,520 Speaker 1: and plenty of sound card manufacturers included support for MIDI connections.
641 00:37:44,480 --> 00:37:48,080 Speaker 1: In addition, MIDI allowed people to record on multiple channels, 642 00:37:48,160 --> 00:37:51,000 Speaker 1: and then, because the data was all digital, you could 643 00:37:51,120 --> 00:37:53,919 Speaker 1: mess with it after you recorded it. So you could 644 00:37:53,920 --> 00:37:56,920 Speaker 1: not only tweak instructions to play back at a different 645 00:37:56,920 --> 00:37:59,400 Speaker 1: pitch or a different tempo. You could also copy and 646 00:37:59,480 --> 00:38:03,440 Speaker 1: paste sections, making a short drum track repeat the entire 647 00:38:03,560 --> 00:38:05,960 Speaker 1: length of a piece of music, for example. Or you 648 00:38:05,960 --> 00:38:08,319 Speaker 1: could grab a section of music and shift it to 649 00:38:08,400 --> 00:38:11,719 Speaker 1: a different place along the overall piece. You could mix 650 00:38:11,800 --> 00:38:13,880 Speaker 1: up a track, you could mash it with other MIDI 651 00:38:13,960 --> 00:38:16,600 Speaker 1: tracks, and come up with all sorts of interesting effects. 652 00:38:17,360 --> 00:38:21,279 Speaker 1: Using a software-based synthesizer, also known as a soft synth, 653 00:38:21,800 --> 00:38:26,040 Speaker 1: you can create virtual instruments. These software packages tend to 654 00:38:26,080 --> 00:38:29,400 Speaker 1: have massive options in them, allowing you to replicate the 655 00:38:29,440 --> 00:38:33,520 Speaker 1: sound of specific instruments just by selecting a few options.
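Because a MIDI track is a list of timed events rather than recorded audio, the edits described above, transposing, shifting, and looping, come down to simple arithmetic on that list. A rough sketch, using a made-up `(beat, note)` event format rather than any particular sequencer's internal representation:

```python
# Hypothetical event list: (time_in_beats, note_number) pairs.
# Since MIDI stores instructions instead of sound, these "edits"
# are just transformations of the data.

def transpose(events, semitones):
    """Raise or lower every note by a number of semitones."""
    return [(t, n + semitones) for t, n in events]

def shift(events, beats):
    """Move a passage earlier or later along the piece."""
    return [(t + beats, n) for t, n in events]

def repeat(events, length, times):
    """Copy-and-paste a section end to end, e.g. looping a drum bar."""
    return [(t + i * length, n) for i in range(times) for t, n in events]

bar = [(0, 36), (1, 38), (2, 36), (3, 38)]  # a one-bar kick/snare pattern
looped = repeat(bar, 4, 2)  # the same bar played twice in a row
print(looped)
```

Doing the same thing to recorded audio, changing pitch without changing tempo, say, is a far harder signal-processing problem, which is exactly why instruction-based editing was so liberating.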
656 00:38:33,920 --> 00:38:36,919 Speaker 1: Using such a synthesizer, you could choose to play back 657 00:38:37,040 --> 00:38:41,719 Speaker 1: a guitar riff on a synthesized 1954 Fender Stratocaster 658 00:38:42,200 --> 00:38:44,920 Speaker 1: or a 1958 Gibson ES-335, 659 00:38:45,000 --> 00:38:48,880 Speaker 1: or even a Cordoba C3M classical guitar. 660 00:38:49,400 --> 00:38:51,440 Speaker 1: Or let's say you want the keyboard part played on 661 00:38:51,480 --> 00:38:55,880 Speaker 1: a classic Moog synthesizer, complete with all the faders and knobs. 662 00:38:56,239 --> 00:39:00,880 Speaker 1: A good soft synth package will contain emulators for hundreds 663 00:39:00,960 --> 00:39:04,759 Speaker 1: of different instruments, complete with all the options they came with. 664 00:39:04,920 --> 00:39:09,040 Speaker 1: Plus they frequently will offer up additional features that can 665 00:39:09,080 --> 00:39:13,040 Speaker 1: be applied to the sound beyond what the instruments would support natively. 666 00:39:13,640 --> 00:39:17,520 Speaker 1: The MIDI standard has had several revisions since its introduction. 667 00:39:17,680 --> 00:39:21,560 Speaker 1: They tend to be backwards compatible with earlier versions, but 668 00:39:21,800 --> 00:39:25,719 Speaker 1: not necessarily interoperable with each other. Think of them as 669 00:39:25,880 --> 00:39:31,560 Speaker 1: branching pathways. For example, Roland created Roland's General Standard, or 670 00:39:31,719 --> 00:39:35,160 Speaker 1: Roland GS, to add in additional instruments and features not 671 00:39:35,280 --> 00:39:39,760 Speaker 1: supported by the original MIDI protocol. Yamaha did something similar 672 00:39:39,760 --> 00:39:44,080 Speaker 1: with Yamaha's Extended General MIDI, or XG.
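Picking an instrument "by selecting a few options" maps onto the protocol as a Program Change message, and the General MIDI additions (which GS and XG both extend) standardized which program number means which instrument, so the same file sounds roughly right on anyone's gear. A minimal sketch of the two-byte message, with program 27 as an example (a clean electric guitar in the General MIDI numbering):

```python
# A Program Change is a two-byte message: status 0xC0 | channel,
# then a single program number (0-127). Under General MIDI, that
# number selects a standard instrument, e.g. 0 = Acoustic Grand Piano.

def program_change(channel, program):
    """Build a Program Change message selecting an instrument."""
    return bytes([0xC0 | (channel & 0x0F), program & 0x7F])

# Switch channel 0 to GM program 27 ("Electric Guitar (clean)"):
print(program_change(0, 27).hex())  # -> "c01b"
```

Extensions like GS and XG layered bank-select messages on top of this to reach far more than 128 sounds, which is also how they ended up incompatible with each other.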
Both 673 00:39:44,120 --> 00:39:47,840 Speaker 1: were compatible with the first generation of MIDI protocols, but 674 00:39:47,960 --> 00:39:50,839 Speaker 1: they were not compatible with each other, and so there 675 00:39:50,880 --> 00:39:52,960 Speaker 1: was a bit of splintering in what was meant to 676 00:39:52,960 --> 00:39:56,919 Speaker 1: be a universal standard. Other refinements to the standard allow 677 00:39:57,000 --> 00:40:01,120 Speaker 1: for things such as tying MIDI files to show controls 678 00:40:01,160 --> 00:40:05,200 Speaker 1: like lighting or motion controls. You could create a sequence 679 00:40:05,239 --> 00:40:09,080 Speaker 1: of lighting cues tightly coupled with sound cues, in this way 680 00:40:09,120 --> 00:40:12,640 Speaker 1: automating the entire sequence. These sorts of features are useful 681 00:40:12,640 --> 00:40:17,719 Speaker 1: in everything from theatrical presentations to crazy parties, I imagine. 682 00:40:18,640 --> 00:40:21,000 Speaker 1: I never get invited to crazy parties, but I've been to 683 00:40:21,040 --> 00:40:25,080 Speaker 1: a lot of musicals. Other specifications allowed users to incorporate 684 00:40:25,239 --> 00:40:29,880 Speaker 1: downloadable sounds into MIDI sequences or use alternate tunings for 685 00:40:29,920 --> 00:40:33,600 Speaker 1: synthesized instruments. Now normally, these refinements were the result of 686 00:40:33,640 --> 00:40:38,760 Speaker 1: collaborative efforts among various synthesizer manufacturers. So the MIDI standard 687 00:40:38,840 --> 00:40:42,560 Speaker 1: is continuing to evolve today. People are still working on 688 00:40:42,600 --> 00:40:46,840 Speaker 1: adding in features, and by people I mean mostly various 689 00:40:46,840 --> 00:40:51,200 Speaker 1: companies that are interested parties in continuing the MIDI protocols.
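The show-control refinement mentioned above is MIDI Show Control (MSC), which rides inside a System Exclusive message. A hedged sketch of the framing as I understand the published layout (F0 7F, device ID, 02, command format, command, data, F7, with format 0x01 addressing lighting, command 0x01 meaning GO, and cue numbers sent as ASCII text):

```python
# MIDI Show Control frames a cue command inside a SysEx message:
#   F0 7F <device_id> 02 <command_format> <command> <data...> F7
# This builds the bytes only; driving real lighting gear is out of scope.

LIGHTING = 0x01  # MSC command format: general lighting
GO = 0x01        # MSC command: fire the cue

def msc_message(device_id, command_format, command, cue_number=""):
    """Assemble an MSC SysEx frame; the cue number travels as ASCII text."""
    body = cue_number.encode("ascii")
    return (bytes([0xF0, 0x7F, device_id & 0x7F, 0x02,
                   command_format, command]) + body + bytes([0xF7]))

# Fire lighting cue 5 on device 0, in sync with a sound cue:
print(msc_message(0, LIGHTING, GO, "5").hex())  # -> "f07f0002010135f7"
```

A sequencer can schedule frames like this alongside the note data, which is how a lighting cue ends up tightly coupled to a sound cue with no operator pressing buttons.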
690 00:40:51,200 --> 00:40:55,759 Speaker 1: So again, it's a case of people who are typically competitors getting 691 00:40:55,800 --> 00:41:00,000 Speaker 1: together to create a standard that works across multiple pieces 692 00:41:00,080 --> 00:41:03,520 Speaker 1: of hardware and that benefits the end user the most. 693 00:41:04,040 --> 00:41:07,839 Speaker 1: It has really changed the world of music production. Back 694 00:41:07,880 --> 00:41:10,480 Speaker 1: in the old days, you had to try and get 695 00:41:10,520 --> 00:41:12,920 Speaker 1: big enough so that some studio would take a chance 696 00:41:12,960 --> 00:41:17,160 Speaker 1: on you and allow you time inside an actual recording 697 00:41:17,200 --> 00:41:19,880 Speaker 1: studio to lay down some tracks, or you would have 698 00:41:19,920 --> 00:41:21,759 Speaker 1: to pay an exorbitant amount of money in order to 699 00:41:21,760 --> 00:41:27,240 Speaker 1: do so. Now, with a relatively small investment up front, 700 00:41:27,440 --> 00:41:30,160 Speaker 1: you can make a recording studio of your own and 701 00:41:30,239 --> 00:41:32,920 Speaker 1: lay down all sorts of tracks. Now, you are limited 702 00:41:33,120 --> 00:41:36,799 Speaker 1: in what your track is ultimately going to sound like 703 00:41:36,880 --> 00:41:40,520 Speaker 1: based upon the type of playback equipment you can afford. 704 00:41:40,960 --> 00:41:44,280 Speaker 1: The better the equipment, the better your MIDI file 705 00:41:44,320 --> 00:41:47,600 Speaker 1: will sound when it's played back. Otherwise, if you're playing 706 00:41:47,600 --> 00:41:50,120 Speaker 1: it back on something that's fairly primitive, it's gonna sound 707 00:41:50,200 --> 00:41:53,520 Speaker 1: like it came out of a cheap imitation instrument, not 708 00:41:54,320 --> 00:41:58,200 Speaker 1: a really well synthesized instrument. Now, if that's the effect 709 00:41:58,200 --> 00:41:59,880 Speaker 1: you're going for, it's not a big deal.
Like if 710 00:42:00,120 --> 00:42:03,360 Speaker 1: you want to have sort of a retro, kind of kitschy, 711 00:42:04,640 --> 00:42:08,080 Speaker 1: simulated sound, that's not a big problem. But 712 00:42:08,120 --> 00:42:10,560 Speaker 1: if you want something that sounds like, hey, that sounds 713 00:42:10,600 --> 00:42:12,960 Speaker 1: like a real cello, then you want to shell 714 00:42:13,000 --> 00:42:15,680 Speaker 1: out the big bucks. I can actually still pick out 715 00:42:16,400 --> 00:42:22,320 Speaker 1: fake stringed instruments on even high-profile 716 00:42:22,320 --> 00:42:26,040 Speaker 1: soundtracks and scores, movies in particular. I'm looking 717 00:42:26,080 --> 00:42:30,640 Speaker 1: at you, Pirates of the Caribbean, and your synthesized string sections. 718 00:42:31,400 --> 00:42:35,120 Speaker 1: I hear it, but it's really really good, much better 719 00:42:35,160 --> 00:42:38,359 Speaker 1: than it used to be. So the MIDI format has 720 00:42:38,400 --> 00:42:41,880 Speaker 1: been incredible because, again, it was such an elegant solution: 721 00:42:42,080 --> 00:42:45,960 Speaker 1: creating instructions on how to recreate a performance rather than 722 00:42:46,440 --> 00:42:50,600 Speaker 1: recording an existing performance, and then being able to tweak 723 00:42:50,719 --> 00:42:53,480 Speaker 1: that performance in any way you want, so that you 724 00:42:53,560 --> 00:42:57,880 Speaker 1: can make it better than the original playthrough was, or 725 00:42:57,920 --> 00:43:00,920 Speaker 1: at least different. It's really interesting to me. So I 726 00:43:00,920 --> 00:43:04,600 Speaker 1: want to thank Jesse for the suggestion. I really appreciate it.
727 00:43:04,600 --> 00:43:07,399 Speaker 1: We're gonna be doing a lot of episodes based off 728 00:43:07,480 --> 00:43:10,160 Speaker 1: listener suggestions over the next few weeks, and we're gonna 729 00:43:10,200 --> 00:43:13,120 Speaker 1: do some more about music in the next couple of episodes. 730 00:43:13,160 --> 00:43:16,200 Speaker 1: Think of it as a mini music arc of tech 731 00:43:16,239 --> 00:43:19,480 Speaker 1: stuff episodes, because I kind of wanted to group together 732 00:43:19,840 --> 00:43:24,080 Speaker 1: thematically linked topics. So we'll talk more about music in 733 00:43:24,080 --> 00:43:26,760 Speaker 1: the next episode. But that's all for MIDI for today. 734 00:43:27,040 --> 00:43:29,680 Speaker 1: If you have suggestions for future episodes, maybe you want 735 00:43:29,680 --> 00:43:32,360 Speaker 1: to get your suggestion in, like Jesse did, send me 736 00:43:32,400 --> 00:43:35,200 Speaker 1: an email. The address for the show is tech stuff 737 00:43:35,320 --> 00:43:38,120 Speaker 1: at how stuff works dot com, or you can drop 738 00:43:38,160 --> 00:43:40,560 Speaker 1: me a line on Facebook or Twitter. The handle at 739 00:43:40,600 --> 00:43:43,760 Speaker 1: both of those is tech stuff HSW. Remember, 740 00:43:43,800 --> 00:43:45,680 Speaker 1: we have an Instagram account and you can follow us 741 00:43:45,680 --> 00:43:47,759 Speaker 1: on there and see all sorts of cool behind-the- 742 00:43:47,760 --> 00:43:52,239 Speaker 1: scenes photos plus relevant information that relates back to technology 743 00:43:52,239 --> 00:43:55,960 Speaker 1: in general and this show in particular. And on Wednesdays 744 00:43:55,960 --> 00:43:59,520 Speaker 1: and Fridays, I stream my recording sessions live on twitch 745 00:43:59,600 --> 00:44:03,040 Speaker 1: dot tv slash tech stuff. I would be happy to 746 00:44:03,080 --> 00:44:05,480 Speaker 1: have you join us. We have a chat room in there.
747 00:44:05,520 --> 00:44:07,319 Speaker 1: You can join in there and chat with me and 748 00:44:07,920 --> 00:44:11,080 Speaker 1: talk about how I mispronounce words and make fun of me, 749 00:44:11,480 --> 00:44:13,440 Speaker 1: or you can, you know, give me encouraging words too. 750 00:44:13,560 --> 00:44:16,520 Speaker 1: I don't just take abuse. I also like 751 00:44:16,560 --> 00:44:18,560 Speaker 1: it when people are nice to me. And I hope 752 00:44:18,560 --> 00:44:20,640 Speaker 1: that you will join us there, and I'll talk to 753 00:44:20,640 --> 00:44:31,440 Speaker 1: you again really soon. For more on this and thousands 754 00:44:31,480 --> 00:44:43,600 Speaker 1: of other topics, visit how stuff works dot com.