Speaker 1: Welcome to Tech Stuff, a production from iHeartRadio. Hey there, and welcome to Tech Stuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeart Podcasts. And how the tech are you? Y'all, we're getting up to the time of year when I like to look back on, you know, the months that have passed and reflect on all that's happened in the tech space, and for twenty twenty four, it's been a heck of a lot. But I thought today I would talk about three startup companies in the AI tech space that are really hoping to make it big in the coming years. I'm sure I'll talk a lot about AI when we do our year in review episode, because this year has been largely about AI and different aspects of it, a lot of them kind of scary, right? But I wanted to talk about some startups, because that's something we don't focus on that much in the news episodes. Most of the talk has been around companies like OpenAI and Microsoft and Meta and Google and Amazon. I thought maybe we could take a look at some startups in the space, because there are lots of those, too. There are tons of AI startups, some of which are doing all right right now, some of which may be struggling. And honestly, there's this growing concern in the AI field that perhaps some versions of AI are starting to hit, like, a wall when it comes to advancements. Not that they're not continuing to evolve, but that evolution is happening on a slower time frame than what we saw previously. Which, typically, that's what happens, right? Usually you have lots of early gains, and then it starts getting harder and harder. Those of y'all who work out know what I'm talking about. Anyway, I thought I would give a shout out to the TRUiC team over at Startup Savant. That website has put together a list that they called the one hundred top startups to watch in twenty twenty four.
Speaker 1: Now, that's a huge number of startups, right? And I'm not gonna go through all of that. I'm going to look at just a few of them, and I want to comment a bit on what's been going on. So artificial intelligence is still in something of a boom period right now. Yes, there are these areas where AI could potentially be brushing up against some limitations to certain approaches, particularly in the large language model space. But AI, as I've said many times, is a very complicated topic. There's lots of nuance to AI. It's not just one big, monolithic discipline. It's lots of much smaller, subtler disciplines that collectively make up artificial intelligence. To that end, I want to talk about some of the startups mentioned in this piece in Startup Savant. One I would like to chat about is Suno, because Suno simultaneously puts on a really darn impressive display while also being indicative of some of the more challenging aspects of generative AI in particular. So Suno is based out of Cambridge, Massachusetts. Technically it's a twenty twenty three startup, and it released its first build of the generative AI tool that it created back in December twenty twenty three. But the first stable release of that application came out just this past November. So I think Suno really qualifies as a twenty twenty four AI company, if you ask me. I actually downloaded the Suno app to give it a try, and I have to admit it is pretty impressive. So Suno uses generative AI to create music based off user prompts, and those prompts can be really specific or really vague. So, for example, you might write a prompt like: create a high-energy dance track with lots of synthesizers and drums about going out on the town with a group of friends, sung by a female vocalist. Right? Or you might give it much broader direction, like: create an Appalachian folk tune about witches. Suno can compose music including lyrics and synthesized vocals, which on a phone speaker sound pretty darn convincing.
Speaker 1: Like, when I made it make folk tunes, you could even hear the breathing sounds a singer would make, you know? It sounded organic, at least on a phone speaker. I'm sure if I were listening on really high-end speakers, I could probably detect a bit of the artificiality. But to my dumb ears, they sounded pretty good. Now, what you do with that music track from that point forward depends on whether or not you're a paid subscriber to the service. If you are using their free basic plan, then you can use those tracks for any non-commercial purposes. The actual ownership of the music tracks themselves belongs to Suno, so you don't have ownership of the tracks you make if you are a free user of the basic service. But let's say that you were running a role-playing game session, like you're the game master and you really need some moody, mysterious music playing in the background as your adventurers walk through a dungeon. Well, you could use Suno to generate that music for you if you liked. Like, it could be instrumental pieces, orchestral pieces, you know, synthwave, whatever it may need to be to suit the needs of your game. You could do that. And because it's non-commercial, you know, you're just playing with friends, there's no problem there. Now, if you wanted to make commercial use of the music generated based off your prompts, then you would need to be a subscriber at the Pro or Premier level, and then you own whatever tracks are generated. You own the intellectual property of those songs, and Suno grants such users a commercial license.
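(A purely hypothetical aside for the programmers listening: the episode describes Suno as an app and doesn't document any API, so every name in the sketch below is made up. It's only meant to make the difference between the specific and the vague prompting styles just described concrete.)

```python
# Hypothetical illustration only: Suno is an app, and none of these names
# come from Suno. This just contrasts a detailed, structured prompt with
# the kind of vague one-liner the service also accepts.
def build_prompt(genre, mood, instruments=None, vocals=None, topic=None):
    """Assemble a detailed text-to-music prompt from its parts."""
    parts = [f"{mood} {genre} track"]
    if instruments:
        parts.append("with lots of " + " and ".join(instruments))
    if topic:
        parts.append(f"about {topic}")
    if vocals:
        parts.append(f"sung by {vocals}")
    return ", ".join(parts)

specific = build_prompt("dance", "high-energy",
                        instruments=["synthesizers", "drums"],
                        vocals="a female vocalist",
                        topic="going out on the town with friends")
vague = "an Appalachian folk tune about witches"

print(specific)  # the detailed style
print(vague)     # the broad style
```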
Speaker 1: Copyright gets really complicated, because, at least here in the United States, you cannot copyright a work generated by AI. What you can do is monetize the track, right? Like, if you wanted to make a commercial podcast, meaning you want to monetize the podcast, and you wanted to use Suno to generate the theme song for your podcast, you could do that at the Pro or Premier level, right, because that's a commercial use. You'd be using it for a commercial podcast. You could then do that. This does start to raise some questions, however. I mean, you could just start churning out songs. You could switch out prompts, you could tweak approaches in an effort to make something, you know, anything that resembles a catchy hit, and you don't have to hire musicians or singers or anything. You're just working with a computer. And if you have lots of time on your hands and you have one of those subscription levels where you don't have a lot of limitations on how many times you can use the app, well, you can just keep rolling the dice, and maybe you get lucky and you have a track that ends up getting crazy amounts of attention, and you're just flooding streaming services with track after track after track of AI-generated music. This is something that is happening, and it's somewhat concerning, because meanwhile, you have actual human musicians who are trying to get noticed, and if there's just a glut of AI-generated material hitting the various streaming platforms, it gets harder and harder to get discovered as an artist. And that doesn't seem terribly fair, right? But to the person who's just using this tool to make track after track, it's a license to print money, baby. Of course, it's not as simple as that, and there's definitely some discomfort in the music industry over the rise of these AI tools. The stuff made by Suno sounds representative of the various genres, at least to my ears.
Speaker 1: In other words, if you give it the direction to make a song in a specific genre or subgenre, what you get sounds like it. It fits, pretty much. I didn't try things like rockabilly. I should try to do a rockabilly song about something and just see what it sounds like. But, you know, for very broad categories like classical or folk or R&B or funk, the stuff I was getting sounded fairly representative of those genres. Not necessarily brilliant, but, you know, not unlike something that you would hear if you were tuned into a radio station that catered to that specific genre of music. One thing to keep in mind is, again, anything generated by AI here in the United States is ineligible for copyright protection, because in order to qualify for a copyright, a work has to be created by a human being, and writing a text prompt into a field and then having an AI model generate music does not qualify as a work created by a human being. That means that if you did create this musical track, and if you did have a commercial license to make money from it, there's no copyright protection that would allow you to go after anyone who infringed upon your intellectual property and made copies of your music, whether in whole or in part, or sampled it or whatever. And I say your music in the sense of you own it, not you created it. And that's a real issue. Beyond that, there's a bigger concern in the music industry over how Suno trained its models in the first place. Like, AI doesn't just magically know that a folk song sounds like a folk song, or that, you know, rhythm and blues sounds a specific way, or that Chicago blues has a different style from New Orleans blues. The models had to learn the rules of music theory and the styles and the things that go along with the various styles. It had to have some form of understanding, which is a tough word, because I don't mean to imply that the model has general intelligence.
Speaker 1: It doesn't understand like humans understand, but it recognizes these qualities that are associated with different kinds of music, right? Like, it has to have that reference base, and it has to be able to generate music that we find nice to listen to. Otherwise it's just spitting out random noise. So to get to that point, Suno presumably trained its models on a data set that included lots and lots of songs made by real-life, living and breathing human beings, and that also raises concerns about the potential for plagiarism. Now, Suno asserts that its model has guardrails in place to prevent it from just, say, lifting chord progressions and melody lines and such from existing songs. But the music industry isn't so quick to accept that explanation. I mean, we just kind of had a pop culture moment in the form of Heretic. If you saw the movie, Heretic had Hugh Grant in it. That brought up the issue of plagiarism a couple of different ways, and one of the ways was he plays a song by the Hollies called The Air That I Breathe, and then he plays Radiohead's Creep and shows, like, here are two songs that have the same chord progression. It's so similar that the Hollies sued Radiohead and were ultimately allowed to have some writing credit and royalties from Creep because of the similarity between the two songs. Hugh Grant's character also points out that Lana Del Rey's Get Free shares that same chord progression and most of the melodic line from Creep, and in fact Radiohead, or Radiohead's agents or whatever, sued Lana Del Rey for plagiarism as well, which is wild, because Radiohead had already been sued and essentially admitted to, or at least acknowledged, the similarities with the Hollies, and then Radiohead goes and sues Lana Del Rey. This is a really sticky subject in music already, like, this issue of how similar one song can be to another before you start to say, hey, wait a minute, I think you actually copied this other piece of music, as opposed to you both independently arriving at the same structure from different pathways.
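(To make that kind of similarity concrete, here's a toy Python check for whether two chord progressions share the same transposition-invariant root movement. The first progression is the one commonly cited in those Creep disputes; the second is just the same shape moved to another key for illustration. Real musicological and legal analysis weighs melody, rhythm, and much more, and this toy even ignores major-versus-minor quality.)

```python
# Toy illustration: do two progressions share the same root movement,
# regardless of key? This is the kind of surface similarity at issue in
# the Hollies / Radiohead / Lana Del Rey disputes, nothing more.
NOTE_TO_SEMITONE = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
                    "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}

def root_intervals(progression):
    """Semitone steps between consecutive chord roots, mod 12.

    Ignores chord quality ("Cm" is treated as root C), melody, and rhythm.
    """
    roots = [NOTE_TO_SEMITONE[chord.rstrip("m")] for chord in progression]
    return [(b - a) % 12 for a, b in zip(roots, roots[1:])]

creep = ["G", "B", "C", "Cm"]                # the progression cited in Heretic
same_shape_other_key = ["A", "C#", "D", "Dm"]  # illustrative transposition

print(root_intervals(creep) == root_intervals(same_shape_other_key))  # True
```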
Speaker 1: But imagine how much more complicated it gets if a company were to argue that an AI model was plagiarizing protected works by generating songs that conform to the rules of music in a way that's already similar to existing pieces. In June of this year, the Recording Industry Association of America, the RIAA, filed a lawsuit against Suno. The RIAA claims that Suno has engaged in copyright infringement by training its models on protected works. The lawsuit seeks damages of up to one hundred and fifty thousand dollars per copyrighted work that was used in training. Now, I'm going to guess that would be a huge amount of money, but I don't know for sure, because we can't actually see the data set that Suno used. Notably, it is behind closed doors, so we aren't allowed to see how many songs, or what kinds of songs from what catalogs, Suno dipped into in order to train up its AI. I mean, it had to be a lot. When you're training AI, you need lots and lots and lots and lots of training material to get your model to start to home in on what it is you want it to do. In the case of image recognition, that could be millions of images in order to train a model to recognize one thing versus another. So presumably the training material for Suno was pretty darn extensive. I think it's pretty safe to say that every single record label out there likely has some music from their catalog that was used in the training. I don't know that for sure, that's just a guess, because, again, you know, it's not like you can have the AI scan the radio for, like, a second and then say, oh yeah, I got this, and then it could just go and generate all that.
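(To put "a huge amount of money" in rough terms: that one-hundred-fifty-thousand-dollar figure is a per-work maximum, so the exposure scales linearly with however many protected works were in the training set. The catalog sizes below are hypothetical, since, again, the data set isn't public.)

```python
# Back-of-the-envelope illustration of why per-work statutory damages
# scale so fast. The $150,000 figure is the per-work maximum the RIAA
# suit cites; the catalog sizes are hypothetical, since Suno's training
# set isn't public.
MAX_STATUTORY_DAMAGES_PER_WORK = 150_000  # USD, upper bound per work

for works_in_training_set in (1_000, 100_000, 1_000_000):
    exposure = works_in_training_set * MAX_STATUTORY_DAMAGES_PER_WORK
    print(f"{works_in_training_set:>9,} works -> up to ${exposure:,}")
```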
Speaker 1: Okay, well, we've got more to say about Suno, plus some other AI startups. Before I get into any of that, though, let's take a quick break. Okay, we're back. We're still talking about Suno here, the AI company that makes a tool that allows you to create a song just based off a text prompt. So I think Suno really illustrates the pros and cons of generative AI in a neat little package. So the pros are: if you are not musically inclined, but you need to create some original music for whatever reason, Suno is incredibly easy to use, and you can even continue to work on a piece. Like, let's say you prompt Suno to make something, you listen to it, and you're like, well, that's close, but it's not really what I want. You can keep on refining it by continuing to type in prompts and get it shaped into a place where you really like it. Now, if you don't know any musicians or you can't afford to hire musicians, arguably this is a tool that could work for you. But personally, I feel really icky about the idea of leaning too hard on AI to create art of any kind, or even calling AI-generated material art in the first place. That doesn't seem right to me. I like to think that art has to have some sort of human intent behind the piece. There needs to be a human motivation beyond just a text prompt for it to be art, at least in my opinion. But art is a very subjective thing. And also, I'm very old-fashioned, so it's entirely possible that I'm out of step here, but it doesn't feel that way to me. Now, the cons are that Suno could really impact human musicians who otherwise could be hired to devote their skill and craft toward creating new stuff, thus making it harder for people who have honed their art to make a living off of that art. Like, think of the countless hours that songwriters and musicians and vocalists spend to get good at what they do. You know, people aren't just naturally flawless in their execution, right?
Speaker 1: They take years of practice to get to where they are. And they do this not just from a love of the art, although I'm sure that's a big part of it, but also with the desire to make a living off of all that hard work. Well, tools like Suno arguably create a shortcut that might be very tempting for some folks out there to just bypass all that messy human stuff and the costs that come with it, and go for, you know, let's just go with AI. If it's popular, who cares if humans didn't make it? Because, you know, on the producer side, anyway, the goal is to make money off of music. You don't really care where the music came from or how it was generated. You just care that it makes money. You don't even care if it resonates with your audience. You just need it to sell well. So that makes me feel icky. The balance between commerce and art is always a tricky thing. I had a really good conversation with an artistic director of a theater about this and how that's always a difficult balance. Like, how do you balance between commerce and art? And it's tricky, because, you know, we don't live in a world where we can just create art and not have to worry about paying the bills. We have to do both. Then there's also the plagiarism issue. So at what point do you say an AI tool is creating music the way a person might create music? So, for example, people lean on inspiration from previous works all the time, right? Like, you might hear something in a musical piece and think, oh, that's interesting, and you want to build off of it. I mean, sampling is built off this. There are entire genres of music that are rooted in this idea, where you take something that was used in one piece and you repurpose it to create something brand new and transformative. Doing that is okay, right? Even copyright law says that's okay if it's transformative, to a certain extent.
Speaker 1: Like, you can get into trouble if you don't do it correctly, but you kind of get to a point where you have to ask: is the AI, you know, taking inspiration from this previous corpus of music, or is it actually just copying something that's already been done because the AI has determined this is the best way to do it? That's tricky. Now, we'll have to see where companies like Suno go in the future. I know that media companies are simultaneously curious and worried about this technology. If the media companies can make a buck off the tech, then it will come as a surprise to no one when we're flooded by AI-generated tunes. But if those media companies determine that working with AI puts their relationships with human artists at risk, you know, these are artists who could potentially be earning media companies billions of dollars every year, then it's a little different, right? Like, it would be a dumb move to experiment with AI if in the process you're alienating the superstars you regularly work with. I say this because I've listened in on meetings where there have been discussions about using AI to create music, and these were with people who worked with companies that had tight relationships with musicians and artists, and I had to ask, like, what do you think that's going to do to your relationships, your professional relationships, with these other people who clearly have a vested interest in not having AI flood the market? And that really gave pause to the meeting. I'm known as the Debbie Downer at those types of meetings, because instead of, you know, kind of blue-skying the whole AI thing, I say, let's take this into context. Let's really think critically about this, because otherwise we're putting lots of people's careers at risk. And not only that, but potentially we are suppressing artistic expression, because, again, I don't think you can call it artistic expression if it's AI-generated.
Speaker 1: It might be conforming to certain conventions and rules in an effort to create music that sounds like, you know, whatever the prompt was, but that's not the same thing as artistic expression. All right, let's switch to a different startup. This is one that is also a couple of years old, but it completed its Series A funding just this year, and that's webAI. Now, I guess I should kind of cover what Series A funding actually means. So in the world of startups, and this is not just in tech, this is startups in general. But in the world of startups, there are typically multiple rounds of investment funding that are necessary for a business to get to a point where it can operate like a business. So first up, you've got your seed funding, and this is used to get a company established in the very early stages. You might use seed funding to do things like, you know, incorporate, design a logo, or secure some early office space, which might be like a sharing situation. Early on, there are a lot of businesses that started off taking up a corner of an existing business's office, for example. And I think of seed money kind of like how buskers will put a few dollars in their hat before performing on the street. Right? You see that street musician, they get their hat out, it's got some money in the hat. Often these musicians will put a couple of bucks in there to start with, and that money is seed money, and you're hoping it's going to grow and blossom as more people throw dollars into the hat. Seed funding is kind of similar. Seed money typically comes from folks like angel investors, so these could be really influential people in the sector, maybe people that the startup owners already know personally. Could be someone they went to college with, or an advisor, or a relative. It could be something like that. It could also be from an incubator group.
Speaker 1: Incubators exist in order to foster ideas that could potentially grow into viable businesses. The incubator gets a stake in whatever the company is, and thus profits if the company does well, and in return, the startup gets a little bit of stability and access to some assets. But seed money only goes so far, and typically it's enough to keep a company afloat for just a relatively short amount of time. What typically comes after that is Series A funding, and in this series the startup opens itself up to investments beyond that initial group of seed money investors. This could then be followed by additional rounds of investment. You can have Series B, Series C, et cetera. And it does mean that the investment crowd gets more dense as you go on, because you get more investors, which means you have to pay out more shares. In the long run, the goal is to either become a scaled operation that has sustainable growth, potentially going public at some point, and everyone makes their money back plus interest, or you get swallowed up by some bigger fish for a handsome payout and everyone goes home rich. So webAI concluded its Series A funding this year and saw about sixty million dollars flood into the company coffers. Analysts value webAI in the seven hundred million dollar range. So yeah, they got sixty million in investment, but they're valued at around seven hundred million dollars, so they're closing in on that unicorn status, where you hit a billion dollar valuation. So what the heck does webAI do? It's interesting, because they're called webAI, but in fact they're focused on creating on-device AI solutions, by which I mean instead of relying on a cloud-based AI server farm, you do your AI processing on devices that are local to you, whether it's an individual or a business.
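(A quick back-of-the-envelope aside on that "investment crowd gets more dense" point: here's a toy dilution model. Every percentage in it is hypothetical, and real term sheets add wrinkles like option pools and liquidation preferences, but the compounding math is the heart of it.)

```python
# Toy model of how founder ownership dilutes across funding rounds.
# All percentages are hypothetical; real deals involve option pools,
# preferences, pro-rata rights, and so on.
founder_stake = 1.00  # founders start out owning 100% of the company

rounds = [("Seed", 0.15), ("Series A", 0.20), ("Series B", 0.20)]
for name, new_investor_share in rounds:
    # Issuing new shares to investors shrinks everyone else proportionally.
    founder_stake *= (1 - new_investor_share)
    print(f"After {name}: founders hold {founder_stake:.1%}")
# Prints 85.0%, then 68.0%, then 54.4%: each round pays out more shares.
```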
Speaker 1: Now, this on-device part is important, because most businesses aren't necessarily keen on using an off-site AI solution, out of concern that the AI provider could possibly train AI models on the company's proprietary data. Right? Let's say that you are an analysis firm and you're using AI to assist in the analysis. If you find out that the company you're using, the one that provides these AI tools, is actually training its AI on your information, well, that brings your information security and privacy into question. If your business handles sensitive information, and let's face it, most businesses do to at least some extent, then you could have legitimate concerns about a third party gaining access to that data, and further, potentially exploiting that access by training its own models. You could even get into some real legal trouble if you are working with other parties, like partners, that have agreements that would prevent you from legally being able to share their information in the first place. So there are a lot of sticky situations around this particular approach to business. And this is not a hypothetical issue, this concern about AI potentially compromising security and privacy, because we have seen examples of various AI tools, generative AI tools, pulling information from other users' interactions with the AI. Now, usually this is because there's been some sort of error on the back end, on the AI side. So theoretically, each customer's interactions should be siloed from everybody else's, but now and again mistakes happen, and you might be in a conversation with an AI chatbot and you end up starting to see information that was inserted by some other customer. Right? You start to see interactions that they had with the AI, and that obviously is a huge breach of privacy. That has happened a few times over the last couple of years.
Speaker 1: Now, companies obviously don't want those sorts of mistakes to include their intellectual property. That could include things like code for software, or business strategies, or, you know, trade secrets, all that kind of stuff. You don't want that to suddenly get just dumped into an AI large language model, and then someone else is like, hey, what does company XYZ think about this? And then you find out, because those trade secrets have been included in the training material. That would be bad. So a better solution, if you plan on making use of any sort of AI process, might be to make sure that you can handle all that processing yourself. Now, depending on what your business does, that may or may not be practical from a cost standpoint. Like, OpenAI has an enormous, enormous number of computers running AI processes, and most companies would not be able to replicate that. And if you're talking about a big company that needs to run some hefty AI processing functions, going that server route might not be a viable option. Those server farms for OpenAI are so large and they're so expensive that for a while this year it looked like OpenAI might even spend itself out of business just in order to pay the bills. But investors ultimately did swoop in and injected OpenAI with a mega truckload of cash, so bankruptcy has been staved off for now. Like, you might think of OpenAI as the next too-big-to-fail company, even though OpenAI is spending money at, like, a truly eye-popping rate, because AI processing is hard. It takes lots of processing power if you're doing it the way OpenAI does it. All right, we're gonna take another quick break. When we come back, I'll talk more about webAI, and then we'll follow up with a third AI startup. Okay, we're back, and we're going back to webAI. So webAI has developed some products that allow for local AI processing. So none of this goes over the cloud, none of it goes over the Internet.
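(As a concrete picture of what that local-only pattern looks like in practice, here's a minimal sketch using the open-source Hugging Face transformers library with a small open model. To be clear, this is not webAI's product or API, just the generic shape of on-device inference: after the one-time model download, the prompt and the output stay on your machine.)

```python
# Generic on-device inference sketch (not webAI's actual product or API).
# After the one-time model download, prompts and outputs stay local
# rather than being sent to a cloud provider.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

prompt = "Internal notes, do not share externally:"
result = generator(prompt, max_new_tokens=30, do_sample=False)
print(result[0]["generated_text"])  # computed entirely on this machine
```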
Speaker 1: It's all contained on premises. Some of the products webAI is offering sound pretty nifty. The company claims to take a tailored approach for every single customer, and they optimize the strategy to fit whatever that customer's needs happen to be. The company also says that no coding is needed to use webAI to build out functions, but programmers can take advantage of advanced settings if they want to stretch themselves a bit. Now, I haven't used webAI's tools, so I don't know how webAI integrates AI into the different applications and processes that businesses have. Like, I don't know what that looks like. Like, I imagine it's got to be more complicated than just: see this, do this automatically. It has to be a little more complicated than that. I don't know how much more complicated, because I haven't had hands-on time with the tools. But some of the applications webAI lists on their web page include stuff like airline and airport logistics. You know, using webAI to help reduce turnaround times and improve efficiency at airports, so that planes are spending less time at gates and you have more flights arriving on time or ahead of schedule, just improving efficiency in general. Or using webAI to help develop educational applications to customize learning approaches for classes, or even on a student-to-student level. That's something that's been talked about for a very long time, this futuristic vision of: imagine a world where every student has an education that is catered to their style of learning. That's obviously something that teachers cannot do right now. It's impossible. If you have a class of thirty students, you don't have the time to be able to craft and design and maintain a teaching plan for each and every student. But with technology, the idea seems closer. It may be that we can never really achieve it, but it seems like it's possible. Then webAI, man, I just say webi all the time.
Speaker 1: webAI also suggests medical applications that could improve the quality of patient care. And if all of this is sounding vague, that's because webAI is offering a platform upon which many different services and products can be built, so it's impossible to go through every single variation that webAI could empower. It's more like: these are some examples of what the tools are able to do. They're able to improve the performance of various processes in all these different fields. By the way, this is one of those approaches in AI that I can really get behind. I feel that creating on-premises processing capabilities and optimizing an approach that makes sense for specific companies, that's the way to go. You know, don't do a one-size-fits-all approach, because that just isn't realistic. And I'm not saying that cloud-based AI services don't have a place, but cloud-based AI services concern me for lots of reasons, privacy and security being two of the big ones. Plus, you know, some AI companies are making some choices that I personally find a little questionable. Cough, OpenAI pairing up with Palmer Luckey's defense company, cough. But how about we cover an AI startup that has a much more focused purpose? This is the third and final of the three that I wanted to highlight today. That brings us to Overjet. This is an AI company that caters to the world of dental care. Yep, we're bringing together the Terminator and dentistry to create an unstoppable tooth-scraping supervillain. All right, I'm going a little far. I apologize. I recently rewatched Little Shop of Horrors and it has rubbed off on me. My apologies to all the dental hygienists and dentists out there. Now, this year, Overjet secured a Series C round of funding, so this was the third round after seed investment. In that Series C funding round, they raised more than fifty-three million dollars in the process.
Speaker 1: The company was originally founded back in twenty eighteen, and according to a press release, the company's mission is to, quote, make dentistry patient-centric by providing dental professionals with the AI tools they need to operate efficiently and give patients exceptional care, end quote. So how do they do this? Well, from what I can gather, one way is for Overjet to use image analysis tools to help diagnose dental diseases in patients and suggest methods of care to help treat or prevent dental issues. So, for example, if you were to get X-rays done at your dental appointment, Overjet could be used to analyze those X-rays and diagnose any issues. And maybe it tells you, hey, it looks like on this one side of your mouth you've got a little bit more damage. Like, maybe you've got more buildup of plaque, or maybe you have the beginnings of cavities over here. It suggests that perhaps you're not reaching your teeth effectively when brushing that part of your mouth, and that it's something to be mindful of. Or, more seriously, it might detect early signs of oral disease and give your dentist more time to address the problem before it becomes much more serious. Moreover, a goal of Overjet is to create a sort of centralized point of information for every patient, and that would give dentists and other doctors, plus patients and even insurance companies, a common foundation to work from. So the goal is to smooth out any rough spots or miscommunications that could otherwise crop up between these different parties and to make sure everyone has the same understanding. And I could definitely see where that could be helpful, right? Where you have this tool that dentists could use to say to the insurance companies, for example, that this is something that absolutely is covered and needs to be reimbursed or whatever.
Because we have this common 573 00:34:29,840 --> 00:34:32,719 Speaker 1: point of contact where we can have an understanding of 574 00:34:32,760 --> 00:34:36,240 Speaker 1: what's going on with this particular patient. Overjet is actually 575 00:34:36,239 --> 00:34:39,799 Speaker 1: the first artificial intelligence company to receive clearance from the 576 00:34:40,000 --> 00:34:44,399 Speaker 1: US Food and Drug Administration, or FDA, to make use 577 00:34:44,480 --> 00:34:48,760 Speaker 1: of AI in the detection and diagnosis of oral disease. 578 00:34:49,280 --> 00:34:52,160 Speaker 1: And I think that's an incredible achievement. I mean, you 579 00:34:52,239 --> 00:34:54,759 Speaker 1: have to be able to pass lots of inspections and 580 00:34:54,800 --> 00:34:57,520 Speaker 1: analysis in order to do that. The FDA doesn't just 581 00:34:57,600 --> 00:35:01,480 Speaker 1: rubber stamp stuff. And this is definitely an AI 582 00:35:01,560 --> 00:35:03,920 Speaker 1: application that I really like. You know, anything that can 583 00:35:03,960 --> 00:35:07,160 Speaker 1: help give more precise care to patients and improve that 584 00:35:07,239 --> 00:35:09,439 Speaker 1: quality of care, to me, that's a really good thing, 585 00:35:09,800 --> 00:35:13,280 Speaker 1: as long as the technology performs reliably and consistently across 586 00:35:13,360 --> 00:35:17,160 Speaker 1: all patients. Obviously, we have seen cases of this. I'm 587 00:35:17,160 --> 00:35:20,400 Speaker 1: not talking about medical AI necessarily, but we have seen 588 00:35:20,600 --> 00:35:25,200 Speaker 1: examples of AI that have performed really well with one 589 00:35:25,320 --> 00:35:29,160 Speaker 1: set of people and not so well with other sets 590 00:35:29,160 --> 00:35:31,800 Speaker 1: of people. I'm thinking primarily of stuff like facial recognition 591 00:35:31,880 --> 00:35:37,760 Speaker 1: technology and how it is not as reliable when looking 592 00:35:37,800 --> 00:35:41,640 Speaker 1: at anyone who isn't a white dude. Essentially, if you're 593 00:35:41,680 --> 00:35:44,080 Speaker 1: a white dude, it works pretty well, and then if 594 00:35:44,080 --> 00:35:47,320 Speaker 1: you're not a white dude, the reliability of the tool 595 00:35:47,440 --> 00:35:50,959 Speaker 1: starts to decline. Let's just say I want to make sure 596 00:35:51,000 --> 00:35:54,520 Speaker 1: that any medical AI doesn't fall into those same sorts 597 00:35:54,560 --> 00:35:58,120 Speaker 1: of biases, where just because something might be true for 598 00:35:58,200 --> 00:36:01,120 Speaker 1: one subset of the population doesn't mean it's going to 599 00:36:01,160 --> 00:36:03,879 Speaker 1: be true for everybody. And again, this takes us back, 600 00:36:03,960 --> 00:36:06,960 Speaker 1: sort of like with education, to this futuristic view of 601 00:36:07,000 --> 00:36:11,520 Speaker 1: a world where healthcare is customized down to every single patient. 602 00:36:11,880 --> 00:36:15,760 Speaker 1: And that's the dream, right, where every person is given 603 00:36:16,080 --> 00:36:22,080 Speaker 1: individualized healthcare so that they get the optimized approach to 604 00:36:22,400 --> 00:36:26,360 Speaker 1: taking care of themselves, either to prevent illnesses, diseases, 605 00:36:26,840 --> 00:36:29,600 Speaker 1: and conditions, or to treat the ones that they have.
606 00:36:30,120 --> 00:36:33,120 Speaker 1: Because we don't live in a one size fits all world, you know, 607 00:36:33,239 --> 00:36:36,160 Speaker 1: what works for me might not work for you. In fact, 608 00:36:36,280 --> 00:36:40,759 Speaker 1: I personally experienced this the hard way this year. You 609 00:36:40,880 --> 00:36:44,319 Speaker 1: might recall, at the end of twenty twenty three, I 610 00:36:44,440 --> 00:36:46,560 Speaker 1: had what I like to refer to as my little 611 00:36:46,600 --> 00:36:50,359 Speaker 1: medical whoopsie, and I was sent to the emergency room. 612 00:36:50,680 --> 00:36:53,759 Speaker 1: And at the end of that, I was prescribed a 613 00:36:54,080 --> 00:36:58,560 Speaker 1: blood pressure medication, and it turned out that that particular type 614 00:36:58,560 --> 00:37:01,360 Speaker 1: of blood pressure medication was not effective for me. I 615 00:37:01,400 --> 00:37:04,680 Speaker 1: didn't know it at the time, not until three days later, 616 00:37:05,000 --> 00:37:08,680 Speaker 1: when I was so poorly off that I had to 617 00:37:08,719 --> 00:37:12,200 Speaker 1: be admitted to the intensive care unit at the hospital. 618 00:37:12,440 --> 00:37:15,920 Speaker 1: So I upgraded from ER to ICU. And part of 619 00:37:15,960 --> 00:37:18,520 Speaker 1: the reason for that was that the blood pressure medication 620 00:37:18,560 --> 00:37:21,280 Speaker 1: I was prescribed wasn't cutting it. It turned out my kidneys 621 00:37:21,320 --> 00:37:25,200 Speaker 1: were really badly damaged. Fun times. They're much better now, 622 00:37:25,239 --> 00:37:28,240 Speaker 1: by the way, just so y'all know. So at that point, 623 00:37:28,239 --> 00:37:30,480 Speaker 1: I was then put on a different kind of medication, 624 00:37:30,960 --> 00:37:35,320 Speaker 1: and then they fine tuned that so that it would 625 00:37:35,320 --> 00:37:39,160 Speaker 1: work best for me, and that way, you know, I 626 00:37:39,160 --> 00:37:42,319 Speaker 1: wouldn't die. And that's kind of how it has to 627 00:37:42,360 --> 00:37:46,160 Speaker 1: go right now for most patients. Like, it's not something 628 00:37:46,200 --> 00:37:50,279 Speaker 1: where a doctor can speak with one hundred percent confidence 629 00:37:50,520 --> 00:37:54,120 Speaker 1: that a specific medication at a specific dosage is going 630 00:37:54,160 --> 00:37:56,719 Speaker 1: to do the trick. Often it requires a lot of 631 00:37:57,400 --> 00:38:01,279 Speaker 1: trial and error. The hope is that with AI we 632 00:38:01,400 --> 00:38:03,920 Speaker 1: can get to a future where patients can receive a 633 00:38:04,000 --> 00:38:08,280 Speaker 1: much more individualized approach to care that minimizes the risks 634 00:38:08,280 --> 00:38:12,400 Speaker 1: of complications and hopefully the impact of stuff like side effects. 635 00:38:12,400 --> 00:38:14,640 Speaker 1: Like, you're never going to get rid of side effects, 636 00:38:14,760 --> 00:38:19,160 Speaker 1: but hopefully you'd be able to use these complex technologies 637 00:38:19,400 --> 00:38:23,440 Speaker 1: to design a type of care that gives a patient 638 00:38:23,480 --> 00:38:26,239 Speaker 1: a higher quality of life. That's the goal.
Now we 639 00:38:26,280 --> 00:38:29,160 Speaker 1: still have a very long way to go before we 640 00:38:29,239 --> 00:38:33,040 Speaker 1: get there, but I feel like Overjet's story is evidence 641 00:38:33,080 --> 00:38:37,120 Speaker 1: that it's at least a potentially achievable goal, maybe not 642 00:38:37,200 --> 00:38:40,320 Speaker 1: one hundred percent achievable. I don't want to paint AI 643 00:38:40,920 --> 00:38:44,080 Speaker 1: as being a perfect solution that's going to get rid 644 00:38:44,080 --> 00:38:46,960 Speaker 1: of all these problems and that we'll be magically living 645 00:38:46,960 --> 00:38:49,640 Speaker 1: in a Star Trek universe. I don't want to suggest that. 646 00:38:49,960 --> 00:38:52,160 Speaker 1: I do want to say that I think it can 647 00:38:52,320 --> 00:38:58,560 Speaker 1: help us. Assuming it is responsibly and accountably designed and maintained, 648 00:38:58,840 --> 00:39:02,239 Speaker 1: it can help us reach a better future, depending on 649 00:39:02,280 --> 00:39:04,600 Speaker 1: how we implement it. So there are lots of ways 650 00:39:04,640 --> 00:39:07,879 Speaker 1: where I think AI is going to be a good 651 00:39:07,920 --> 00:39:11,040 Speaker 1: thing moving forward. I know on this show I can 652 00:39:11,080 --> 00:39:14,439 Speaker 1: get really critical of AI, but that's because it does 653 00:39:14,480 --> 00:39:16,319 Speaker 1: have the potential to do good things. It also has 654 00:39:16,320 --> 00:39:20,160 Speaker 1: the potential to do really bad things, either intentionally or, 655 00:39:20,480 --> 00:39:24,960 Speaker 1: as we have often seen, unintentionally. Intentionally, I worry about 656 00:39:25,160 --> 00:39:29,720 Speaker 1: companies like open AI pairing up with defense contractors because 657 00:39:29,840 --> 00:39:33,800 Speaker 1: I don't see that ending well. I see that going 658 00:39:33,840 --> 00:39:37,560 Speaker 1: to a place that's very dark, and honestly something that 659 00:39:37,600 --> 00:39:40,680 Speaker 1: I thought would only exist in science fiction throughout my life, 660 00:39:40,680 --> 00:39:44,360 Speaker 1: and it turns out I was being naive. Unintentionally is 661 00:39:44,760 --> 00:39:48,799 Speaker 1: arguably just as bad, because it shows a lack of 662 00:39:49,200 --> 00:39:53,480 Speaker 1: oversight on the part of whoever's developing the AI process. 663 00:39:53,840 --> 00:39:57,440 Speaker 1: And often when we have our sights set on a 664 00:39:57,480 --> 00:40:00,680 Speaker 1: specific goal, we can have blinders put up to the 665 00:40:00,719 --> 00:40:04,680 Speaker 1: potential consequences of our choices, and that can be a really 666 00:40:04,680 --> 00:40:07,759 Speaker 1: bad thing too. But that doesn't mean we should shy 667 00:40:07,800 --> 00:40:09,640 Speaker 1: away from AI. It just means that we have to 668 00:40:09,640 --> 00:40:14,480 Speaker 1: be extremely mindful and careful as we develop and deploy 669 00:40:14,920 --> 00:40:18,760 Speaker 1: AI solutions, because the potential for them to really improve 670 00:40:18,760 --> 00:40:21,640 Speaker 1: our lives is definitely there. We just have to make 671 00:40:21,680 --> 00:40:24,919 Speaker 1: sure we're doing the right stuff in order to get there. 672 00:40:25,480 --> 00:40:28,120 Speaker 1: That's it for this episode, just a quick look at 673 00:40:28,160 --> 00:40:31,279 Speaker 1: three AI startups.
I mean, there's obviously lots more 674 00:40:31,320 --> 00:40:33,560 Speaker 1: out there, but I wanted to kind of pick three 675 00:40:33,600 --> 00:40:36,120 Speaker 1: that would be fun to talk about for today's episode. 676 00:40:36,239 --> 00:40:40,400 Speaker 1: We'll be back with more new episodes, including hopefully some 677 00:40:40,480 --> 00:40:43,239 Speaker 1: special guests in the very near future, and I will 678 00:40:43,239 --> 00:40:52,560 Speaker 1: talk to you again really soon. Tech Stuff is an 679 00:40:52,560 --> 00:40:58,120 Speaker 1: iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app, 680 00:40:58,239 --> 00:41:01,400 Speaker 1: Apple podcasts, or wherever you listen to your favorite shows.