Speaker 1: Welcome to TechStuff, a production from iHeartRadio. Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeart Podcasts, and how the tech are ya? Now, normally I save tech news items for Tuesdays and Thursdays, but over this past weekend a pretty big sequence of events happened, and it really merits a deeper discussion. I'm sure most of you have at least heard something about this story. The short version is Sam Altman, the CEO of OpenAI, received his walking papers from the company's board of directors. Then the board kind of flipped out and begged him to come back to the company, and he ultimately decided, nah, I'm good, y'all can do this on your own. And last I heard, he has now joined Microsoft's advanced AI department. Which is a heck of a weekend. So today I thought we would talk a bit about Altman, we'd talk a bit about OpenAI, we'd chat about what went down behind closed doors this past Friday, why the board of directors fired Altman, why they then switched gears so quickly, and what it all means going forward.

Speaker 1: Now, I did an episode titled The Story of OpenAI at the beginning of this year, which published in January. I am going to retread a lot of that same ground here in a slightly different context, because it's necessary to really unravel what was going on over the last weekend. So first up, who is Sam Altman? Well, he grew up in Saint Louis, Missouri, and as a kid he became really interested in programming. According to The New Yorker, he was programming as early as age eight, and he went so far as to take apart a Macintosh computer in order to learn how it worked. He also challenged social restrictions and taboos. When a Christian group announced a boycott of an assembly that was supposed to be focused on sexuality, Altman came out to his community as gay, and he challenged them to adopt an open attitude toward different ideas.
Speaker 1: And he was just a teenager at the time, so very much someone who is curious, motivated, and, by most accounts I've read, fearless. Altman attended Stanford, but he was only there for two years. He studied computer science while he was there. In fact, he studied artificial intelligence under some of the leading thinkers in the discipline at Stanford, but he dropped out of college in order to work on an app and a business idea. So in many ways, it was the stereotypical founder story of Silicon Valley, right? You go to Stanford, and when you're there, you're really there to make connections. You don't bother completing your studies, you drop out of school, you make a company, and then you get rich. It's kind of like, you know, the whole idea of step one, go to Stanford, step two, drop out, step three, profit.

Speaker 1: I guess you could argue that if you can make a successful tech business, there's no real point to completing your studies. I mean, if your studies are all about learning the technology, and it turns out you already have a good mastery of that and you can make a profitable business, why would you continue to spend money going to school? Unless, of course, maybe you would grow more as a person and develop a deeper appreciation and understanding of things that could perhaps help you when you make decisions in the future. But don't listen to me. I graduated with a degree in the humanities, so I have these wacky ideas about how the experience of college is about more than just learning a subject. But that's beside the point. Let's get back to Sam Altman.

Speaker 1: So the app he was working on with a few friends was called Loopt, and it was meant to let users share their location data with selected other users. So you could make friends with people on the app, and then you could share your precise location with that person, kind of a shorthand way of saying "here I am." And it could facilitate stuff like real-world meetups. Like, imagine that you're heading to a concert venue and it's a big venue, there's a lot of different entrances and stuff. You get there, you're going to meet up with your friends. You use this app to say, this is specifically where I am, so that you can find each other. That's kind of a use case. The Loopt team applied to the Y Combinator accelerator to become part of its program.

Speaker 1: So let's talk about Y Combinator for a moment. It is, as I said, a startup accelerator organization. Y Combinator's purpose is to provide early funding to promising startup ideas in order for them to start to get off the ground. So it's an early investment in a startup so that it can at least get a chance to mature into an actual business. Now, in return for this early investment, Y Combinator takes a small percentage of ownership in the startup. So let's say that Y Combinator provides, you know, a fairly modest sum in the early days. Like, it was a lot of money, don't get me wrong, maybe one hundred thousand dollars or maybe one hundred and twenty thousand dollars. That's a lot of money, but it's a tiny amount when you think about what a company needs to actually run. So this is really just to get a startup to go from idea to something slightly more, you know, coherent. But in return, Y Combinator gets, like, you know, seven percent ownership of that startup. Now, let's say that startup is something like Dropbox, and then years down the road it's worth, you know, more than ten billion dollars. Well, that becomes a heck of a return on investment, right? You can have a huge profit as this startup accelerator, even if just a few of the startups really hit it big.
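To put rough numbers on that return-on-investment math, here is a small back-of-the-envelope sketch in Python. The figures are illustrative assumptions only (a hypothetical $120,000 check for a hypothetical 7 percent stake, and a hypothetical $10 billion, Dropbox-scale outcome), and it ignores real-world complications like dilution across later funding rounds.

```python
# Back-of-the-envelope accelerator math (illustrative numbers only).
# Assumes a single early check for a fixed stake and ignores dilution.

def accelerator_return(investment: float, stake: float, exit_valuation: float):
    """Return (value of the stake at exit, multiple on the original check)."""
    stake_value = stake * exit_valuation
    multiple = stake_value / investment
    return stake_value, multiple

investment = 120_000             # hypothetical seed check
stake = 0.07                     # hypothetical 7% ownership
exit_valuation = 10_000_000_000  # hypothetical $10B outcome

value, multiple = accelerator_return(investment, stake, exit_valuation)
print(f"Stake worth ${value:,.0f}, roughly {multiple:,.0f}x the original check")
# One outcome like this can cover a great many startups that go nowhere.
```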
Speaker 1: Ideally, you want them all to hit it big, and then you get huge payouts down the line, and you become an important part of the whole tech startup ecosystem, which is exactly what Y Combinator set out to do. Now, Y Combinator launched in two thousand and five, and that was the same year that Loopt would become part of its inaugural class of startups, essentially. Now, not all startups make it, obviously. Loopt at least appeared to do well initially, at least on paper. Altman and his colleagues secured a couple of major rounds of investment funding, Series A and Series B. The company's valuation hit more than one hundred and seventy million dollars. But they were running into a tiny little problem: they had developed this app, and they couldn't convince people to use it. They were all thinking that, you know, Loopt was going to be this really useful and popular tool, everyone was going to download it. But it turns out the general public didn't seem to agree with that, and so in twenty twelve, the Loopt team accepted an offer from the company Green Dot, and they sold Loopt for around forty three million dollars. That's also a lot of money, but it did not cover the amount of money that venture capitalists had invested into Loopt, so it was a negative return for investors. You know, sometimes you bet on the ponies and you lose. But Altman walked away with around five million bucks, so it was a pretty decent return for him, even though the app that he had worked on for several years never really gained traction.

Speaker 1: Now, in the wake of this disappointment, Altman founded a venture capital company of his own, and it was called Hydrazine Capital. He sunk most of the personal wealth that he had earned from this sale into this new venture capital company, and he also raised millions more from other investors. He focused on investing in companies that were in the Y Combinator program, and he was largely successful in this. He was picking some really good startups, and he was backing ones that would become a big deal in the tech space a few years down the road, and so he was seeing big returns on those investments. Within just a few years, his venture capital firm increased in value by an order of magnitude. But Altman wasn't super happy doing this work. He didn't find it rewarding on a personal level. Financially, sure, but on a personal level, he didn't really like the work, so he then extricated himself from the venture capital company to try and do something else. Now, around this same time, the folks who were behind Y Combinator were looking to hand off the whole accelerator program to someone else to lead it, and that someone else ended up being Sam Altman. The guy had gone through the process with Loopt, and now he would be running it, and he reportedly agreed without hesitation. He was eager to do this job. It was something that he didn't even necessarily know he wanted to do, but once he was offered it, he was really enthusiastic about doing it. So he really ramped up Y Combinator, and Altman began recruiting, you know, startups that were focused on science and technology, so we're talking about bleeding-edge stuff like quantum computing or nuclear power or AI, that kind of stuff.

Speaker 1: Around this time, he started to be part of a group that included Elon Musk, and this group ostensibly wanted to develop artificial intelligence in a responsible, accountable way, while being extra careful not to do anything foolish, to make safe artificial intelligence. You know, they didn't want to go down the wrong path and do something like accidentally unleash Skynet and Terminators all over the place. So this group would become OpenAI. Now, the original OpenAI was a nonprofit organization, and the whole idea was to help in this effort to foster the development of artificial intelligence in a responsible and safe way.
Speaker 1: It wasn't some for-profit company pushing a generative AI chatbot at that time, and it was not yet a partner to massive companies like Microsoft. So Altman was running Y Combinator, and he also began to work with the OpenAI folks and tried to recruit various leaders in artificial intelligence to join OpenAI. In twenty fifteen, Sam Altman published a two-part blog post about machine intelligence and, quote, "why you should fear it," end quote. And it starts off with this comforting sentence, and I quote: "Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity," end quote. Now, Altman allows that other massive threats, like, say, an asteroid hitting the Earth, are possibly more likely to happen than superhuman intelligent machines run amok, but he also points out that a lot of the other threats we think of, like supervolcanoes and the climate crisis, might end up having a massive impact on the human population but probably wouldn't just wipe out humans in totality. But superhuman machine intelligence, he argued, did have that potential, and that is why he thought of it as being the most important, or perhaps most dangerous, threat.

Speaker 1: Altman goes on in those blog posts to point out that SMI doesn't have to be malevolent to be a threat. Just setting an SMI to complete a task, like trying to manage resources, could end up causing massive human harm. The SMI might determine that the biggest cause of resource depletion is the human race, so presto, you get rid of the people, and now you don't have to worry about these resources running out anymore. Now, that's an oversimplification, but you get the idea. Altman's point is, if you don't develop artificial intelligence in a way that is safe, you can get terrible consequences, whether that was your goal or otherwise. And of course there are malicious ways to use AI, right? You could develop AI in an effort to try and come up with new biological weapons, for example. That's often a scenario cited by concerned critics of artificial intelligence, and it's certainly something that could potentially happen. So again, Altman is saying, well, you need to have the right team responsible for developing AI in a way that is most likely to benefit people and to protect people from malicious or badly designed AI.

Speaker 1: Altman also makes an argument that machine intelligence could hit an inflection point once recursive self-improvement becomes a real possibility. That means, if we get to the point where we can create machines that are smart enough to reprogram themselves, and to reprogram themselves in a way that is better than what humans could do, so that these machines operate at a higher-than-human level of capability, a superhuman capability, if you will, then machines suddenly engage in self-improvement and can do so at increasingly shorter intervals. They get better at doing the thing that they're doing, so they get better at improving themselves, and they improve themselves over and over. And this becomes a version of the singularity, which is a moment where change is so sudden, and it's happening all the time, that effectively it becomes impossible to even describe the present; everything will change and continue to change at a rate that's beyond our ability to describe. Altman says we might be creeping toward that now, and maybe we're creeping toward it at a rate that's just impossible for us to notice because it's so gradual. That makes it really tricky, because it could go from happening so slowly that we can't notice it to happening so quickly that we are unable to describe it, with no point in the middle where we can say, wait a second. So in part two of his blog posts, Altman makes a clear argument. He says, quote, "The US government, and all other governments, should regulate the development of SMI. In an ideal world, regulation would slow down the bad guys and speed up the good guys.
Speaker 1: It seems like what happens with the first SMI to be developed will be very important," end quote. Essentially, what Altman is arguing here is that if ethical researchers develop a superhuman machine intelligence first, they can employ that SMI to prevent the development or deployment of malevolent or poorly built SMIs. So we unleash our good-guy Superman against their bad-guy General Zod, or, you know, whichever superhero-supervillain pairing you happen to like. Interestingly, this is going to come back again when we talk about Altman and his appearances around the world while talking about the potential for AI regulations. Before we dive any further into this, let's take a quick break to thank our sponsors, and we'll be right back.

Speaker 1: So Altman went so far in his blog posts as to say that he thinks, generally speaking, tech is often overregulated, but on the flip side, he doesn't want to live in a world that has no regulation at all. In some cases, you can see regulation as a necessary evil: maybe it does slow down innovation, or it has unintended consequences, but in the absence of regulation you can have some really poorly thought-out deployments that can cause a lot of harm. From twenty fifteen to twenty eighteen, OpenAI operated as a nonprofit organization. The organization championed the "open" part of its name, claiming that it would freely share research and its patents with AI researchers all around the world, all in an effort to ensure safety in AI development. Greg Brockman, one of the co-founders, identified a short list of top AI researchers, and the organization as a whole began to recruit several of them to join OpenAI as the first employees of the organization. The talent helped attract more talent. Some folks said they actually joined OpenAI because it was where you could work on really exciting research with the most brilliant and talented people in the discipline, even though it would mean you wouldn't be making as much money there as you could somewhere else. Even highly paid individuals at companies like Google found themselves switching jobs for the chance to work on something that they saw as important and challenging and potentially critical to the survival of humanity.

Speaker 1: One of those people, who would also be listed as a co-founder of OpenAI, was Ilya Sutskever, who would become chief scientist at OpenAI and would join the board of directors. Musk reportedly played a critical role in recruiting Sutskever over to OpenAI. Like, it went back and forth between OpenAI and Google, which really wanted to hold on to him, and reportedly Musk was a big reason why Sutskever eventually moved over to OpenAI. And Ilya Sutskever is also one of the people who would ultimately be part of the decision-making group that fired Sam Altman this past weekend. Anyway, we're up to twenty eighteen, and behind the scenes there was drama a-brewing, and much of it was in the cauldron known as Elon Musk. Not a big surprise there, right? Because Elon Musk is kind of a magnet for drama in the tech sphere and the business sector. So Musk was on the board of directors for OpenAI, but in twenty eighteen he left OpenAI entirely, and the official story was that Musk chose to step down because of a potential conflict of interest: there he was on the board of directors for an organization working on artificial intelligence, but he also was CEO of Tesla, a car company that was pushing hard to develop and deploy autonomous driving capabilities to the market, and autonomous driving is of course a subset of artificial intelligence. So stepping down was the responsible thing to do because of this potential conflict of interest between the two companies.
Speaker 1: There was, however, more to his decision than just that. According to Business Insider, Musk was not happy with OpenAI's progress. He compared it negatively to Google. He was saying Google is spending huge amounts of money and is getting ahead in artificial intelligence research, and Musk argued, specifically targeting Larry Page in this criticism, that Google was not paying any attention to safety, that safety was not a factor when it came to Google's approach to artificial intelligence. And so that was one of the things he raised when he was critical of OpenAI, saying, you're not doing enough. And he was kind of pointing at Sam Altman as the reason for that, that Altman's leadership was the reason why OpenAI was lagging behind. So Musk then reportedly went to other co-founders of OpenAI, including Sam Altman, and essentially he said, I want to run OpenAI. And he was told in no uncertain terms that this would not happen, and so, again according to Business Insider, Musk decided to take his ball and leave. His ball also included a sizable investment, or donation, to OpenAI, so when he left, he left with a whole bunch of money that otherwise was going to go to the organization and didn't. Musk would later say he disagreed with the direction of OpenAI and that the company wasn't nearly as open as its name would suggest. That last criticism came after OpenAI created a for-profit company in twenty nineteen; Musk actually leveled that lack-of-openness critique at OpenAI around twenty twenty. Musk also was founding his own AI research organization and would occasionally throw shade at OpenAI and Sam Altman. And I am not an Elon Musk fan. Most of y'all know this. I'm not a huge fan of Elon Musk. However, at least some of the criticisms he had toward OpenAI I actually agree with, or at least I think they were true, like the fact that OpenAI was becoming less open. I think that criticism has merit.

Speaker 1: Meanwhile, OpenAI was in a pretty tough position because, as it turns out, artificial intelligence research is expensive. You need access to a whole lot of compute power, and that's not cheap, and then you also need to have the money to attract the best talent, especially if your goal is to be the first to develop superhuman machine intelligence that is ethically sound. Like, if that's your goal and you need to outpace everybody else who's also working on developing superhuman machine intelligence, you've got to spend the big bucks to get the top of the class to come over to your organization. And a nonprofit organization is just not the fastest way to gather the huge amounts of money needed to fund research and operations. It would be way easier if you could get investors to pour money in, but investors want a return. Meanwhile, a nonprofit is a place where you donate money. You're not expecting a return on your donation. It's not an investment. This is what led to the decision to create a for-profit arm of OpenAI, which in turn would generate money that could, theoretically at least, be used by the nonprofit part of OpenAI to further the original organization's goals and mission. So the result was OpenAI LP, which OpenAI called a capped-profit company. So what the heck is a capped-profit company? That's actually a really good question, because I've found two somewhat conflicting answers from various sources. Like, they lay it out in two different ways that are similar but distinct. So I'm going to give you both of the ways that it has been explained in various sources, because I'm going to be honest with y'all, I'm not a business person. Despite the fact that I have hosted a business podcast in the past, I'm not really a business person, so I can't pretend like I have a firm grip on this. And also, OpenAI was kind of charting new territory when they announced this. But here are the two ways that it is frequently described.
373 00:23:42,400 --> 00:23:46,840 Speaker 1: So version number one means that open ai would accept 374 00:23:46,880 --> 00:23:50,880 Speaker 1: investments from venture capitalists and that would pay out returns 375 00:23:50,880 --> 00:23:54,600 Speaker 1: on those investments from profits, but only up to a 376 00:23:54,600 --> 00:23:58,240 Speaker 1: certain amount. So in open AI's case, the early backers, 377 00:23:58,280 --> 00:24:00,640 Speaker 1: the people who first poured money in to open ai, 378 00:24:01,280 --> 00:24:04,800 Speaker 1: would have a cap of one hundred times their initial investment. 379 00:24:05,359 --> 00:24:08,880 Speaker 1: So let's take a very simple scenario. Let's say that 380 00:24:08,920 --> 00:24:11,200 Speaker 1: some kids in your neighborhood want to start a lemonade 381 00:24:11,240 --> 00:24:15,959 Speaker 1: stand and you invest one dollar into their lemonade stand. Now, 382 00:24:16,040 --> 00:24:18,320 Speaker 1: let's say the kids running the stand turn out to 383 00:24:18,320 --> 00:24:23,160 Speaker 1: be business geniuses and your dollar investment helps lead that 384 00:24:23,280 --> 00:24:29,600 Speaker 1: stand into making tens of thousands of dollars in profits, Like, 385 00:24:29,680 --> 00:24:32,560 Speaker 1: even after the expenses, these kids are raking in tens 386 00:24:32,560 --> 00:24:37,520 Speaker 1: of thousands of dollars. However, when you invested, you did 387 00:24:37,560 --> 00:24:41,359 Speaker 1: so knowing there was a one hundred time cap on returns, 388 00:24:41,760 --> 00:24:43,560 Speaker 1: So that means the most you're ever going to get 389 00:24:43,600 --> 00:24:46,440 Speaker 1: from the stand is one hundred dollars. It's a one 390 00:24:46,480 --> 00:24:50,240 Speaker 1: hundred times return on your investment. Meanwhile, those snot nosed 391 00:24:50,280 --> 00:24:52,560 Speaker 1: kids who never could have made the stand without your 392 00:24:52,640 --> 00:24:56,800 Speaker 1: dollar are pocketing thousands of bucks and they're franchising across 393 00:24:56,840 --> 00:25:01,000 Speaker 1: the town, those rotten kids. Anyway, that's one version of 394 00:25:01,040 --> 00:25:05,400 Speaker 1: how the capped profit structure work. It works, investors can 395 00:25:05,400 --> 00:25:07,680 Speaker 1: make a return, but only up to a certain amount, 396 00:25:08,359 --> 00:25:11,080 Speaker 1: the early backers being one hundred times whatever they put in, 397 00:25:11,480 --> 00:25:13,920 Speaker 1: and that means if they put in ten million dollars, 398 00:25:14,240 --> 00:25:16,920 Speaker 1: they could potentially make as much as a billion dollars 399 00:25:16,960 --> 00:25:20,919 Speaker 1: in returns if open ai profited that much. So you 400 00:25:20,960 --> 00:25:25,600 Speaker 1: know it does add up. However, there is a second 401 00:25:25,680 --> 00:25:29,840 Speaker 1: explanation for capped profit that, like I said, is slightly different. 402 00:25:30,200 --> 00:25:33,760 Speaker 1: So in this version, investors would pour money into open 403 00:25:33,800 --> 00:25:37,960 Speaker 1: ai and open ai would hold back on distributing any 404 00:25:38,040 --> 00:25:42,880 Speaker 1: returns on profits until those profits reached at least one 405 00:25:42,960 --> 00:25:47,120 Speaker 1: hundred times the investments that had been made. 
So using 406 00:25:47,119 --> 00:25:50,840 Speaker 1: our limonade stand example, you've donated one dollar to the 407 00:25:50,840 --> 00:25:54,199 Speaker 1: limonade stand business, you would not see a return on 408 00:25:54,240 --> 00:25:58,280 Speaker 1: that investment until the liminade stand made at least one 409 00:25:58,320 --> 00:26:02,440 Speaker 1: hundred dollars in profit. At that point you could start 410 00:26:02,440 --> 00:26:06,240 Speaker 1: to receive returns. And a few explanations kind of combine 411 00:26:06,880 --> 00:26:10,680 Speaker 1: the first version I mentioned in this version, and frankly, 412 00:26:10,800 --> 00:26:13,480 Speaker 1: just to be transparent, this kind of confuses me. So 413 00:26:13,800 --> 00:26:18,639 Speaker 1: for example, Time at time dot Com uses the second 414 00:26:18,720 --> 00:26:22,520 Speaker 1: explanation right that you don't get any returns until the 415 00:26:22,520 --> 00:26:26,040 Speaker 1: profits reach one hundred times whatever your investment level was, 416 00:26:26,600 --> 00:26:30,840 Speaker 1: but then includes the phrase quote anything above that being 417 00:26:30,880 --> 00:26:33,960 Speaker 1: the one hundred times profit would be donated back to 418 00:26:34,040 --> 00:26:37,320 Speaker 1: the nonprofit. So if that's the case, it means you 419 00:26:37,320 --> 00:26:39,840 Speaker 1: wouldn't get a return until the profits hit that one 420 00:26:39,960 --> 00:26:44,080 Speaker 1: hundred times your investment, and then anything over one hundred 421 00:26:44,080 --> 00:26:50,240 Speaker 1: times your investment would be going toward the investment into 422 00:26:50,280 --> 00:26:53,640 Speaker 1: the nonprofit or a donation into the nonprofit, which means 423 00:26:53,640 --> 00:26:56,800 Speaker 1: I guess you would be limited to one hundred times. 424 00:26:57,160 --> 00:26:59,600 Speaker 1: I don't know, like maybe it's a combination of these two, 425 00:26:59,680 --> 00:27:03,919 Speaker 1: but it's just been poorly reported in various places. It 426 00:27:04,000 --> 00:27:06,399 Speaker 1: just it seems a little confusing to me, and it 427 00:27:06,440 --> 00:27:09,480 Speaker 1: also seems like it'd be confusing from an investor standpoint 428 00:27:09,560 --> 00:27:12,159 Speaker 1: of whether or not it would even make sense to 429 00:27:12,440 --> 00:27:15,960 Speaker 1: pour money into this. I think a lot of reporting 430 00:27:16,000 --> 00:27:19,480 Speaker 1: around the cap profit nature is just incomplete, and that's 431 00:27:19,520 --> 00:27:22,560 Speaker 1: the problem that there were just there's just a lack 432 00:27:22,640 --> 00:27:26,520 Speaker 1: of good explanations of this. And also, I mean, I'm dense, 433 00:27:26,680 --> 00:27:29,520 Speaker 1: so that's the other part of the problem. But anyway, 434 00:27:30,800 --> 00:27:34,880 Speaker 1: however you frame the context of a capped profit company, 435 00:27:35,520 --> 00:27:38,440 Speaker 1: the structure would give open ai the chance to court 436 00:27:38,520 --> 00:27:42,399 Speaker 1: investors and to hold a whole bunch of money that 437 00:27:42,440 --> 00:27:46,040 Speaker 1: they could then pour into research and recruiting. 
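To make the two readings a bit more concrete, here is a rough sketch in Python using the lemonade stand numbers from above. This is purely illustrative of the two interpretations as described in this episode, not OpenAI's actual legal terms; the dollar figures and function names are made up.

```python
# Two possible readings of a "capped profit" payout, using the lemonade stand
# numbers discussed above. Illustrative only; not OpenAI's actual legal terms.

CAP_MULTIPLE = 100  # the 100x cap discussed above

def capped_return(investment: float, profit_share: float) -> float:
    """Reading 1: the investor is paid out of profits as they come in,
    but never receives more than CAP_MULTIPLE times the original investment."""
    return min(profit_share, CAP_MULTIPLE * investment)

def deferred_return(investment: float, profit_share: float) -> float:
    """One way to read version 2: the investor sees nothing until profits reach
    CAP_MULTIPLE times the investment, and anything beyond that threshold is
    donated back to the nonprofit, so the payout is still capped."""
    threshold = CAP_MULTIPLE * investment
    return threshold if profit_share >= threshold else 0.0

investment = 1.0  # your one-dollar stake in the lemonade stand
for profit_share in (50.0, 100.0, 10_000.0):
    print(profit_share, capped_return(investment, profit_share),
          deferred_return(investment, profit_share))
# Either way, the most the investor sees here is $100; the readings differ
# mainly in when (and whether) smaller amounts get paid out along the way.
```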
Speaker 1: A one-hundred-times factor is pretty darn big. And arguably, you could say this was necessary, because while OpenAI had this noble mission, the truth is you still had massive companies like Amazon and Google and Meta. These companies have really deep pockets and a desire to invest in AI research, and if you didn't do something, there was just no way, no matter how noble the cause, you were going to keep up with these companies. So that was kind of the decision-making factor that drove OpenAI to launch this for-profit arm of the organization. And that didn't make everybody happy. In fact, it was controversial, to put it lightly. There were critics who were asking if it would even be possible for OpenAI to continue to pursue its mission of ethical AI development while also operating a commercial business that was profiting off of artificial intelligence development; that these two things could not be in alignment, and that would mean that ultimately OpenAI would not be able to achieve its mission. Complicating matters was that OpenAI began to back away from that whole "open" part of the philosophy, which again Elon Musk would criticize. In twenty twenty, OpenAI cited a concern that malicious developers might take the information, the research being shared from OpenAI, and use that information to develop nasty and harmful applications, or at least poorly designed ones. So now they're saying, you know, our knowledge is dangerous. So I know we said we were going to share it so that we could benefit humanity, but now we're scared that if we share it, people will misuse it, so we're not going to do that anymore. So again, like Musk was saying, it was no longer being as open as the name had implied. So yeah, his criticisms had weight. OpenAI really was moving away from an unassailable nonprofit status and also was getting less open in the process. Sure, the folks at OpenAI had explanations for why they were doing this, but it didn't change the fact that the OpenAI of twenty nineteen was fundamentally different from the organization that had started in twenty fifteen. All right, we're going to take another break. When we come back, we'll talk more about what happened in the following years that then led to the situation that we saw unfold this past weekend. But first, let's thank our sponsors.

Speaker 1: Okay, so we left off in around twenty nineteen. We're going to skip ahead a few years. So the battle for AI talent was a constant one in the tech space, but the world at large remained pretty much oblivious to OpenAI and the folks who were involved in that company. OpenAI just wasn't a name that your average person was aware of. But that would change in November twenty twenty two. That's when OpenAI introduced the chatbot called ChatGPT. This chatbot drew on a large language model, the GPT model, which had gone through a couple of iterations, and it would use that language model to generate responses to queries and input. The responses often seemed like a human being had actually written them. It didn't come across as your typical AI-generated text; it seemed more natural than that. It also seemed like it was a really smart person who wrote the response, someone who appeared to be an authority on whatever the subject was. Like, any subject you could think of, you could put into this thing, and ChatGPT would generate a response that seemed to be pretty definitive. And there were limitations on ChatGPT's expertise. OpenAI announced that ChatGPT really only had access to information leading up to September twenty twenty one, and if you asked it to explain anything that happened after September twenty twenty one, you'd be out of luck, because ChatGPT wouldn't have access to that information. But right out of the gate, ChatGPT seemed incredible.
Speaker 1: Now, over the following weeks after its introduction, we would start to see various critics and skeptics raise concerns about generative AI in general, really, and ChatGPT in particular. Now, some of these conversations had already started, because there were already text-to-image generative AI tools out there that had prompted some concern. ChatGPT created new discussions about how generative AI could create misinformation, engage in plagiarism, slander someone, or just produce the wrong response due to something that the AI field calls hallucinations. Sometimes they call it confabulations instead. So this is when an AI model fabricates an answer for whatever reason. Like, one reason an AI might just make something up is that it doesn't have access to relevant information that relates to the query. So instead the chatbot produces an answer that, from a linguistic perspective, is statistically plausible. In other words, it's creating sentences that are linguistically correct but factually incorrect, because it doesn't know the difference, and it's just trying to provide a response to the question that was asked of it. Now, this meant that sometimes you could ask ChatGPT to solve a problem for you, and the response you would get would sound authoritative and sound like it's correct, but in fact it was entirely wrong. And following these criticisms came concerns from lawmakers, who began to ask the very same questions that OpenAI was intended to address when folks first got together back in twenty fifteen to create it in the first place. Now, as we all know, the law trails behind technological development, sometimes by years. It takes time to make laws, and then it takes time to approve them and to pass them into law. If you rush, you're likely to make problems worse, or at the very least, you're likely to complicate matters so that it becomes very difficult to comply with the laws you've written.

Speaker 1: So Altman, who had already made his philosophy around regulation known in that blog post from twenty fifteen, began to meet with various officials all around the world. Now, the idea was that Sam Altman would help legislators understand the potential risks of artificial intelligence and, presumably, create the most responsible approach to regulations to ensure safety. But skeptics were worried that what Sam Altman was actually doing was just stacking the deck to favor OpenAI over other AI companies. You see, Altman had long held this position that it's really important for an ethical group of researchers to beat everybody else to the punch and develop that superhuman machine intelligence in order to prevent catastrophe; that if you don't do that, you're essentially sealing your own doom. And of course Altman viewed OpenAI as that ethical group. It is the group that's dedicated to creating ethical, safe AI, and Altman felt that regulations could help mitigate the risk of bad actors or inept creators making dangerous machine intelligence. And so, these skeptics were arguing, Altman's position was that regulations should really hold back everybody else and favor OpenAI, allowing it to move forward toward this goal of creating benign, protective machine intelligence. So, in other words, yeah, the AI field needs regulation, but more importantly, those regulations need to stop everyone other than OpenAI. That was how the skeptics saw Sam Altman's position as he met with all these different leaders. And in fact we saw proposals for putting AI research on hold for half a year. Elon Musk argued for this. Now, the skeptics came out again and said, well, sure, you can see the need to kind of pump the brakes so that we make sure all this work in AI isn't going to cause enormous harm in the future, but of course Musk is arguing for this because he wants to create his own AI research division.
And if you force 570 00:36:42,080 --> 00:36:44,880 Speaker 1: the industry as a whole to hold off for six 571 00:36:45,000 --> 00:36:49,000 Speaker 1: months on moving forward, it would give Musk the opportunity 572 00:36:49,080 --> 00:36:52,279 Speaker 1: to start to build out the foundations for his own 573 00:36:52,320 --> 00:36:57,160 Speaker 1: AI division while not losing more ground to competitors like 574 00:36:57,239 --> 00:37:02,200 Speaker 1: open Ai. It's a cutthroat world out in that AI field, y'all. Now. 575 00:37:02,200 --> 00:37:06,360 Speaker 1: In January of this year, in twenty twenty three, news 576 00:37:06,400 --> 00:37:10,120 Speaker 1: broke that Microsoft was investing around ten billion, with a 577 00:37:10,200 --> 00:37:15,239 Speaker 1: B, dollars in open Ai. Microsoft had already invested billions 578 00:37:15,280 --> 00:37:18,120 Speaker 1: in open Ai over the previous years, in twenty nineteen 579 00:37:18,120 --> 00:37:21,359 Speaker 1: and twenty twenty one, specifically, so this was seen as 580 00:37:21,400 --> 00:37:25,800 Speaker 1: Microsoft's effort to catch up to rivals like Google and Amazon, 581 00:37:25,920 --> 00:37:29,800 Speaker 1: which had already been spending their own billions in AI research. 582 00:37:30,520 --> 00:37:33,520 Speaker 1: The relationship between Microsoft and open Ai would manifest in 583 00:37:33,600 --> 00:37:38,120 Speaker 1: lots of different ways, including Microsoft incorporating chat GPT 584 00:37:38,440 --> 00:37:42,640 Speaker 1: into its search feature in Bing. For a long time, 585 00:37:42,680 --> 00:37:47,719 Speaker 1: Microsoft has been pushing Bing and Edge toward becoming more 586 00:37:47,760 --> 00:37:52,000 Speaker 1: important in the market, but has come up time and 587 00:37:52,000 --> 00:37:56,120 Speaker 1: again against the brick wall that is Google. In August, 588 00:37:56,480 --> 00:37:59,760 Speaker 1: open Ai announced an enterprise version of chat GPT, 589 00:38:00,320 --> 00:38:04,520 Speaker 1: and then in September, open Ai allowed the chatbot to 590 00:38:04,680 --> 00:38:07,279 Speaker 1: access information on the Internet for the first time. So 591 00:38:07,760 --> 00:38:11,799 Speaker 1: now that restriction where it could only access information up 592 00:38:11,880 --> 00:38:15,480 Speaker 1: to September twenty twenty one had been lifted. Now it 593 00:38:15,520 --> 00:38:19,400 Speaker 1: could access real time information around the world. On the 594 00:38:19,440 --> 00:38:22,959 Speaker 1: political side, lawmakers around the world, particularly in the United 595 00:38:23,000 --> 00:38:26,160 Speaker 1: States and in the European Union, began to grow more 596 00:38:26,200 --> 00:38:30,040 Speaker 1: concerned about AI and its possible uses and the risks 597 00:38:30,080 --> 00:38:33,839 Speaker 1: associated with it. So more pressure was building on the 598 00:38:34,040 --> 00:38:37,760 Speaker 1: artificial intelligence discipline in general and open ai in particular, 599 00:38:37,760 --> 00:38:40,720 Speaker 1: because open ai was seen as sort of the leading 600 00:38:40,800 --> 00:38:46,600 Speaker 1: authority in artificial intelligence.
Chat GPT had really captured a 601 00:38:46,640 --> 00:38:49,640 Speaker 1: lot of interest around the world, and at the same 602 00:38:49,680 --> 00:38:53,719 Speaker 1: time you had some people within open ai who were 603 00:38:53,760 --> 00:38:57,719 Speaker 1: really clinging onto the ideals of the original nonprofit organization 604 00:38:57,920 --> 00:39:02,160 Speaker 1: and who had growing concerns about where open ai was 605 00:39:02,239 --> 00:39:05,520 Speaker 1: headed because of the for profit arm of the company. 606 00:39:05,960 --> 00:39:09,080 Speaker 1: So similar to Elon Musk, there were people, high level 607 00:39:09,120 --> 00:39:13,480 Speaker 1: people in open ai who were starting to feel uncomfortable 608 00:39:13,800 --> 00:39:16,600 Speaker 1: with where the organization was going. Now, just a couple 609 00:39:16,640 --> 00:39:19,800 Speaker 1: of weeks ago, open ai held its first developer conference. 610 00:39:20,200 --> 00:39:23,920 Speaker 1: Sam Altman took the stage on November sixth, so not 611 00:39:24,080 --> 00:39:27,400 Speaker 1: long ago, and he took the stage as CEO of 612 00:39:27,400 --> 00:39:30,560 Speaker 1: open Ai, and he listed off some pretty incredible statistics 613 00:39:30,880 --> 00:39:33,399 Speaker 1: like the fact that open ai can count more than 614 00:39:33,560 --> 00:39:40,640 Speaker 1: ninety percent of Fortune five hundred companies as customers. That's incredible. 615 00:39:40,680 --> 00:39:44,880 Speaker 1: It shows how influential open ai is in this field. 616 00:39:45,400 --> 00:39:49,240 Speaker 1: Open AI's partnership with Microsoft also played a huge part 617 00:39:49,680 --> 00:39:54,319 Speaker 1: in Sam Altman's presentation. But again, behind the scenes, things 618 00:39:54,360 --> 00:39:58,200 Speaker 1: were far from hunky dory. You had open Ai doing 619 00:39:58,280 --> 00:40:01,680 Speaker 1: gangbusters on a business level, but again some of the 620 00:40:01,719 --> 00:40:04,440 Speaker 1: scientists who were part of the board of directors were 621 00:40:04,440 --> 00:40:07,920 Speaker 1: growing increasingly concerned that the company was guilty of the 622 00:40:08,080 --> 00:40:11,520 Speaker 1: very behaviors that open Ai was meant to head off. 623 00:40:12,120 --> 00:40:15,840 Speaker 1: That open ai was developing and deploying tools without putting 624 00:40:15,840 --> 00:40:20,560 Speaker 1: in appropriate safeguards or considering the consequences of unleashing these 625 00:40:20,600 --> 00:40:25,680 Speaker 1: tools, that open ai had become more about monetizing technologies 626 00:40:25,719 --> 00:40:30,600 Speaker 1: and innovation and less about ethical development of artificial intelligence. 627 00:40:31,320 --> 00:40:33,960 Speaker 1: There were also some personal tensions that were growing between 628 00:40:34,000 --> 00:40:39,239 Speaker 1: Altman and other members of the board, such as Ilya Sutskever. So, 629 00:40:39,360 --> 00:40:43,960 Speaker 1: according to Time, Altman reduced Ilya's role in the company, 630 00:40:44,440 --> 00:40:48,719 Speaker 1: while Ilya worried that Altman was launching side projects that 631 00:40:49,040 --> 00:40:52,719 Speaker 1: would benefit from open AI's work but also not be 632 00:40:52,880 --> 00:40:55,920 Speaker 1: accountable to open ai.
So, in other words, Ilya was 633 00:40:55,920 --> 00:40:59,600 Speaker 1: worried about this sort of conflict of interest, that Altman 634 00:40:59,680 --> 00:41:03,280 Speaker 1: was going to end up pursuing some developments of artificial 635 00:41:03,280 --> 00:41:07,000 Speaker 1: intelligence that were not governed by the board of open 636 00:41:07,040 --> 00:41:12,719 Speaker 1: ai and thus not constrained by these ethical concerns. So 637 00:41:13,239 --> 00:41:16,960 Speaker 1: this really boiled over. During the developer conference, Altman made 638 00:41:16,960 --> 00:41:21,280 Speaker 1: several announcements that Ilya reportedly objected to, including the unveiling 639 00:41:21,320 --> 00:41:25,040 Speaker 1: of a customizable version of chat GPT that, in theory, 640 00:41:25,520 --> 00:41:28,600 Speaker 1: could run autonomously once it was told what tasks it 641 00:41:28,680 --> 00:41:31,480 Speaker 1: was supposed to handle. So the critics on the board 642 00:41:32,080 --> 00:41:36,200 Speaker 1: outnumbered Altman's supporters. You had essentially two camps. You had 643 00:41:36,200 --> 00:41:38,600 Speaker 1: the people who thought Altman was in the right, including 644 00:41:38,600 --> 00:41:42,239 Speaker 1: Altman and Brockman, and then you had other members who 645 00:41:42,280 --> 00:41:46,799 Speaker 1: were more concerned. And last Friday, that's when the board 646 00:41:46,880 --> 00:41:50,000 Speaker 1: decided it was time to fire Sam Altman. They saw 647 00:41:50,040 --> 00:41:54,640 Speaker 1: Altman as being too reckless, not nearly cautious enough, and 648 00:41:54,680 --> 00:41:58,200 Speaker 1: despite open AI's market performance, which was incredible, they felt 649 00:41:58,200 --> 00:42:00,400 Speaker 1: the company was moving in the wrong direction and that 650 00:42:00,480 --> 00:42:04,000 Speaker 1: it needed new leadership as a result, so they decided 651 00:42:04,000 --> 00:42:07,520 Speaker 1: they had to fire Sam Altman as CEO. Now, reportedly, 652 00:42:07,960 --> 00:42:11,200 Speaker 1: Altman learned of this fate in a Zoom meeting. Ilya 653 00:42:11,320 --> 00:42:12,520 Speaker 1: was the one who told him that he had to 654 00:42:12,520 --> 00:42:15,279 Speaker 1: go to the Zoom meeting, and it happened shortly before 655 00:42:15,320 --> 00:42:19,880 Speaker 1: the board announced their decision publicly. Greg Brockman, co-founder 656 00:42:20,080 --> 00:42:23,720 Speaker 1: and president of open Ai, was not told of this meeting, 657 00:42:24,000 --> 00:42:26,560 Speaker 1: and in fact, he found out about Sam Altman getting 658 00:42:26,600 --> 00:42:30,840 Speaker 1: fired just shortly before open Ai released the news to 659 00:42:30,880 --> 00:42:36,320 Speaker 1: the public. Similarly, Satya Nadella, the CEO of Microsoft, also 660 00:42:36,719 --> 00:42:40,080 Speaker 1: found out essentially when the news got released to the public, 661 00:42:40,840 --> 00:42:46,120 Speaker 1: and this set off a metaphorical explosion in the tech world.
First, 662 00:42:46,480 --> 00:42:50,320 Speaker 1: open AI's board didn't exactly have a real good transition 663 00:42:50,520 --> 00:42:54,439 Speaker 1: plan in place to handle this, nor did it seem 664 00:42:54,480 --> 00:42:58,000 Speaker 1: to really comprehend the extent of the fallout this decision 665 00:42:58,040 --> 00:43:01,279 Speaker 1: would have, particularly in the way they did it, where 666 00:43:01,320 --> 00:43:05,640 Speaker 1: they did not consult with Microsoft, a partner that was 667 00:43:05,719 --> 00:43:09,520 Speaker 1: going to invest ten billion dollars into the company, or 668 00:43:09,960 --> 00:43:14,240 Speaker 1: talk it over with the other executives before making the decision. 669 00:43:14,760 --> 00:43:18,360 Speaker 1: So even the folks who agreed that Altman was perhaps 670 00:43:18,440 --> 00:43:22,239 Speaker 1: not being cautious enough felt that the board's move was 671 00:43:22,320 --> 00:43:26,160 Speaker 1: poorly thought out and even more poorly executed. It's really 672 00:43:26,160 --> 00:43:29,279 Speaker 1: hard to argue against that. Like, even if you feel 673 00:43:29,320 --> 00:43:32,440 Speaker 1: that Sam Altman was absolutely leading open Ai in the 674 00:43:32,480 --> 00:43:35,960 Speaker 1: wrong direction, you can also say that 675 00:43:36,040 --> 00:43:42,319 Speaker 1: the way the board handled this ultimately was disastrous. So 676 00:43:42,480 --> 00:43:46,720 Speaker 1: Greg Brockman announced that he was leaving open Ai once 677 00:43:47,040 --> 00:43:49,640 Speaker 1: the news went public that Sam Altman had been fired, 678 00:43:50,360 --> 00:43:53,759 Speaker 1: so open Ai would see both its CEO and its 679 00:43:53,800 --> 00:43:58,400 Speaker 1: president leave the company in one day. Some members of 680 00:43:58,440 --> 00:44:01,960 Speaker 1: the board, like Ilya, expressed regret for having supported the 681 00:44:01,960 --> 00:44:05,680 Speaker 1: measure to fire Altman, like they said later, I 682 00:44:05,760 --> 00:44:09,759 Speaker 1: kind of wish we hadn't done that. Ilya Sutskever would go 683 00:44:09,840 --> 00:44:12,960 Speaker 1: on to sign an open letter saying as much, and 684 00:44:13,040 --> 00:44:15,600 Speaker 1: even threatened to leave open Ai along with more 685 00:44:15,600 --> 00:44:19,759 Speaker 1: than five hundred other staff members over this decision. Just 686 00:44:19,760 --> 00:44:22,920 Speaker 1: a big old whoopsie, right? So the board found itself 687 00:44:23,040 --> 00:44:27,600 Speaker 1: in extremely hot water. They had done the classy thing 688 00:44:27,680 --> 00:44:31,320 Speaker 1: of waiting until a Friday to announce a massive decision. 689 00:44:32,200 --> 00:44:34,480 Speaker 1: My guess is this was probably in an effort to 690 00:44:34,560 --> 00:44:36,719 Speaker 1: take at least some of the sting out of the 691 00:44:36,760 --> 00:44:39,600 Speaker 1: news cycle, the idea being like, well, if it's 692 00:44:39,600 --> 00:44:42,000 Speaker 1: on a Friday afternoon, no one's going to pay attention 693 00:44:42,080 --> 00:44:44,239 Speaker 1: because we're going into the weekend. By the time it 694 00:44:44,280 --> 00:44:46,640 Speaker 1: comes around to Monday, things are going to cool off 695 00:44:46,640 --> 00:44:49,120 Speaker 1: a bit.
Plus here in the United States, we're going 696 00:44:49,160 --> 00:44:52,840 Speaker 1: into a holiday week with Thanksgiving, so there won't be 697 00:44:52,880 --> 00:44:54,839 Speaker 1: a whole lot of opportunity to bring a whole lot 698 00:44:54,880 --> 00:44:56,680 Speaker 1: of attention to this, and we'll be able to get 699 00:44:56,680 --> 00:45:01,880 Speaker 1: away relatively unscathed. That is not how it turned out, however. Instead, 700 00:45:02,520 --> 00:45:07,359 Speaker 1: the news media went bonkers with this decision, and how 701 00:45:07,400 --> 00:45:11,160 Speaker 1: could you not? Chat GPT and open AI had 702 00:45:11,200 --> 00:45:13,799 Speaker 1: been the center of so many headlines throughout the whole year. 703 00:45:13,880 --> 00:45:16,640 Speaker 1: Of course, this was going to get a lot of attention, 704 00:45:17,440 --> 00:45:21,000 Speaker 1: and so while they were hoping that they could get 705 00:45:21,000 --> 00:45:25,520 Speaker 1: away with this without it being too painful, investors immediately 706 00:45:25,560 --> 00:45:28,840 Speaker 1: started to freak out about this change. A bunch of 707 00:45:28,840 --> 00:45:31,120 Speaker 1: them essentially indicated that they would pull out of open 708 00:45:31,200 --> 00:45:34,479 Speaker 1: ai and they would back whatever Altman chose to do next. 709 00:45:34,480 --> 00:45:38,200 Speaker 1: So if Altman launched his own competing artificial intelligence company, 710 00:45:38,719 --> 00:45:41,640 Speaker 1: they were going to back Altman, not open Ai. There 711 00:45:41,640 --> 00:45:44,760 Speaker 1: were hints that Microsoft could potentially even do the same, 712 00:45:44,840 --> 00:45:49,760 Speaker 1: and that's ten billion dollars. Plus, you had the general 713 00:45:49,840 --> 00:45:53,279 Speaker 1: staff who felt that this was the wrong move, and 714 00:45:53,320 --> 00:45:56,200 Speaker 1: they felt that this was a terrible mistake, and they 715 00:45:56,200 --> 00:46:00,799 Speaker 1: were threatening a mass walkout of the company. It was 716 00:46:00,840 --> 00:46:04,640 Speaker 1: pretty much the worst reaction you could expect from a 717 00:46:04,640 --> 00:46:07,640 Speaker 1: big announcement. So it did not take very long for 718 00:46:08,040 --> 00:46:10,799 Speaker 1: news to break that the board was trying hard to 719 00:46:10,960 --> 00:46:13,960 Speaker 1: take back what it had done and to try and 720 00:46:14,000 --> 00:46:18,440 Speaker 1: convince Altman and Brockman to return to the company, but 721 00:46:18,600 --> 00:46:22,200 Speaker 1: by then the damage had been done. Altman was not 722 00:46:22,360 --> 00:46:26,040 Speaker 1: interested in coming back. Specifically, he said unless the board 723 00:46:26,160 --> 00:46:29,719 Speaker 1: stepped down, he would not come back, and that would 724 00:46:29,760 --> 00:46:34,160 Speaker 1: become a real sticking point. Mira Murati, the chief technology 725 00:46:34,200 --> 00:46:37,960 Speaker 1: officer for open Ai, would serve as interim CEO for 726 00:46:38,000 --> 00:46:41,560 Speaker 1: about two whole days. Murati reportedly was the person who 727 00:46:41,560 --> 00:46:43,719 Speaker 1: actually reached out to Altman to try and convince him 728 00:46:43,719 --> 00:46:48,160 Speaker 1: to come back to open Ai.
But while Altman did 729 00:46:48,200 --> 00:46:51,759 Speaker 1: return to open AI's headquarters to negotiate, and he said 730 00:46:51,760 --> 00:46:53,480 Speaker 1: it was the first and only time he would ever 731 00:46:53,520 --> 00:46:58,640 Speaker 1: have a guest badge to open Ai, those negotiations didn't 732 00:46:58,920 --> 00:47:02,799 Speaker 1: really go very far, so the board decided on a 733 00:47:02,840 --> 00:47:06,200 Speaker 1: new interim CEO, perhaps because of a perception that Murati 734 00:47:06,360 --> 00:47:09,120 Speaker 1: was maybe a bit too pro-Altman and they needed 735 00:47:09,120 --> 00:47:11,480 Speaker 1: to get someone who would be more in their pocket. 736 00:47:12,440 --> 00:47:16,879 Speaker 1: They chose the former CEO of Twitch, Emmett Shear, who 737 00:47:16,880 --> 00:47:21,400 Speaker 1: doesn't have any experience with artificial intelligence. By this time, 738 00:47:21,719 --> 00:47:25,080 Speaker 1: the board of directors consisted of just four people, who 739 00:47:25,080 --> 00:47:28,600 Speaker 1: had been pressured to step down but refused to do so. 740 00:47:29,000 --> 00:47:31,840 Speaker 1: Thus Altman did not come back to the company. Altman 741 00:47:31,920 --> 00:47:35,000 Speaker 1: and Brockman, meanwhile, weren't exactly on the job market for 742 00:47:35,120 --> 00:47:39,160 Speaker 1: very long, because Microsoft swiftly hired both of them to 743 00:47:39,239 --> 00:47:43,279 Speaker 1: head up a new advanced AI research team within Microsoft. 744 00:47:43,680 --> 00:47:47,440 Speaker 1: Nadella also said Microsoft remains committed to supporting open AI. 745 00:47:48,000 --> 00:47:51,359 Speaker 1: Most of those ten billion dollars have 746 00:47:51,480 --> 00:47:54,399 Speaker 1: not made their way to open Ai yet; open Ai 747 00:47:54,560 --> 00:47:58,320 Speaker 1: has received just a fraction of that ten billion dollars, 748 00:47:58,760 --> 00:48:01,799 Speaker 1: but the hint is that that money will continue to 749 00:48:01,920 --> 00:48:05,080 Speaker 1: go to open Ai, that Microsoft is not backing out of 750 00:48:05,120 --> 00:48:08,240 Speaker 1: that agreement, even as Altman is going to end up working 751 00:48:08,280 --> 00:48:11,560 Speaker 1: directly with Microsoft and will have the title of CEO 752 00:48:11,760 --> 00:48:16,400 Speaker 1: for whatever this Advanced AI part of Microsoft ends up 753 00:48:16,400 --> 00:48:21,120 Speaker 1: being called. A few other prominent executives and scientists from 754 00:48:21,200 --> 00:48:24,960 Speaker 1: open Ai are apparently moving over to Microsoft as well, 755 00:48:25,080 --> 00:48:31,680 Speaker 1: so there are already other defections from open Ai to Microsoft. Meanwhile, 756 00:48:31,719 --> 00:48:34,520 Speaker 1: back at open Ai, a lot of folks who work 757 00:48:34,600 --> 00:48:37,720 Speaker 1: within the company have been posting their support for Altman 758 00:48:38,080 --> 00:48:41,640 Speaker 1: on platforms like X, so there's a concern that there's 759 00:48:41,680 --> 00:48:45,160 Speaker 1: going to be a mass walkout and resignation following this move.
760 00:48:45,440 --> 00:48:49,359 Speaker 1: Certainly other companies like Microsoft, Google, Amazon, and Meta would 761 00:48:49,360 --> 00:48:51,440 Speaker 1: be eager to get hold of some of that talent, 762 00:48:52,080 --> 00:48:54,920 Speaker 1: and it's entirely possible that the board of open Ai, 763 00:48:55,000 --> 00:48:57,520 Speaker 1: in a move made out of concern for the company's 764 00:48:57,520 --> 00:49:02,600 Speaker 1: safety and humanity's safety, may have actually doomed the organization entirely. 765 00:49:02,800 --> 00:49:05,279 Speaker 1: I'm not sure hiring a former CEO of Twitch is 766 00:49:05,320 --> 00:49:08,600 Speaker 1: going to be enough to prevent disaster. Now, all that 767 00:49:08,640 --> 00:49:12,080 Speaker 1: being said, open ai is in an incredible position. A 768 00:49:12,120 --> 00:49:16,680 Speaker 1: recent valuation placed the company at around eighty six billion dollars. 769 00:49:17,280 --> 00:49:20,960 Speaker 1: Microsoft says it is committed to this ongoing relationship with 770 00:49:21,040 --> 00:49:25,600 Speaker 1: open Ai. Chat GPT is still an incredibly important tool 771 00:49:25,719 --> 00:49:28,680 Speaker 1: in the tech space, particularly with the introduction of the 772 00:49:28,840 --> 00:49:33,080 Speaker 1: enterprise product of chat GPT. So could open ai just 773 00:49:33,120 --> 00:49:37,920 Speaker 1: be too big to fail? Maybe? I think this monumental 774 00:49:38,640 --> 00:49:43,080 Speaker 1: misstep will test that hypothesis. I do not know how 775 00:49:43,080 --> 00:49:45,800 Speaker 1: it's all going to shake out. From a business perspective, 776 00:49:46,000 --> 00:49:48,640 Speaker 1: I would say open ai is in a really strong position. 777 00:49:49,160 --> 00:49:52,080 Speaker 1: But then if the organization suddenly sees a mass defection 778 00:49:52,280 --> 00:49:55,280 Speaker 1: from its researchers and staff, that could very well change. 779 00:49:55,560 --> 00:49:57,279 Speaker 1: So we'll have to see. And of course, now we're 780 00:49:57,280 --> 00:49:59,759 Speaker 1: on a holiday week, so it might be another week 781 00:49:59,760 --> 00:50:03,040 Speaker 1: before we start getting answers. But yeah, that's kind 782 00:50:03,080 --> 00:50:06,160 Speaker 1: of an update on what went down this past weekend 783 00:50:06,160 --> 00:50:09,279 Speaker 1: and why it happened, like all those different factors that 784 00:50:09,400 --> 00:50:14,040 Speaker 1: built up to this big explosion of activity. Now you 785 00:50:14,160 --> 00:50:16,960 Speaker 1: have a bit more background as to what was going on. 786 00:50:17,760 --> 00:50:20,239 Speaker 1: As to whose side I'm on, I don't know. I 787 00:50:20,280 --> 00:50:26,200 Speaker 1: do think that Altman's leadership was not always the best 788 00:50:26,239 --> 00:50:29,800 Speaker 1: as far as trying to achieve the goal of ethical AI. 789 00:50:30,440 --> 00:50:33,880 Speaker 1: I do think that it was almost like engaging 790 00:50:33,880 --> 00:50:36,200 Speaker 1: in a necessary evil kind of thing. But I'm not 791 00:50:36,200 --> 00:50:40,239 Speaker 1: sure that the evil is really necessary. But what do 792 00:50:40,320 --> 00:50:43,560 Speaker 1: I know.
I know that AI is very hard and 793 00:50:43,680 --> 00:50:45,799 Speaker 1: very expensive, and I don't know how you get the 794 00:50:45,840 --> 00:50:48,680 Speaker 1: money to do it the right way and still beat 795 00:50:48,680 --> 00:50:51,759 Speaker 1: out all the companies that don't have those restrictions on them. 796 00:50:52,200 --> 00:50:56,120 Speaker 1: So I don't know. I just know that it's a mess. 797 00:50:56,680 --> 00:50:58,920 Speaker 1: But now it's a mess we can put behind us 798 00:50:59,000 --> 00:51:02,160 Speaker 1: until we see what happens next. I hope you are 799 00:51:02,200 --> 00:51:06,040 Speaker 1: all well, and I'll talk to you again really soon. 800 00:51:12,400 --> 00:51:17,040 Speaker 1: Tech Stuff is an iHeartRadio production. For more podcasts from iHeartRadio, 801 00:51:17,360 --> 00:51:21,040 Speaker 1: visit the iHeartRadio app, Apple Podcasts, or wherever you listen 802 00:51:21,120 --> 00:51:22,200 Speaker 1: to your favorite shows.