1 00:00:04,400 --> 00:00:07,800 Speaker 1: Welcome to TechStuff, a production from iHeartRadio. 2 00:00:11,880 --> 00:00:14,960 Speaker 1: Hey there, and welcome to TechStuff. I'm your host 3 00:00:15,200 --> 00:00:18,640 Speaker 1: Jonathan Strickland. I'm an executive producer with iHeartRadio. 4 00:00:18,800 --> 00:00:21,960 Speaker 1: And how the tech are you? Well, since it's been 5 00:00:22,000 --> 00:00:24,320 Speaker 1: in the news quite a bit so far this year, 6 00:00:24,600 --> 00:00:28,920 Speaker 1: I thought today we would look into OpenAI, both 7 00:00:29,000 --> 00:00:35,120 Speaker 1: the for profit company and its parent not for profit organization. So, 8 00:00:35,240 --> 00:00:38,080 Speaker 1: for those of y'all who have managed to dodge all 9 00:00:38,159 --> 00:00:42,760 Speaker 1: the hubbub, OpenAI is the company behind ChatGPT. 10 00:00:43,320 --> 00:00:46,199 Speaker 1: That's the chatbot that's been making headlines for everything 11 00:00:46,280 --> 00:00:49,720 Speaker 1: from offending the musician Nick Cave of Nick Cave and 12 00:00:49,760 --> 00:00:53,600 Speaker 1: the Bad Seeds fame, to worrying teachers that their students 13 00:00:53,600 --> 00:00:55,720 Speaker 1: are just going to use a chatbot to cheat 14 00:00:55,760 --> 00:00:59,680 Speaker 1: on assignments rather than actually bother to learn something. But 15 00:00:59,760 --> 00:01:02,560 Speaker 1: what about the company that made this thing in the 16 00:01:02,600 --> 00:01:07,399 Speaker 1: first place? Well, the history of OpenAI dates back 17 00:01:07,440 --> 00:01:13,399 Speaker 1: to twenty fifteen, when a bunch of very wealthy tech entrepreneurs 18 00:01:13,400 --> 00:01:16,880 Speaker 1: got together and said, you know what, maybe we should 19 00:01:16,959 --> 00:01:21,480 Speaker 1: create an organization that aims to make helpful artificial intelligence 20 00:01:21,520 --> 00:01:26,760 Speaker 1: before someone opens Pandora's box and unleashes a malevolent, you know, 21 00:01:26,920 --> 00:01:31,560 Speaker 1: or at least uncaring, superintelligence upon us all, or 22 00:01:31,800 --> 00:01:36,440 Speaker 1: something to that effect. Essentially, the goal was to develop 23 00:01:36,520 --> 00:01:39,400 Speaker 1: AI and AI applications in a way that would be 24 00:01:39,400 --> 00:01:43,520 Speaker 1: beneficial to humanity and try to avoid all the scary 25 00:01:43,640 --> 00:01:49,480 Speaker 1: Skynet Terminator kind of stuff. But to talk about 26 00:01:49,520 --> 00:01:54,400 Speaker 1: this requires us to define some terms, like terms that 27 00:01:54,520 --> 00:01:57,400 Speaker 1: you might think are obvious on the face of it, 28 00:01:57,440 --> 00:02:00,440 Speaker 1: but I would argue are not. So the big one 29 00:02:00,520 --> 00:02:04,560 Speaker 1: here would be artificial intelligence. There are certain words and 30 00:02:04,560 --> 00:02:07,800 Speaker 1: phrases out in the world that have lots of different meanings, 31 00:02:08,080 --> 00:02:11,519 Speaker 1: and this can sometimes cause confusion and miscommunication. I would 32 00:02:11,639 --> 00:02:15,840 Speaker 1: argue artificial intelligence is a real doozy among these. You 33 00:02:15,840 --> 00:02:19,200 Speaker 1: hear about someone working in AI and you start immediately 34 00:02:19,240 --> 00:02:24,919 Speaker 1: getting preconceived ideas of what that means, and you're probably wrong.
Actually, 35 00:02:25,480 --> 00:02:27,680 Speaker 1: now that we're talking about it, even just the word 36 00:02:27,720 --> 00:02:31,840 Speaker 1: intelligence has some ambiguity to it. So what do we 37 00:02:31,919 --> 00:02:35,600 Speaker 1: mean when we say that something is intelligent? Well, let's 38 00:02:35,600 --> 00:02:39,079 Speaker 1: take a look at what some dictionaries say. So Webster 39 00:02:39,320 --> 00:02:43,760 Speaker 1: defines intelligence as the ability to learn or understand or 40 00:02:43,840 --> 00:02:47,399 Speaker 1: to deal with new or trying situations, or the ability 41 00:02:47,480 --> 00:02:51,360 Speaker 1: to apply knowledge to manipulate one's environment or to think 42 00:02:51,400 --> 00:02:56,440 Speaker 1: abstractly as measured by objective criteria such as tests. Thanks, 43 00:02:56,480 --> 00:03:02,160 Speaker 1: Webster. Oxford defines it as the ability to learn, understand, 44 00:03:02,240 --> 00:03:05,000 Speaker 1: and think in a logical way about things; the ability 45 00:03:05,080 --> 00:03:08,080 Speaker 1: to do this well. It's a little more succinct. But 46 00:03:08,160 --> 00:03:09,880 Speaker 1: then if we really want to boil it down, the 47 00:03:09,880 --> 00:03:14,919 Speaker 1: American Heritage Dictionary defines it as the ability to acquire, understand, 48 00:03:15,040 --> 00:03:18,840 Speaker 1: and use knowledge. That's what intelligence is, according to those. 49 00:03:19,840 --> 00:03:24,120 Speaker 1: Dr. Daeyeol Lee, and I apologize, Dr. Lee, for 50 00:03:24,400 --> 00:03:28,360 Speaker 1: butchering your name, is a professor of neuroscience and author 51 00:03:28,560 --> 00:03:32,120 Speaker 1: of Birth of Intelligence, and he defines intelligence as the 52 00:03:32,160 --> 00:03:36,560 Speaker 1: ability to solve complex problems or make decisions with outcomes 53 00:03:36,600 --> 00:03:41,280 Speaker 1: that benefit the actor. Dr. Lee also acknowledges that intelligence 54 00:03:41,320 --> 00:03:44,200 Speaker 1: is actually pretty hard to define, and that there are 55 00:03:44,240 --> 00:03:47,520 Speaker 1: many different definitions, which, you know, we've just seen. Like, 56 00:03:47,600 --> 00:03:52,120 Speaker 1: even though all the definitions I mentioned have significant overlap 57 00:03:52,200 --> 00:03:55,280 Speaker 1: between them and they all seem to be dancing around 58 00:03:55,320 --> 00:03:59,480 Speaker 1: the same kind of concept, you might feel like none 59 00:03:59,520 --> 00:04:02,320 Speaker 1: of them quite get it right. And that's where some 60 00:04:02,440 --> 00:04:06,800 Speaker 1: of these challenges come from: just defining intelligence, 61 00:04:07,040 --> 00:04:11,600 Speaker 1: before we even get to artificial intelligence, is hard. All right, well, 62 00:04:11,680 --> 00:04:15,640 Speaker 1: let's say that intelligence generally is the ability to 63 00:04:15,760 --> 00:04:18,640 Speaker 1: learn and to acquire knowledge and then to use that 64 00:04:18,720 --> 00:04:21,960 Speaker 1: knowledge in new situations. Let's just go with that 65 00:04:22,000 --> 00:04:23,680 Speaker 1: and say that, you know, it's got an element of 66 00:04:23,760 --> 00:04:26,560 Speaker 1: problem solving that goes with it, which I think is 67 00:04:26,600 --> 00:04:32,360 Speaker 1: pretty much implied.
So, artificial intelligence then. Well, artificial suggests 68 00:04:32,400 --> 00:04:35,119 Speaker 1: that it's something that's created by humans rather than found 69 00:04:35,160 --> 00:04:39,520 Speaker 1: in nature. Oxford Languages defines artificial intelligence as the theory 70 00:04:39,640 --> 00:04:43,320 Speaker 1: and development of computer systems able to perform tasks that 71 00:04:43,480 --> 00:04:49,039 Speaker 1: normally require human intelligence, such as visual perception, speech recognition, 72 00:04:49,400 --> 00:04:54,760 Speaker 1: decision making, and translation between languages. So that's a fairly 73 00:04:54,920 --> 00:04:58,479 Speaker 1: decent definition. Uh, but here's where we run into some 74 00:04:58,480 --> 00:05:03,000 Speaker 1: more ambiguity. When we talk about artificial intelligence, we're not 75 00:05:03,040 --> 00:05:06,880 Speaker 1: necessarily using the word intelligence to mean the exact same 76 00:05:06,920 --> 00:05:10,440 Speaker 1: thing when we apply it to a human context. You know, 77 00:05:10,480 --> 00:05:14,960 Speaker 1: a person working in artificial intelligence isn't necessarily trying to 78 00:05:15,000 --> 00:05:18,880 Speaker 1: make a machine think or appear to think like a 79 00:05:19,000 --> 00:05:24,240 Speaker 1: human does. In fact, they're probably not doing anything of 80 00:05:24,320 --> 00:05:26,479 Speaker 1: the sort. They might be working on something that, when 81 00:05:26,560 --> 00:05:30,640 Speaker 1: collected with the work of countless others, ends up contributing 82 00:05:30,680 --> 00:05:35,360 Speaker 1: to that kind of machine, but that's different. So AI 83 00:05:35,440 --> 00:05:40,880 Speaker 1: involves a lot of different disciplines and technologies. Facial recognition 84 00:05:41,600 --> 00:05:45,640 Speaker 1: is a type of AI. Speech recognition is a type 85 00:05:45,640 --> 00:05:49,840 Speaker 1: of AI. Text to speech is related to artificial intelligence. 86 00:05:50,320 --> 00:05:54,640 Speaker 1: Robotics shares a lot of features with AI, although you 87 00:05:54,680 --> 00:05:57,640 Speaker 1: could also have robots that are fully programmed to complete 88 00:05:57,640 --> 00:06:01,200 Speaker 1: precise routines, and in those cases they're just following a 89 00:06:01,240 --> 00:06:04,440 Speaker 1: list of instructions and there's no decision making component right there, 90 00:06:04,480 --> 00:06:08,680 Speaker 1: just literally following step one, step two, step three, step four, repeat. 91 00:06:09,360 --> 00:06:12,839 Speaker 1: So those kinds of robots aren't really in the artificial 92 00:06:12,880 --> 00:06:16,360 Speaker 1: intelligence realm, but there are other robots that are. Now, 93 00:06:16,400 --> 00:06:20,440 Speaker 1: frequently I find that the general public associates the concept 94 00:06:20,480 --> 00:06:23,960 Speaker 1: of artificial intelligence with a machine that appears to have 95 00:06:24,200 --> 00:06:29,440 Speaker 1: knowledge gathering and problem solving capabilities, usually paired with some 96 00:06:29,520 --> 00:06:32,880 Speaker 1: method to put solutions into action, so often in the 97 00:06:32,920 --> 00:06:35,960 Speaker 1: form of a robot or a computer system that's connected 98 00:06:36,000 --> 00:06:38,719 Speaker 1: to stuff that can actually get crap done.
I almost 99 00:06:38,760 --> 00:06:41,039 Speaker 1: said the other phrase, but this is a family show, 100 00:06:42,200 --> 00:06:45,039 Speaker 1: so they're thinking about what is often referred to as 101 00:06:45,240 --> 00:06:48,320 Speaker 1: strong AI. These are machines that have a form of 102 00:06:48,360 --> 00:06:54,080 Speaker 1: intelligence that is, to all practical purposes, indistinguishable from human intelligence. 103 00:06:54,720 --> 00:06:58,400 Speaker 1: Now that's not to say that it's processing information the 104 00:06:58,400 --> 00:07:02,359 Speaker 1: exact same way that we people process information, but that 105 00:07:02,440 --> 00:07:04,800 Speaker 1: the outcome is the same, that at the end of 106 00:07:04,800 --> 00:07:08,080 Speaker 1: the day, if the machine and the person were to 107 00:07:08,120 --> 00:07:12,000 Speaker 1: come to the same conclusion, it doesn't really matter what steps 108 00:07:12,080 --> 00:07:14,920 Speaker 1: in the middle were taken. Now, if such a thing 109 00:07:14,960 --> 00:07:18,680 Speaker 1: is possible, we're not there yet. We aren't at the 110 00:07:18,760 --> 00:07:21,520 Speaker 1: point where we have this. But the work done in 111 00:07:21,600 --> 00:07:24,760 Speaker 1: AI right now, which is really in the field of 112 00:07:24,840 --> 00:07:29,480 Speaker 1: weak AI, that is, artificial intelligence solutions designed for specific purposes, 113 00:07:30,280 --> 00:07:36,160 Speaker 1: is contributing toward the creation of strong AI. Now there's 114 00:07:36,160 --> 00:07:38,560 Speaker 1: another phrase for strong AI that we need to talk about, 115 00:07:38,640 --> 00:07:44,120 Speaker 1: which is artificial general intelligence, or AGI. And 116 00:07:44,160 --> 00:07:46,720 Speaker 1: I know there are a lot of initialisms, that's always 117 00:07:46,760 --> 00:07:49,480 Speaker 1: the case when we talk about tech. But AGI, 118 00:07:49,760 --> 00:07:53,120 Speaker 1: general intelligence, that kind of tells you, okay, this 119 00:07:53,240 --> 00:07:56,520 Speaker 1: is an AI that's meant to do lots of different stuff. Right, 120 00:07:56,560 --> 00:08:00,400 Speaker 1: it's not designed to do a specific task and just 121 00:08:00,480 --> 00:08:03,320 Speaker 1: get better and better and better at doing that task. 122 00:08:03,920 --> 00:08:08,400 Speaker 1: It's meant to handle lots of different things, maybe anything. 123 00:08:09,160 --> 00:08:11,440 Speaker 1: And it's just like if you take a human and 124 00:08:11,520 --> 00:08:14,560 Speaker 1: you have that human go into a situation they've never 125 00:08:14,600 --> 00:08:19,000 Speaker 1: experienced before, how do they cope? Well, the goal 126 00:08:19,080 --> 00:08:22,040 Speaker 1: is to create an artificial intelligence that would be able 127 00:08:22,120 --> 00:08:25,120 Speaker 1: to handle new situations in a similar way to the 128 00:08:25,160 --> 00:08:29,520 Speaker 1: way humans do. That's artificial general intelligence. Again, no 129 00:08:29,560 --> 00:08:32,280 Speaker 1: one has made one of these yet, but that would 130 00:08:32,360 --> 00:08:37,160 Speaker 1: become OpenAI's primary goal: to create an 131 00:08:37,240 --> 00:08:40,800 Speaker 1: AGI, to be the first to create an AGI. Now, 132 00:08:40,880 --> 00:08:44,240 Speaker 1: weak AI does not mean that the artificial intelligence is bad 133 00:08:44,280 --> 00:08:47,760 Speaker 1: at its job or is inferior in some way.
In fact, 134 00:08:47,760 --> 00:08:51,640 Speaker 1: weak AI might be much better at doing its specific 135 00:08:51,720 --> 00:08:56,160 Speaker 1: task than humans are at completing that specific task. It's 136 00:08:56,160 --> 00:08:58,400 Speaker 1: just that this is all the weak AI can do. 137 00:08:58,600 --> 00:09:02,520 Speaker 1: It can't do other things; it's operating under constraints. 138 00:09:02,520 --> 00:09:05,360 Speaker 1: So as an example, let's just think of something that's 139 00:09:05,400 --> 00:09:08,280 Speaker 1: really simple that you wouldn't even think of as being intelligent, 140 00:09:08,840 --> 00:09:13,360 Speaker 1: like a basic calculator, not even a scientific calculator, a 141 00:09:13,440 --> 00:09:17,400 Speaker 1: basic calculator like one that might be handed out by 142 00:09:17,559 --> 00:09:23,280 Speaker 1: a bank. And you can enter a pretty tough mathematical 143 00:09:23,360 --> 00:09:26,360 Speaker 1: problem into the calculator and it will provide a solution 144 00:09:26,400 --> 00:09:28,480 Speaker 1: in a fraction of the time it would take your 145 00:09:28,520 --> 00:09:31,680 Speaker 1: average human to do the same work. But that same 146 00:09:31,760 --> 00:09:34,480 Speaker 1: human could do other stuff, like maybe that human can 147 00:09:34,480 --> 00:09:38,720 Speaker 1: play the guitar or juggle or paint or play a 148 00:09:38,840 --> 00:09:42,960 Speaker 1: video game or any of an endless number of other tasks. 149 00:09:43,000 --> 00:09:45,719 Speaker 1: But the calculator can't do that. It can just calculate. 150 00:09:45,760 --> 00:09:48,160 Speaker 1: That's all it can do, and it can do it 151 00:09:48,320 --> 00:09:52,160 Speaker 1: really well, but it's unable to extend this capability to 152 00:09:52,400 --> 00:09:56,480 Speaker 1: anything beyond that purpose. Now, sometimes when we encounter a 153 00:09:56,679 --> 00:10:00,880 Speaker 1: really good weak AI, we can fool ourselves into thinking 154 00:10:00,880 --> 00:10:04,200 Speaker 1: that the AI is doing something really magical, or that 155 00:10:04,280 --> 00:10:08,000 Speaker 1: it's matching our own capabilities to think. It can actually 156 00:10:08,000 --> 00:10:11,000 Speaker 1: be pretty easy to fall into this trap. A sufficiently 157 00:10:11,040 --> 00:10:14,320 Speaker 1: sophisticated chatbot might fool us into thinking that the machine 158 00:10:14,360 --> 00:10:18,280 Speaker 1: we're chatting with is actually thinking itself. But it's not, 159 00:10:18,559 --> 00:10:21,920 Speaker 1: at least not in the same way that people do. Now, 160 00:10:21,920 --> 00:10:24,280 Speaker 1: why did I go through all of that trouble to 161 00:10:24,360 --> 00:10:28,120 Speaker 1: define all these things? Well, the founding principle of OpenAI 162 00:10:28,160 --> 00:10:32,880 Speaker 1: is to create artificial general intelligence and AI applications 163 00:10:32,880 --> 00:10:38,960 Speaker 1: and technologies through a responsible, thoughtful approach, and that implies 164 00:10:39,400 --> 00:10:42,600 Speaker 1: that there's an irresponsible way to do this, and that 165 00:10:42,720 --> 00:10:45,720 Speaker 1: following such an irresponsible way could lead to disaster. And 166 00:10:45,760 --> 00:10:48,640 Speaker 1: that's where we get to our science fiction stories, and 167 00:10:48,679 --> 00:10:51,440 Speaker 1: that certainly tracks. You know, I'm not here to tell 168 00:10:51,440 --> 00:10:56,079 Speaker 1: you that that's an unreasonable fear.
That fear is totally reasonable. 169 00:10:56,640 --> 00:11:00,760 Speaker 1: In fact, we've been seeing how weak AI can and 170 00:11:00,840 --> 00:11:04,640 Speaker 1: does cause problems, or maybe I should say, how 171 00:11:04,640 --> 00:11:09,280 Speaker 1: our reliance upon weak AI can cause problems. The AI 172 00:11:09,440 --> 00:11:12,000 Speaker 1: on its own may not be able to cause a 173 00:11:12,040 --> 00:11:15,760 Speaker 1: problem by itself, but because we rely on it, then 174 00:11:15,840 --> 00:11:17,880 Speaker 1: we go and we create these problems. So let's go 175 00:11:17,920 --> 00:11:21,400 Speaker 1: with facial recognition for this one. It has been shown 176 00:11:21,960 --> 00:11:25,439 Speaker 1: time and again that many of the facial recognition technologies 177 00:11:25,480 --> 00:11:29,120 Speaker 1: that are actively deployed in the world today have bias 178 00:11:29,160 --> 00:11:32,840 Speaker 1: built into them. They are fairly reliable at identifying people 179 00:11:32,840 --> 00:11:38,080 Speaker 1: within certain populations, like white people primarily, but then with 180 00:11:38,120 --> 00:11:41,760 Speaker 1: people of color, these systems aren't nearly as accurate. So 181 00:11:41,920 --> 00:11:45,720 Speaker 1: what happens is that these facial recognition systems can generate 182 00:11:45,760 --> 00:11:50,360 Speaker 1: false positives more frequently for, say, black people. And because 183 00:11:50,400 --> 00:11:53,440 Speaker 1: we have law enforcement agencies that are making active use 184 00:11:53,559 --> 00:11:57,600 Speaker 1: of facial recognition technologies when looking for suspects, this means 185 00:11:57,640 --> 00:12:01,839 Speaker 1: that police can and do end up harassing innocent people, 186 00:12:02,000 --> 00:12:06,640 Speaker 1: all based off of this misidentification. So imagine one day 187 00:12:06,880 --> 00:12:09,840 Speaker 1: you're just going about your business and then suddenly law 188 00:12:09,920 --> 00:12:13,240 Speaker 1: enforcement swoops in and arrests you for a crime not 189 00:12:13,320 --> 00:12:16,959 Speaker 1: only that you didn't commit, but that you also have no knowledge 190 00:12:17,400 --> 00:12:20,480 Speaker 1: of. And it's all because a machine somewhere 191 00:12:20,520 --> 00:12:23,920 Speaker 1: said this is the person you want. Now, imagine how 192 00:12:23,960 --> 00:12:27,280 Speaker 1: your life would be affected. What if it happened while 193 00:12:27,400 --> 00:12:30,040 Speaker 1: you were at work or at school? How do you 194 00:12:30,080 --> 00:12:32,640 Speaker 1: think the people around you would react when police come 195 00:12:32,640 --> 00:12:35,440 Speaker 1: in and arrest you? How many of those people would 196 00:12:35,440 --> 00:12:38,880 Speaker 1: treat you differently even after hearing that the whole thing 197 00:12:38,960 --> 00:12:42,560 Speaker 1: was just a mistake? What kind of stress would that 198 00:12:42,600 --> 00:12:46,120 Speaker 1: put on you and the people in your life? Now, 199 00:12:46,120 --> 00:12:48,440 Speaker 1: the reason I'm really nailing this home is because this 200 00:12:48,520 --> 00:12:52,360 Speaker 1: stuff is happening, right? This problem is a real problem. 201 00:12:52,440 --> 00:12:55,600 Speaker 1: This is not a theoretical, it's not a hypothetical.
Real 202 00:12:55,840 --> 00:13:00,160 Speaker 1: people have had their lives upended because police have 203 00:13:00,280 --> 00:13:06,320 Speaker 1: relied upon faulty facial recognition technology, and saying, oops, it 204 00:13:06,400 --> 00:13:09,000 Speaker 1: was our mistake, doesn't fix your life when it's been 205 00:13:09,000 --> 00:13:13,199 Speaker 1: turned upside down. Or as Matthew Grissinger of the Institute 206 00:13:13,240 --> 00:13:17,400 Speaker 1: for Safe Medication Practices has put it, quote, the tendency 207 00:13:17,480 --> 00:13:21,160 Speaker 1: to favor or give greater credence to information supplied by 208 00:13:21,160 --> 00:13:24,680 Speaker 1: technology, e.g., an ADC display, and to 209 00:13:24,760 --> 00:13:29,600 Speaker 1: ignore a manual source of information that provides contradictory information, 210 00:13:30,000 --> 00:13:33,400 Speaker 1: e.g., a handwritten entry on the computer generated MAR, 211 00:13:34,360 --> 00:13:39,360 Speaker 1: even if it is correct, illustrates the phenomenon of automation bias. 212 00:13:39,880 --> 00:13:44,240 Speaker 1: Automation complacency is a closely linked, overlapping concept that refers 213 00:13:44,280 --> 00:13:48,400 Speaker 1: to the monitoring of technology with less frequency or vigilance 214 00:13:48,679 --> 00:13:51,600 Speaker 1: because of a lower suspicion of error and a stronger 215 00:13:51,640 --> 00:13:54,920 Speaker 1: belief in its accuracy, end quote. So, in other words, 216 00:13:55,800 --> 00:13:58,959 Speaker 1: we have a tendency to trust the output of machines, 217 00:13:59,800 --> 00:14:03,200 Speaker 1: and that trust is not always warranted. This can get 218 00:14:03,280 --> 00:14:05,600 Speaker 1: us into trouble. We can trust that the machines know 219 00:14:05,640 --> 00:14:09,040 Speaker 1: what they're doing and that the way they process information 220 00:14:09,559 --> 00:14:15,400 Speaker 1: is reliable and even infallible, and by acting upon that 221 00:14:15,480 --> 00:14:20,480 Speaker 1: we can create terrible consequences. Mr. Grissinger's context was 222 00:14:20,560 --> 00:14:24,760 Speaker 1: within the field of medication prescriptions, which, obviously, if you 223 00:14:24,800 --> 00:14:27,920 Speaker 1: were to rely solely upon automated output and that automated 224 00:14:27,960 --> 00:14:32,160 Speaker 1: output was wrong, could result in terrible consequences. But I'm 225 00:14:32,200 --> 00:14:34,840 Speaker 1: sure you can imagine countless other scenarios in which an 226 00:14:34,880 --> 00:14:39,240 Speaker 1: over reliance on technology could lead to disaster. We'll talk 227 00:14:39,240 --> 00:14:43,160 Speaker 1: about another one when we come back from this quick break. 228 00:14:52,720 --> 00:14:55,200 Speaker 1: We're back, and before the break, I was talking about 229 00:14:55,200 --> 00:14:59,000 Speaker 1: how we have a tendency to put too much trust 230 00:14:59,560 --> 00:15:03,400 Speaker 1: in technology in general and AI in particular, and how 231 00:15:03,440 --> 00:15:06,680 Speaker 1: this can come back to haunt us. So an example 232 00:15:06,680 --> 00:15:10,280 Speaker 1: that leaps to my mind is autonomous cars. And I'm 233 00:15:10,320 --> 00:15:12,240 Speaker 1: going to be the first to admit I jumped on 234 00:15:12,320 --> 00:15:16,480 Speaker 1: the autonomous car bandwagon without applying nearly enough critical thinking.
235 00:15:17,440 --> 00:15:21,560 Speaker 1: I was really considering just the surface level of what 236 00:15:21,640 --> 00:15:24,440 Speaker 1: it would mean to have autonomous cars. So here's how 237 00:15:24,480 --> 00:15:27,640 Speaker 1: my flawed logic went. This is why I was so, 238 00:15:27,760 --> 00:15:32,400 Speaker 1: like, gung ho on autonomous cars several years ago now, 239 00:15:32,840 --> 00:15:36,800 Speaker 1: and have subsequently changed my thinking. So the way 240 00:15:36,880 --> 00:15:40,520 Speaker 1: I originally thought was, computer processors are wicked fast, right? 241 00:15:40,640 --> 00:15:46,240 Speaker 1: Like, a CPU in your computer can complete calculations so quickly, 242 00:15:46,360 --> 00:15:50,440 Speaker 1: millions of them every second, billions in fact, depending upon 243 00:15:50,520 --> 00:15:55,080 Speaker 1: the sophistication of the operations. And then 244 00:15:55,200 --> 00:15:58,560 Speaker 1: you have parallel processing, right? Like, if you have a 245 00:15:58,640 --> 00:16:03,480 Speaker 1: multi core processor, you could have lots of functions all being 246 00:16:03,520 --> 00:16:07,920 Speaker 1: performed simultaneously by this processor. Then, on top of that, 247 00:16:08,000 --> 00:16:10,600 Speaker 1: you could have sensors on your car that cover three 248 00:16:11,040 --> 00:16:14,520 Speaker 1: sixty degrees of view around the vehicle, so you would 249 00:16:14,560 --> 00:16:18,160 Speaker 1: be able to have the system pay attention in every 250 00:16:18,200 --> 00:16:22,520 Speaker 1: single direction simultaneously, whereas a human driver can only pay 251 00:16:22,520 --> 00:16:25,520 Speaker 1: attention within their field of view and then, with the 252 00:16:25,520 --> 00:16:28,440 Speaker 1: help of some mirrors, get a little extra, you know, 253 00:16:28,640 --> 00:16:33,240 Speaker 1: awareness around them. You could have mechanical systems that could 254 00:16:33,280 --> 00:16:36,920 Speaker 1: react immediately upon receiving a command from the processors with 255 00:16:36,960 --> 00:16:40,000 Speaker 1: no delay, so you don't have that delay of action 256 00:16:40,080 --> 00:16:43,160 Speaker 1: between when you sense something happening and when you are 257 00:16:43,200 --> 00:16:47,680 Speaker 1: able to act on that. So, surely such a system 258 00:16:47,720 --> 00:16:52,160 Speaker 1: with incredible processing power, with three sixty degrees of awareness, 259 00:16:52,200 --> 00:16:56,120 Speaker 1: with this immediate ability to react, would be able to 260 00:16:56,160 --> 00:16:59,960 Speaker 1: engage in defensive driving faster, more effectively, and more safely than 261 00:17:00,200 --> 00:17:04,879 Speaker 1: a human ever could. Clearly, machines are superior. We 262 00:17:04,920 --> 00:17:07,600 Speaker 1: should all be in autonomous cars. This is where I 263 00:17:07,640 --> 00:17:10,680 Speaker 1: ran into the problem of overreliance on technology. Sure, in 264 00:17:10,880 --> 00:17:14,640 Speaker 1: isolated cases, everything I was thinking might be at least 265 00:17:14,680 --> 00:17:17,399 Speaker 1: partly true, but when you take it together and you 266 00:17:17,400 --> 00:17:20,280 Speaker 1: start to apply it in the field in a vehicle, 267 00:17:20,760 --> 00:17:23,800 Speaker 1: things are far more complicated than I ever gave it 268 00:17:23,840 --> 00:17:27,160 Speaker 1: credit for.
And as we have seen with advanced driver 269 00:17:27,240 --> 00:17:31,720 Speaker 1: assist features, if we rely too much on this technology, 270 00:17:31,760 --> 00:17:36,240 Speaker 1: it can and does lead to tragedy. So we've seen 271 00:17:36,320 --> 00:17:40,640 Speaker 1: this play out where people have depended too heavily upon 272 00:17:40,760 --> 00:17:43,280 Speaker 1: this tech and have paid for it with their lives. 273 00:17:43,760 --> 00:17:47,159 Speaker 1: So we know that this is more complex than what 274 00:17:47,400 --> 00:17:50,280 Speaker 1: I initially thought of back in my naive days of 275 00:17:50,359 --> 00:17:55,080 Speaker 1: being so, you know, flag bearing for the whole autonomous 276 00:17:55,320 --> 00:17:58,959 Speaker 1: car movement. And I still believe in autonomous cars 277 00:17:59,359 --> 00:18:04,080 Speaker 1: and how they could contribute to greater safety, but I 278 00:18:04,160 --> 00:18:08,200 Speaker 1: also recognize that it's a far more complex problem than 279 00:18:08,280 --> 00:18:13,960 Speaker 1: what I originally imagined. All right, so we have thoroughly 280 00:18:14,520 --> 00:18:18,280 Speaker 1: defined the problem at this point, right? Artificial intelligence has 281 00:18:18,359 --> 00:18:22,240 Speaker 1: the potential to help us do amazing things, but only 282 00:18:22,520 --> 00:18:26,800 Speaker 1: if we develop and deploy it properly. Otherwise it could 283 00:18:26,840 --> 00:18:31,720 Speaker 1: exacerbate existing problems or even create all new problems. So 284 00:18:31,800 --> 00:18:36,000 Speaker 1: there's a need to be thoughtful about design and application 285 00:18:36,160 --> 00:18:42,000 Speaker 1: and deployment and distribution. So who decided to codify this 286 00:18:42,119 --> 00:18:47,560 Speaker 1: philosophy of being careful about AI and create an organization 287 00:18:47,680 --> 00:18:51,160 Speaker 1: dedicated to doing that? Well, the two people who are 288 00:18:51,160 --> 00:18:55,520 Speaker 1: frequently cited as the co founders for OpenAI are 289 00:18:56,160 --> 00:19:00,080 Speaker 1: Elon Musk and Sam Altman, though I would hasten to 290 00:19:00,200 --> 00:19:03,040 Speaker 1: add there were many other people who were really co 291 00:19:03,200 --> 00:19:07,080 Speaker 1: founders as well, but these are the two that, you know, 292 00:19:07,280 --> 00:19:10,639 Speaker 1: everyone says, these are the guys who started talking and 293 00:19:10,680 --> 00:19:13,480 Speaker 1: kind of generated the initial idea that became OpenAI. 294 00:19:14,440 --> 00:19:18,679 Speaker 1: So let's start with Musk. So years before he decided 295 00:19:18,720 --> 00:19:21,880 Speaker 1: to drop billions of dollars in an effort to troll 296 00:19:21,960 --> 00:19:26,040 Speaker 1: the Internet whenever he wanted to, Mr. Musk was something 297 00:19:26,080 --> 00:19:29,639 Speaker 1: of an AI doomsayer. You know, he was warning that 298 00:19:29,760 --> 00:19:34,840 Speaker 1: artificial intelligence could potentially pose an existential threat to humans. 299 00:19:35,320 --> 00:19:37,960 Speaker 1: Kind of this idea of we create a human level 300 00:19:38,080 --> 00:19:43,240 Speaker 1: or even superhuman level strong AI, and then it turns 301 00:19:43,280 --> 00:19:46,919 Speaker 1: on us and wipes us out. And certainly bad AI 302 00:19:47,200 --> 00:19:49,720 Speaker 1: can be a huge issue. We just talked about how 303 00:19:49,760 --> 00:19:53,960 Speaker 1: even weak AI can be a really big problem.
Now, 304 00:19:54,000 --> 00:19:57,040 Speaker 1: I don't think we're close to having a human level, let 305 00:19:57,040 --> 00:20:02,640 Speaker 1: alone superhuman level, intelligence determined to wipe out humanity emerge, but 306 00:20:02,720 --> 00:20:04,960 Speaker 1: you know, you can definitely have bad AI contribute to 307 00:20:05,040 --> 00:20:09,760 Speaker 1: human suffering. See also Tesla, one of Mr. Musk's companies. 308 00:20:09,840 --> 00:20:12,359 Speaker 1: One might even argue that Elon Musk knows the danger 309 00:20:12,640 --> 00:20:16,000 Speaker 1: that artificial intelligence poses to humanity because one of his 310 00:20:16,080 --> 00:20:19,080 Speaker 1: companies is leading the charge in that field in the 311 00:20:19,119 --> 00:20:24,000 Speaker 1: form of Tesla Autopilot and Full Self-Driving modes. Now again, 312 00:20:24,240 --> 00:20:27,119 Speaker 1: you could say that I'm being unkind, because we do 313 00:20:27,200 --> 00:20:30,919 Speaker 1: need to remember that Tesla, despite the language it uses 314 00:20:30,960 --> 00:20:35,159 Speaker 1: for marketing purposes, does alert drivers that they are not 315 00:20:35,200 --> 00:20:37,720 Speaker 1: supposed to take their hands off the wheel or stop 316 00:20:37,720 --> 00:20:41,000 Speaker 1: paying attention to the road, and that, at least in 317 00:20:41,200 --> 00:20:44,960 Speaker 1: all the accounts I have read about terrible accidents involving 318 00:20:45,000 --> 00:20:48,560 Speaker 1: Tesla vehicles that were in driver assist mode, it sounds 319 00:20:48,600 --> 00:20:52,600 Speaker 1: like the driver wasn't following those directions. So you could 320 00:20:52,640 --> 00:20:56,119 Speaker 1: argue that, you know, the driver ultimately is at fault 321 00:20:56,119 --> 00:21:00,399 Speaker 1: because they're failing to adhere to the instructions that Tesla gives. 322 00:21:00,960 --> 00:21:03,600 Speaker 1: The flip side of that is that Tesla markets these 323 00:21:03,640 --> 00:21:08,600 Speaker 1: features as if they are more than, you know, sophisticated 324 00:21:08,640 --> 00:21:13,440 Speaker 1: driver assist features. The other co founder of OpenAI 325 00:21:13,520 --> 00:21:17,639 Speaker 1: that's frequently mentioned is Sam Altman, the current CEO of 326 00:21:17,720 --> 00:21:22,920 Speaker 1: OpenAI. Sam Altman was previously president of Y Combinator. 327 00:21:23,160 --> 00:21:26,000 Speaker 1: He became president of Y Combinator in twenty fourteen, which was 328 00:21:26,040 --> 00:21:29,119 Speaker 1: the year before he co founded OpenAI with Elon Musk. 329 00:21:29,880 --> 00:21:32,360 Speaker 1: And you might say, well, what is Y Combinator? It's 330 00:21:32,400 --> 00:21:36,800 Speaker 1: a startup accelerator, which doesn't really mean anything either, right? Well, 331 00:21:37,240 --> 00:21:40,560 Speaker 1: that's a company that helps people who have startup business 332 00:21:40,600 --> 00:21:44,719 Speaker 1: ideas get the support they need in order to launch 333 00:21:44,800 --> 00:21:47,440 Speaker 1: their idea and make it a reality.
So that can 334 00:21:47,440 --> 00:21:50,720 Speaker 1: include stuff like mentoring the startup leaders so that they 335 00:21:50,760 --> 00:21:55,240 Speaker 1: can build a good business model and create the right 336 00:21:55,680 --> 00:21:58,119 Speaker 1: corporate structure that they're going to need in order to 337 00:21:58,119 --> 00:22:02,160 Speaker 1: do business, all the way up to prepping them and 338 00:22:02,240 --> 00:22:05,679 Speaker 1: connecting them with people that they can pitch their idea 339 00:22:05,760 --> 00:22:09,680 Speaker 1: to in order to get investment into their startup. So 340 00:22:09,760 --> 00:22:13,240 Speaker 1: one of the big valuable services that companies like Y 341 00:22:13,320 --> 00:22:18,200 Speaker 1: Combinator provide is access to the investor community that you 342 00:22:18,280 --> 00:22:22,199 Speaker 1: might not otherwise be able to get to without that 343 00:22:22,280 --> 00:22:25,520 Speaker 1: kind of support. Now, Altman would continue to serve as 344 00:22:25,680 --> 00:22:28,840 Speaker 1: Y Combinator president until twenty nineteen. At that point he 345 00:22:28,920 --> 00:22:32,640 Speaker 1: stepped down from that position to focus on OpenAI. 346 00:22:33,200 --> 00:22:36,280 Speaker 1: Elon Musk would sit on the board of directors 347 00:22:36,280 --> 00:22:39,359 Speaker 1: for OpenAI until twenty eighteen. We'll talk about that in 348 00:22:39,440 --> 00:22:42,240 Speaker 1: just a bit. Now, I mentioned that there were also 349 00:22:42,320 --> 00:22:45,920 Speaker 1: other co founders. So in addition to these two entrepreneurs, 350 00:22:46,320 --> 00:22:50,520 Speaker 1: early founders in the OpenAI initiative included Greg Brockman, 351 00:22:50,680 --> 00:22:54,159 Speaker 1: who's still there, I believe. He's a former chief technology 352 00:22:54,200 --> 00:22:58,560 Speaker 1: officer of Stripe, the payment processing company. The PayPal co 353 00:22:58,680 --> 00:23:02,040 Speaker 1: founder Peter Thiel was also one of the early investors 354 00:23:02,119 --> 00:23:06,639 Speaker 1: in OpenAI. LinkedIn co founder Reid Garrett Hoffman was another 355 00:23:06,680 --> 00:23:11,679 Speaker 1: one. One of Altman's Y Combinator colleagues, Jessica Livingston, was another, 356 00:23:11,840 --> 00:23:15,920 Speaker 1: and there were a few more. Now collectively, the founders 357 00:23:15,920 --> 00:23:21,000 Speaker 1: and partners all pledged one billion dollars to fund OpenAI, 358 00:23:21,080 --> 00:23:24,360 Speaker 1: which again was meant to be a nonprofit organization dedicated 359 00:23:24,400 --> 00:23:28,640 Speaker 1: to developing productive, friendly AI and not the scary pew 360 00:23:28,720 --> 00:23:32,600 Speaker 1: pew lasers kind of AI. But then there's also the 361 00:23:32,760 --> 00:23:38,080 Speaker 1: open part of OpenAI. So during the brainstorming that 362 00:23:38,119 --> 00:23:40,720 Speaker 1: would lead to the founding of this organization, the co 363 00:23:40,880 --> 00:23:44,719 Speaker 1: founders talked about how big tech companies typically do all 364 00:23:44,760 --> 00:23:49,840 Speaker 1: their AI development behind closed doors with no transparency, and 365 00:23:49,960 --> 00:23:53,879 Speaker 1: that their version of AI was meant to benefit the 366 00:23:54,320 --> 00:23:58,439 Speaker 1: parent company, not humanity as a whole. The OpenAI 367 00:23:58,640 --> 00:24:02,359 Speaker 1: organization was going to take a different approach.
The idea 368 00:24:02,480 --> 00:24:05,879 Speaker 1: was to share the benefits of AI research with the 369 00:24:05,920 --> 00:24:09,639 Speaker 1: world, and do that as much as possible in an 370 00:24:09,720 --> 00:24:12,399 Speaker 1: effort to evolve AI in a way that helps but 371 00:24:12,520 --> 00:24:16,359 Speaker 1: doesn't harm. Researchers would be encouraged to publish their work 372 00:24:16,440 --> 00:24:20,680 Speaker 1: in various formats as frequently as they could, and any 373 00:24:20,760 --> 00:24:24,800 Speaker 1: patents that OpenAI would secure would similarly be shared 374 00:24:24,880 --> 00:24:27,760 Speaker 1: with the world. The message appeared to be: the goal 375 00:24:28,000 --> 00:24:32,800 Speaker 1: is more important than the organization, that friendly AI is 376 00:24:32,880 --> 00:24:37,520 Speaker 1: the chief important goal here, and that OpenAI only 377 00:24:37,560 --> 00:24:41,920 Speaker 1: exists to see that become reality, and that OpenAI 378 00:24:42,040 --> 00:24:45,199 Speaker 1: was really kind of more of a shepherd, 379 00:24:45,320 --> 00:24:48,639 Speaker 1: pushing AI in this direction rather than brazenly forging a 380 00:24:48,720 --> 00:24:51,880 Speaker 1: path into the wilderness, although that's not how things would 381 00:24:51,880 --> 00:24:55,480 Speaker 1: turn out. Now, early on, the organization grew mostly through 382 00:24:55,600 --> 00:24:59,919 Speaker 1: connections in the AI research community, with luminaries and experts 383 00:25:00,040 --> 00:25:04,159 Speaker 1: joining the organization, but the organization itself kind of 384 00:25:04,240 --> 00:25:08,320 Speaker 1: lacked a real sense of leadership or direction. There was 385 00:25:08,359 --> 00:25:11,640 Speaker 1: this noble goal, right? Everyone knew that they were trying 386 00:25:11,680 --> 00:25:17,400 Speaker 1: to make reliable, safe, friendly, beneficial AI, but as for how, there 387 00:25:17,440 --> 00:25:20,560 Speaker 1: wasn't really any plan for how to get to where 388 00:25:20,560 --> 00:25:26,160 Speaker 1: they wanted to be. Google researcher Dario Amodei visited OpenAI 389 00:25:26,280 --> 00:25:29,880 Speaker 1: in mid twenty sixteen, and he came away thinking that no one 390 00:25:29,920 --> 00:25:32,280 Speaker 1: at the organization really had any idea of what they 391 00:25:32,280 --> 00:25:36,200 Speaker 1: were doing. Despite that, or maybe because of it, 392 00:25:36,200 --> 00:25:39,280 Speaker 1: Amodei would join the organization a couple of months 393 00:25:39,400 --> 00:25:43,000 Speaker 1: later and became head of research there. Now, one of 394 00:25:43,040 --> 00:25:45,840 Speaker 1: the first things to emerge from OpenAI came in 395 00:25:45,920 --> 00:25:50,080 Speaker 1: twenty sixteen. Like, it was founded in late twenty fifteen, and in twenty 396 00:25:50,200 --> 00:25:54,720 Speaker 1: sixteen they were already producing some interesting stuff. And 397 00:25:54,800 --> 00:25:58,160 Speaker 1: first up was a testing environment that the organization called 398 00:25:58,480 --> 00:26:03,080 Speaker 1: Gym. Gym as in gymnasium, not as in Jim, like Jim 399 00:26:03,200 --> 00:26:08,359 Speaker 1: Hawkins. So what was being tested? Well, they 400 00:26:08,400 --> 00:26:12,879 Speaker 1: were testing learning agents. This brings us to a discipline 401 00:26:12,920 --> 00:26:18,080 Speaker 1: that's within artificial intelligence. It's called machine learning, and basically 402 00:26:18,119 --> 00:26:21,600 Speaker 1: machine learning is what it says on the tin.
It's 403 00:26:21,640 --> 00:26:24,840 Speaker 1: finding ways to make machines learn so that they discover 404 00:26:25,000 --> 00:26:28,640 Speaker 1: how to do certain tasks and how to improve at 405 00:26:28,680 --> 00:26:33,080 Speaker 1: doing them over time. And there is no single way 406 00:26:33,119 --> 00:26:35,080 Speaker 1: that this is done. It's not like there's one and 407 00:26:35,200 --> 00:26:38,119 Speaker 1: only one way for machine learning to happen. There are 408 00:26:38,119 --> 00:26:42,640 Speaker 1: actually lots of different models. For example, there's the generative 409 00:26:42,920 --> 00:26:47,840 Speaker 1: adversarial model of machine learning. Basically, this is a model 410 00:26:47,880 --> 00:26:52,040 Speaker 1: that involves having two machines set against each other. One 411 00:26:52,080 --> 00:26:55,080 Speaker 1: machine is set up to try and accomplish a specific task. 412 00:26:55,280 --> 00:26:57,879 Speaker 1: This is the generative part. And the other machine is 413 00:26:57,920 --> 00:27:01,840 Speaker 1: set up to foil that task; this is the adversarial part. So, 414 00:27:01,920 --> 00:27:06,840 Speaker 1: for example, maybe you're training the generative model to create 415 00:27:06,880 --> 00:27:11,679 Speaker 1: a digital painting mimicking the style of famous impressionists, and 416 00:27:11,720 --> 00:27:15,480 Speaker 1: the adversarial system's job is to figure out which images 417 00:27:15,520 --> 00:27:19,080 Speaker 1: that are fed to it are real impressionist paintings from 418 00:27:19,160 --> 00:27:23,360 Speaker 1: history and which ones were generated by the computer system. 419 00:27:23,400 --> 00:27:26,400 Speaker 1: And you run these trials over and over, with each 420 00:27:26,440 --> 00:27:29,560 Speaker 1: system getting better over time. The generative one gets better 421 00:27:29,600 --> 00:27:33,440 Speaker 1: at making impressionist style paintings and the adversarial one gets 422 00:27:33,480 --> 00:27:37,800 Speaker 1: better at finding little hints that indicate this was not 423 00:27:38,320 --> 00:27:41,879 Speaker 1: an actual painting but was computer generated. The OpenAI 424 00:27:42,040 --> 00:27:46,840 Speaker 1: Gym specializes in learning agents that rely on reinforcement learning, 425 00:27:47,480 --> 00:27:49,680 Speaker 1: and when you break it down, it sounds a lot 426 00:27:49,720 --> 00:27:52,920 Speaker 1: like your typical kind of school work. That is, when 427 00:27:52,960 --> 00:27:56,760 Speaker 1: the learning agent performs well, it is rewarded; when it 428 00:27:56,800 --> 00:28:00,240 Speaker 1: performs poorly, it is punished. So it's kind of like 429 00:28:00,280 --> 00:28:03,200 Speaker 1: getting your test paper back and finding out you aced 430 00:28:03,200 --> 00:28:06,600 Speaker 1: the exam, or, if things didn't go well, that you 431 00:28:06,680 --> 00:28:08,960 Speaker 1: totally whiffed it and you'll be going to summer school 432 00:28:09,000 --> 00:28:13,440 Speaker 1: to make up for that. Also in twenty sixteen, OpenAI introduced 433 00:28:13,480 --> 00:28:19,240 Speaker 1: a platform humbly called Universe. This platform helps track progress 434 00:28:19,280 --> 00:28:22,879 Speaker 1: and train learning agents to problem solve, starting with the 435 00:28:22,880 --> 00:28:27,359 Speaker 1: most serious of all problems, finding the fun in Atari 436 00:28:27,520 --> 00:28:33,680 Speaker 1: video games.
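To make that reward and punishment loop a little more concrete, here is a minimal, purely illustrative sketch of how an agent interacts with an OpenAI Gym environment. It assumes the classic Gym interface and the CartPole environment, and the "agent" here just picks random actions rather than actually learning anything; newer releases of the library changed the reset and step signatures, so treat this as a sketch rather than a reference.

import gym

env = gym.make("CartPole-v1")
observation = env.reset()            # starting state of the environment
total_reward = 0.0

for _ in range(200):
    action = env.action_space.sample()                   # stand-in for a learned policy
    observation, reward, done, info = env.step(action)   # the environment returns the reward signal
    total_reward += reward                                # a real learning agent would try to maximize this
    if done:                                              # episode over (pole fell, time limit hit, etc.)
        break

env.close()
print("Episode finished with total reward", total_reward)

A reinforcement learning agent would replace the random action with a policy that it keeps adjusting so that, over many episodes, the total reward goes up.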
I'm talking about classic Atari video games like Pitfall, which, 437 00:28:34,080 --> 00:28:36,560 Speaker 1: let's be honest, awesome game. You don't have to find 438 00:28:36,560 --> 00:28:39,360 Speaker 1: the fun there, it's right there. But let's say E.T. 439 00:28:39,640 --> 00:28:43,760 Speaker 1: the Extra-Terrestrial or their version of Pac-Man. Yeah, you 440 00:28:43,800 --> 00:28:46,920 Speaker 1: have to really find the fun in those. And I'm 441 00:28:46,960 --> 00:28:50,040 Speaker 1: being a little facetious here, but Universe really does train 442 00:28:50,160 --> 00:28:53,160 Speaker 1: learning agents by having them learn how to play video games. 443 00:28:53,160 --> 00:28:55,800 Speaker 1: They started with the Atari games and then they began 444 00:28:55,840 --> 00:29:01,600 Speaker 1: to build from there. And Universe trains these agents to 445 00:29:01,640 --> 00:29:04,240 Speaker 1: play the games, and the idea is that by learning how 446 00:29:04,280 --> 00:29:08,880 Speaker 1: to play games, as the agents encounter new games, they 447 00:29:08,920 --> 00:29:14,000 Speaker 1: can apply the previous learnings from the experiences of playing 448 00:29:14,040 --> 00:29:17,600 Speaker 1: everything before to the new game. Just like we humans 449 00:29:17,880 --> 00:29:21,680 Speaker 1: will try and apply our knowledge and experience with certain tasks 450 00:29:22,080 --> 00:29:25,600 Speaker 1: when we face a totally new situation. You come into 451 00:29:25,640 --> 00:29:28,000 Speaker 1: something you've never done before, and you might think, well, 452 00:29:28,560 --> 00:29:31,280 Speaker 1: when I do this other thing, I do it this way, 453 00:29:31,400 --> 00:29:34,280 Speaker 1: so let me try that here first. Maybe that skill 454 00:29:34,320 --> 00:29:38,640 Speaker 1: translates to this new situation. And maybe it works, maybe 455 00:29:38,680 --> 00:29:41,520 Speaker 1: it doesn't, but either way, that informs you and then 456 00:29:41,560 --> 00:29:44,240 Speaker 1: you can start branching out from there to learn how 457 00:29:44,320 --> 00:29:50,040 Speaker 1: to master this new task. That's the idea with Universe. 458 00:29:50,720 --> 00:29:53,640 Speaker 1: Gym and Universe both gave a glimpse at the big 459 00:29:53,680 --> 00:29:56,600 Speaker 1: plans OpenAI had in store. But there was a 460 00:29:56,640 --> 00:30:00,520 Speaker 1: looming problem on the horizon. And it wasn't a malevolent 461 00:30:00,640 --> 00:30:04,280 Speaker 1: AI that was hell bent on destroying humanity. It was 462 00:30:04,320 --> 00:30:08,320 Speaker 1: a far more mundane threat. OpenAI was in danger 463 00:30:08,400 --> 00:30:12,120 Speaker 1: of running out of money. I'll explain more, but before 464 00:30:12,160 --> 00:30:15,360 Speaker 1: I run out of money, let's take a quick break. 465 00:30:24,760 --> 00:30:29,600 Speaker 1: We're back. Okay, so we're up to twenty eighteen, and leaders in 466 00:30:29,680 --> 00:30:32,560 Speaker 1: OpenAI realized that they were facing their own existential 467 00:30:32,640 --> 00:30:36,200 Speaker 1: crisis in the form of funding. So in order to 468 00:30:36,280 --> 00:30:39,760 Speaker 1: remain relevant and competitive in the fast paced world of 469 00:30:39,800 --> 00:30:42,880 Speaker 1: AI development, and in order to achieve the goal of 470 00:30:42,960 --> 00:30:46,400 Speaker 1: creating an AGI before anyone else,
the company 471 00:30:46,480 --> 00:30:49,760 Speaker 1: was going to have to spend enormous amounts of money 472 00:30:49,840 --> 00:30:54,400 Speaker 1: on computer systems and other assets like training databases, or 473 00:30:54,440 --> 00:30:57,560 Speaker 1: else it was going to get left behind. It just 474 00:30:57,760 --> 00:31:01,200 Speaker 1: wasn't possible to do this while also being a strictly 475 00:31:01,280 --> 00:31:04,640 Speaker 1: not for profit company, so the leaders started to think 476 00:31:04,680 --> 00:31:10,160 Speaker 1: about how they might address this. Meanwhile, in twenty eighteen, Elon Musk 477 00:31:10,280 --> 00:31:13,800 Speaker 1: stepped down from the board of directors. Now officially, the 478 00:31:13,840 --> 00:31:16,960 Speaker 1: reason given was that Musk wanted to avoid a potential 479 00:31:17,000 --> 00:31:21,040 Speaker 1: conflict of interest, because Tesla was pursuing its own AI 480 00:31:21,040 --> 00:31:24,920 Speaker 1: research and Tesla was bound to compete for the same 481 00:31:24,960 --> 00:31:28,400 Speaker 1: talent pool that OpenAI wanted to tap into, 482 00:31:28,760 --> 00:31:31,920 Speaker 1: so in order to avoid a conflict of interest, he 483 00:31:31,960 --> 00:31:35,880 Speaker 1: resigned from the board of directors. However, Musk also subsequently 484 00:31:35,880 --> 00:31:39,400 Speaker 1: tweeted out that he felt OpenAI was falling short, 485 00:31:39,800 --> 00:31:42,920 Speaker 1: mostly on the open part, and that he had disagreements 486 00:31:42,960 --> 00:31:47,000 Speaker 1: regarding the direction of the organization's efforts. It was also 487 00:31:47,160 --> 00:31:51,400 Speaker 1: in twenty eighteen when OpenAI released its charter, the company charter, 488 00:31:51,920 --> 00:31:56,160 Speaker 1: which started to hint at upcoming changes. The charter read, 489 00:31:56,200 --> 00:32:00,280 Speaker 1: in part, quote, we anticipate needing to marshal substantial 490 00:32:00,320 --> 00:32:04,600 Speaker 1: resources to fulfill our mission, but will always diligently act 491 00:32:04,640 --> 00:32:08,560 Speaker 1: to minimize conflicts of interest among our employees and stakeholders 492 00:32:08,600 --> 00:32:12,680 Speaker 1: that could compromise broad benefit, end quote. It was like 493 00:32:12,720 --> 00:32:15,440 Speaker 1: the leaders were starting to couch things in an effort 494 00:32:15,440 --> 00:32:18,200 Speaker 1: to explain what was going to be coming up next. 495 00:32:18,840 --> 00:32:22,320 Speaker 1: So the following year, twenty nineteen, saw OpenAI create 496 00:32:22,400 --> 00:32:26,680 Speaker 1: a new for profit company as a subsidiary. So the 497 00:32:26,720 --> 00:32:31,200 Speaker 1: parent company, OpenAI Incorporated, remains a not for 498 00:32:31,280 --> 00:32:36,000 Speaker 1: profit organization, but OpenAI LP is a for 499 00:32:36,320 --> 00:32:40,240 Speaker 1: profit company. OpenAI published a blog post that tried 500 00:32:40,320 --> 00:32:44,120 Speaker 1: to explain this decision, saying, quote, we want to increase 501 00:32:44,160 --> 00:32:47,680 Speaker 1: our ability to raise capital while still serving our mission, 502 00:32:48,000 --> 00:32:51,160 Speaker 1: and no pre existing legal structure we know of strikes 503 00:32:51,200 --> 00:32:54,680 Speaker 1: the right balance.
Our solution is to create OpenAI 504 00:32:55,040 --> 00:32:59,240 Speaker 1: LP as a hybrid of a for profit and nonprofit, 505 00:32:59,560 --> 00:33:03,959 Speaker 1: which we are calling a capped profit company, end quote. 506 00:33:04,600 --> 00:33:06,960 Speaker 1: So the idea here is that an investor can pour 507 00:33:07,080 --> 00:33:11,560 Speaker 1: money into OpenAI LP and can potentially earn up 508 00:33:11,600 --> 00:33:16,200 Speaker 1: to one hundred times that investment as the company releases 509 00:33:16,200 --> 00:33:20,320 Speaker 1: and generates revenue from products. But that's the limit. Once 510 00:33:20,360 --> 00:33:24,120 Speaker 1: an investor hits one hundred times their investment, that's it, they're done. 511 00:33:24,160 --> 00:33:26,120 Speaker 1: You ain't getting a hundred and one times return on 512 00:33:26,160 --> 00:33:29,520 Speaker 1: your investment, bucko. So all the additional money over that 513 00:33:29,600 --> 00:33:34,840 Speaker 1: one hundred times return would go toward nonprofit work. But um, 514 00:33:34,880 --> 00:33:39,240 Speaker 1: that's a lot, right? One hundred times return on 515 00:33:39,360 --> 00:33:43,320 Speaker 1: investment is huge, to the point where some people say, like, 516 00:33:43,960 --> 00:33:46,960 Speaker 1: when would you ever hit that? I mean, Google, I 517 00:33:47,000 --> 00:33:49,680 Speaker 1: think, is somewhere in the realm of twenty times return 518 00:33:49,720 --> 00:33:53,800 Speaker 1: on investment if you got in early on. So, um, 519 00:33:54,600 --> 00:33:58,760 Speaker 1: it's hard to imagine a hundred times return. So some 520 00:33:58,760 --> 00:34:02,040 Speaker 1: people say, well, this is just language to make it 521 00:34:02,080 --> 00:34:06,520 Speaker 1: seem like they're still dedicated to this nonprofit but aren't, really. 522 00:34:07,120 --> 00:34:11,400 Speaker 1: That's one of the criticisms I've read. Now, just 523 00:34:11,520 --> 00:34:14,799 Speaker 1: imagine that, you know, that initial investment into OpenAI 524 00:34:14,960 --> 00:34:17,560 Speaker 1: was a billion dollars, so presumably you'd have to see 525 00:34:17,560 --> 00:34:21,080 Speaker 1: more than a hundred billion dollars in profit, uh, in 526 00:34:21,200 --> 00:34:23,879 Speaker 1: order to return that to investors before they were all 527 00:34:23,880 --> 00:34:26,839 Speaker 1: paid out, and then the rest could go toward nonprofit work. 528 00:34:27,280 --> 00:34:30,120 Speaker 1: That's just that initial investment, because, believe me, OpenAI 529 00:34:30,200 --> 00:34:33,920 Speaker 1: has received subsequent funding. In fact, in twenty nineteen, Microsoft 530 00:34:33,960 --> 00:34:37,600 Speaker 1: poured an additional billion dollars into the company, although only 531 00:34:37,719 --> 00:34:39,759 Speaker 1: half of that was cash, so it was only like 532 00:34:39,760 --> 00:34:44,080 Speaker 1: five hundred million. The other five hundred million was in like cloud 533 00:34:44,160 --> 00:34:47,840 Speaker 1: computing credit, so that OpenAI could make use of 534 00:34:47,920 --> 00:34:51,880 Speaker 1: Microsoft's Azure platform without having to pay for it, because 535 00:34:52,239 --> 00:34:56,000 Speaker 1: they had five hundred million dollars in credit. Yowza.
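As a rough illustration of the capped profit arithmetic described above, here is a tiny sketch. The one hundred times cap and the one billion dollar figure are just the numbers mentioned in the episode; the real profit-sharing mechanics are surely more involved, so treat this as a back of the envelope toy, not OpenAI's actual accounting.

def split_returns(investment, profit_distributed, cap_multiple=100):
    # The investor can receive at most cap_multiple times what they put in;
    # anything distributed beyond that cap flows to the nonprofit parent.
    cap = investment * cap_multiple
    to_investor = min(profit_distributed, cap)
    to_nonprofit = profit_distributed - to_investor
    return to_investor, to_nonprofit

# Example: a $1 billion investment, with $150 billion eventually distributed
investor_share, nonprofit_share = split_returns(1e9, 150e9)
print(investor_share)   # 100000000000.0 -> capped at $100 billion
print(nonprofit_share)  # 50000000000.0  -> the overflow above the cap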
And 536 00:34:56,000 --> 00:34:58,759 Speaker 1: of course we've heard recently that Microsoft is considering a 537 00:34:58,880 --> 00:35:02,120 Speaker 1: ten billion dollar investment into OpenAI, and there 538 00:35:02,160 --> 00:35:04,760 Speaker 1: ain't a yowza big enough to express how princely 539 00:35:04,920 --> 00:35:10,000 Speaker 1: that sum is. In twenty nineteen, OpenAI did something strange, 540 00:35:10,000 --> 00:35:12,600 Speaker 1: at least strange if you remember that open is part 541 00:35:12,680 --> 00:35:16,400 Speaker 1: of the company's name. The PR department released information that 542 00:35:16,480 --> 00:35:19,480 Speaker 1: OpenAI had been sitting on a language model named 543 00:35:19,600 --> 00:35:25,320 Speaker 1: Generative Pre-Trained Transformer Two, or GPT-2. They had developed 544 00:35:25,320 --> 00:35:28,279 Speaker 1: this and not talked about it, and now they were 545 00:35:28,320 --> 00:35:31,120 Speaker 1: finally talking about it, and this language model was 546 00:35:31,200 --> 00:35:34,640 Speaker 1: capable of generating text in response to prompts, including stuff 547 00:35:34,680 --> 00:35:39,320 Speaker 1: like it could create fake news articles or alternative takes 548 00:35:39,360 --> 00:35:43,160 Speaker 1: on classic literature. Further, OpenAI said that it was 549 00:35:43,200 --> 00:35:47,239 Speaker 1: actually too dangerous to release the code, because people might 550 00:35:47,320 --> 00:35:51,560 Speaker 1: then use the code to create misinformation or worse, which 551 00:35:51,560 --> 00:35:54,200 Speaker 1: seemed to fly in the face of OpenAI's purpose. 552 00:35:54,560 --> 00:35:58,920 Speaker 1: The company had fostered a publish often and transparently culture, 553 00:35:59,520 --> 00:36:02,520 Speaker 1: and now it was keeping certain projects secret, and when finally 554 00:36:02,640 --> 00:36:06,880 Speaker 1: talking about them, denying access to the research. That seemed 555 00:36:07,600 --> 00:36:11,760 Speaker 1: counter to the founding principles of OpenAI. The folks 556 00:36:11,760 --> 00:36:14,000 Speaker 1: in OpenAI had sort of shifted their perspective a 557 00:36:14,000 --> 00:36:17,279 Speaker 1: little bit. In their eyes, some secrecy and restrictions were 558 00:36:17,320 --> 00:36:20,200 Speaker 1: needed to ensure safety and security, as well as to 559 00:36:20,239 --> 00:36:23,719 Speaker 1: maintain a competitive advantage over others in the field of 560 00:36:23,760 --> 00:36:28,759 Speaker 1: AI research. OpenAI would eventually release GPT-2 in 561 00:36:28,880 --> 00:36:32,799 Speaker 1: several stages before the full code finally came out in 562 00:36:32,840 --> 00:36:36,759 Speaker 1: November twenty nineteen. Critics accused OpenAI of relying on 563 00:36:36,840 --> 00:36:39,640 Speaker 1: publicity stunts to hype up what their research and work 564 00:36:39,680 --> 00:36:45,080 Speaker 1: had created, and thus pumping unrealistic expectations into the investor market. Like, 565 00:36:45,120 --> 00:36:48,360 Speaker 1: in other words, by saying, oh, this is really dangerous, 566 00:36:48,400 --> 00:36:50,160 Speaker 1: I don't know if I can let you have this, 567 00:36:50,840 --> 00:36:53,960 Speaker 1: it got people really excited about it, and so investors 568 00:36:53,960 --> 00:36:56,319 Speaker 1: were willing to pour more money into OpenAI.
That's 569 00:36:56,360 --> 00:36:58,839 Speaker 1: what the critics were saying, that you're just doing this 570 00:36:59,239 --> 00:37:03,239 Speaker 1: to get people worked up into a frenzy, and that 571 00:37:03,400 --> 00:37:09,800 Speaker 1: the staged release process for GPT-2 was OpenAI's 572 00:37:09,840 --> 00:37:14,040 Speaker 1: way to capitalize on all this hype gradually, so as 573 00:37:14,080 --> 00:37:16,960 Speaker 1: not to just deflate expectations by releasing it and then 574 00:37:17,000 --> 00:37:21,040 Speaker 1: having everyone say, oh, that's it? Later, in a paper released 575 00:37:21,040 --> 00:37:25,800 Speaker 1: in early twenty twenty, OpenAI revealed another secret: that the company 576 00:37:25,880 --> 00:37:29,880 Speaker 1: was essentially using the more-power approach of trying to 577 00:37:29,920 --> 00:37:33,520 Speaker 1: achieve artificial general intelligence, or AGI. So a 578 00:37:33,600 --> 00:37:37,000 Speaker 1: quick word on what they were doing. This was called Foresight, 579 00:37:37,280 --> 00:37:41,280 Speaker 1: by the way. So broadly speaking, there are two big 580 00:37:41,320 --> 00:37:44,160 Speaker 1: schools of thought on how the world will see a 581 00:37:44,360 --> 00:37:48,640 Speaker 1: true AGI emerge. That is, an artificial intelligence 582 00:37:48,640 --> 00:37:51,560 Speaker 1: that can perform very much like a human intelligence, you know, 583 00:37:51,600 --> 00:37:53,680 Speaker 1: perhaps not in the same way, but again achieving the 584 00:37:53,719 --> 00:37:58,040 Speaker 1: same outcomes. So one school of thought 585 00:37:58,200 --> 00:38:01,160 Speaker 1: is that we already have everything we 586 00:38:01,200 --> 00:38:03,920 Speaker 1: need in all the AI research that has been done 587 00:38:03,960 --> 00:38:06,800 Speaker 1: over the years. We have all the pieces; they're all there. 588 00:38:07,280 --> 00:38:09,399 Speaker 1: We just need to amp it up by providing more 589 00:38:09,480 --> 00:38:15,080 Speaker 1: computational resources behind it and larger training sets. So everything's 590 00:38:15,280 --> 00:38:17,400 Speaker 1: good to go. We just have to provide the power 591 00:38:17,440 --> 00:38:20,080 Speaker 1: to push it into the realm of AGI. Now, 592 00:38:20,120 --> 00:38:22,920 Speaker 1: the other school of thought is that we're still missing 593 00:38:23,000 --> 00:38:26,799 Speaker 1: something, or maybe several somethings, and that until we 594 00:38:26,840 --> 00:38:31,560 Speaker 1: figure those out and incorporate them into our AI strategy, 595 00:38:31,680 --> 00:38:34,000 Speaker 1: we just are not going to see an AGI. 596 00:38:34,080 --> 00:38:36,239 Speaker 1: It won't matter how much power you put behind it. 597 00:38:36,680 --> 00:38:39,719 Speaker 1: We're still missing elements that will actually allow us to 598 00:38:39,800 --> 00:38:43,640 Speaker 1: hit AGI status. Now, OpenAI subscribes to the 599 00:38:43,719 --> 00:38:47,799 Speaker 1: more-power philosophy, generally speaking, and the research paper kind 600 00:38:47,800 --> 00:38:50,759 Speaker 1: of explained this. And again, this was something that OpenAI 601 00:38:50,800 --> 00:38:54,440 Speaker 1: was holding in secret. They even compelled employees to 602 00:38:54,480 --> 00:38:57,600 Speaker 1: stay quiet about the work.
And what was essentially going 603 00:38:57,600 --> 00:39:00,640 Speaker 1: on was that OpenAI researchers were taking AI work 604 00:39:01,160 --> 00:39:05,839 Speaker 1: that was developed in other research labs and companies. These 605 00:39:05,880 --> 00:39:11,000 Speaker 1: were tools that other competitors were offering, and so they 606 00:39:11,160 --> 00:39:14,040 Speaker 1: essentially got hold of these tools, and then they jacked 607 00:39:14,120 --> 00:39:17,000 Speaker 1: up the power of the tools by training them on 608 00:39:17,160 --> 00:39:21,600 Speaker 1: larger data sets and providing more computational power to 609 00:39:21,680 --> 00:39:26,960 Speaker 1: see if, oh, maybe what we already have is the 610 00:39:26,960 --> 00:39:29,280 Speaker 1: way there and we just gotta give it the extra 611 00:39:29,320 --> 00:39:34,759 Speaker 1: oomph to get it to AGI. In twenty twenty, OpenAI announced the 612 00:39:34,760 --> 00:39:38,680 Speaker 1: next generation of its Generative Pre-trained Transformer. This would 613 00:39:38,719 --> 00:39:42,200 Speaker 1: be GPT-3, and it would be made available through an 614 00:39:42,239 --> 00:39:46,040 Speaker 1: Application Programming Interface, or API, which would be 615 00:39:46,080 --> 00:39:50,520 Speaker 1: the company's first commercial product. So customers, developers in this 616 00:39:50,680 --> 00:39:54,240 Speaker 1: case, could get access to the GPT-3 language model 617 00:39:54,520 --> 00:39:57,960 Speaker 1: through this API and then integrate it with their app. 618 00:39:58,440 --> 00:40:00,640 Speaker 1: So if it was an app that would help you do 619 00:40:00,719 --> 00:40:04,120 Speaker 1: things like, I don't know, book meetings, then the language 620 00:40:04,120 --> 00:40:08,480 Speaker 1: model would be part of what would power this app. 621 00:40:09,120 --> 00:40:11,840 Speaker 1: The following year, we got OpenAI's tool that would 622 00:40:11,880 --> 00:40:15,600 Speaker 1: generate digital images, which is DALL-E. That's D-A- 623 00:40:16,000 --> 00:40:19,319 Speaker 1: L-L-E, kind of a combination of WALL-E, the 624 00:40:19,360 --> 00:40:25,759 Speaker 1: Pixar character, and Salvador Dalí, the surrealist artist with the 625 00:40:25,800 --> 00:40:30,800 Speaker 1: incredible mustache. So you would feed DALL-E a text prompt 626 00:40:31,120 --> 00:40:34,040 Speaker 1: and it would try to create images based on that prompt. 627 00:40:34,400 --> 00:40:37,520 Speaker 1: Sometimes it was delightful and sometimes it was disturbing. Sometimes 628 00:40:37,520 --> 00:40:41,279 Speaker 1: it was a combination. But it was really impressive that 629 00:40:41,400 --> 00:40:44,120 Speaker 1: it was able to do this at all, and similar 630 00:40:44,160 --> 00:40:47,760 Speaker 1: to other generative image AI services like Midjourney, 631 00:40:47,760 --> 00:40:51,759 Speaker 1: which would actually debut a year later, in twenty twenty-two. And that same year, 632 00:40:51,880 --> 00:40:57,840 Speaker 1: OpenAI updated DALL-E and released DALL-E 2. The 633 00:40:57,920 --> 00:41:00,480 Speaker 1: new version of DALL-E is able to combine more 634 00:41:00,600 --> 00:41:06,320 Speaker 1: concepts together to create images and also to imitate specific styles.
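To make that developer-integration idea a bit more concrete, here is a minimal sketch of what calling a hosted, GPT-3-style completion API from your own app might look like. The endpoint shape, the legacy model name, and the meeting-invite use case are illustrative assumptions on my part, not an exact picture of OpenAI's current interface, so check the provider's documentation before relying on any of it.

```python
# A rough sketch of how a developer might call a hosted GPT-3-style
# completion API and use the result inside their own app, for example
# to draft a meeting invitation. Endpoint, model name, and parameters
# are illustrative; consult the provider's current documentation.
import os
import requests

API_URL = "https://api.openai.com/v1/completions"   # completion-style endpoint (illustrative)
API_KEY = os.environ["OPENAI_API_KEY"]               # developer's secret key, kept out of the code

def draft_meeting_invite(topic: str, attendees: list[str]) -> str:
    prompt = (
        f"Write a short, friendly invitation for a meeting about {topic} "
        f"addressed to {', '.join(attendees)}."
    )
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "text-davinci-003",  # legacy model name, used here only as a placeholder
            "prompt": prompt,
            "max_tokens": 120,
            "temperature": 0.7,
        },
        timeout=30,
    )
    response.raise_for_status()
    # Completion-style APIs return generated text in a list of "choices".
    return response.json()["choices"][0]["text"].strip()

if __name__ == "__main__":
    print(draft_meeting_invite("Q3 planning", ["Dana", "Priya"]))
```

The takeaway is simply that the language model stays on OpenAI's servers; the developer's app sends text in and gets generated text back, which is what made the API a commercial product rather than a code release.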
635 00:41:06,360 --> 00:41:09,280 Speaker 1: So with DALL-E, you know, if you wanted a style that imitated 636 00:41:09,320 --> 00:41:12,160 Speaker 1: a photograph from the nineteen twenties, it would try to 637 00:41:12,840 --> 00:41:15,439 Speaker 1: create that effect, or if you were to say, 638 00:41:16,160 --> 00:41:20,359 Speaker 1: like, a painting from the Cubist movement, it would 639 00:41:20,360 --> 00:41:25,200 Speaker 1: try to accomplish that. In late twenty twenty-two, OpenAI 640 00:41:25,320 --> 00:41:28,680 Speaker 1: introduced ChatGPT, a chatbot built on top of 641 00:41:28,680 --> 00:41:32,799 Speaker 1: the GPT-3.5 language model. That's the one 642 00:41:32,840 --> 00:41:37,520 Speaker 1: that stirred up conversations around transparency, trusting AI output, and 643 00:41:37,600 --> 00:41:42,120 Speaker 1: worrying about students cheating off an AI assistant. Now 644 00:41:42,680 --> 00:41:46,279 Speaker 1: we've already touched on this in this episode, about, you know, 645 00:41:46,320 --> 00:41:48,319 Speaker 1: a lot of the concern here, and I think a 646 00:41:48,360 --> 00:41:52,360 Speaker 1: great deal of it arises not from ChatGPT's incredible abilities, 647 00:41:52,400 --> 00:41:56,759 Speaker 1: which are genuinely impressive, but rather our human tendency to 648 00:41:56,920 --> 00:42:02,160 Speaker 1: trust automated output implicitly when in fact it's sometimes wrong. 649 00:42:02,560 --> 00:42:06,080 Speaker 1: In fact, as many reports have said, sometimes ChatGPT 650 00:42:06,360 --> 00:42:11,360 Speaker 1: gets things very, very wrong, but it presents it in 651 00:42:11,400 --> 00:42:14,640 Speaker 1: a way that appears to be authoritative and trustworthy. So 652 00:42:14,760 --> 00:42:17,920 Speaker 1: if we do trust the output of such a system 653 00:42:18,000 --> 00:42:21,640 Speaker 1: and then we act on that output, we're falling 654 00:42:21,760 --> 00:42:24,680 Speaker 1: far short of that AI that's supposed to be beneficial 655 00:42:24,680 --> 00:42:28,560 Speaker 1: to humanity, right? OpenAI was built around that. So 656 00:42:28,680 --> 00:42:31,680 Speaker 1: this seems again to be a contradiction of OpenAI's 657 00:42:31,760 --> 00:42:34,680 Speaker 1: goal, that if it has a chatbot that 658 00:42:34,760 --> 00:42:39,960 Speaker 1: occasionally produces incorrect information and then people act on it, 659 00:42:40,440 --> 00:42:44,600 Speaker 1: wouldn't you argue that this AI could be potentially harmful 660 00:42:44,640 --> 00:42:48,839 Speaker 1: to humanity, not beneficial? Now, you could say that it's 661 00:42:48,880 --> 00:42:51,640 Speaker 1: the people who are relying too heavily on ChatGPT 662 00:42:52,480 --> 00:42:55,600 Speaker 1: that are the problem, and that's not really OpenAI's fault. 663 00:42:55,640 --> 00:42:59,239 Speaker 1: They can't control how people use their tools. That, just 664 00:42:59,360 --> 00:43:03,400 Speaker 1: like the Tesla owners, people are not properly making 665 00:43:03,520 --> 00:43:08,200 Speaker 1: use of the technology with enough awareness of that technology's limitations. 666 00:43:08,840 --> 00:43:12,279 Speaker 1: But others might argue that OpenAI hasn't exactly made 667 00:43:12,320 --> 00:43:15,440 Speaker 1: people aware of the limitations at all, at least not 668 00:43:15,520 --> 00:43:18,759 Speaker 1: in a way that's equal to the hype that surrounds 669 00:43:18,800 --> 00:43:23,200 Speaker 1: their various products.
That OpenAI is benefiting from this 670 00:43:23,400 --> 00:43:28,280 Speaker 1: excitement around the undeniably impressive achievements, but that the company 671 00:43:28,360 --> 00:43:30,720 Speaker 1: is failing to live up to this commitment to creating 672 00:43:30,760 --> 00:43:35,919 Speaker 1: beneficial AI because they're not being good stewards of this 673 00:43:36,360 --> 00:43:39,279 Speaker 1: tool and the outcome of people using it. And it 674 00:43:39,400 --> 00:43:42,480 Speaker 1: is a very complicated problem, and AI isn't likely to 675 00:43:42,520 --> 00:43:46,640 Speaker 1: solve this one right away. OpenAI is currently developing 676 00:43:46,719 --> 00:43:50,600 Speaker 1: GPT-4, so that's the next generation of the language 677 00:43:50,640 --> 00:43:54,960 Speaker 1: model it's been developing all these years. CEO Sam Altman 678 00:43:55,040 --> 00:43:57,160 Speaker 1: has already said that people are likely going to be 679 00:43:57,239 --> 00:44:01,120 Speaker 1: disappointed by GPT-4, not because the model won't 680 00:44:01,120 --> 00:44:03,920 Speaker 1: be impressive, I have no doubt it will be, but 681 00:44:04,040 --> 00:44:06,680 Speaker 1: because people have already built up in their minds a 682 00:44:06,760 --> 00:44:10,240 Speaker 1: bar that GPT-4 simply will not be able to reach. 683 00:44:11,040 --> 00:44:14,279 Speaker 1: And while that is a fair observation, I can't help 684 00:44:14,320 --> 00:44:18,200 Speaker 1: but think that OpenAI is at least partly responsible 685 00:44:18,280 --> 00:44:23,239 Speaker 1: for encouraging the fervor that led to this impossibly high bar. 686 00:44:23,480 --> 00:44:26,040 Speaker 1: I don't think people set it all on their own. 687 00:44:26,080 --> 00:44:30,040 Speaker 1: I think OpenAI's own approach has kind of encouraged 688 00:44:30,719 --> 00:44:34,920 Speaker 1: this sort of reaction. I mean, there's already this tendency 689 00:44:35,000 --> 00:44:37,400 Speaker 1: for us to hype stuff when we just get a 690 00:44:37,480 --> 00:44:40,359 Speaker 1: hint of what is possible and we start to extrapolate 691 00:44:40,400 --> 00:44:43,000 Speaker 1: from that. That's true all the time. You can see 692 00:44:43,040 --> 00:44:44,840 Speaker 1: it over and over and over again in lots of 693 00:44:44,880 --> 00:44:48,400 Speaker 1: different technologies throughout the years. But at the same time, 694 00:44:48,440 --> 00:44:52,200 Speaker 1: I feel OpenAI takes a kind of almost coy approach, 695 00:44:53,000 --> 00:44:57,839 Speaker 1: and that helps encourage this behavior rather than discourage it. 696 00:44:58,560 --> 00:45:01,080 Speaker 1: The company is openly pursuing the goal of building the 697 00:45:01,120 --> 00:45:03,520 Speaker 1: first AGI, though as we've seen, it's not 698 00:45:03,640 --> 00:45:07,279 Speaker 1: doing so in quite as transparent a way as the organization 699 00:45:07,320 --> 00:45:10,560 Speaker 1: first set out to follow. But if you're pursuing that goal, 700 00:45:11,400 --> 00:45:14,760 Speaker 1: it means you've got, like, really big ambitions, and that, again, 701 00:45:15,080 --> 00:45:18,239 Speaker 1: I think, helps to fuel the hype cycle. Now, I 702 00:45:18,280 --> 00:45:21,120 Speaker 1: guess I can conclude this episode by just reflecting on 703 00:45:21,160 --> 00:45:24,560 Speaker 1: the fact that OpenAI is a company that Elon 704 00:45:24,800 --> 00:45:32,320 Speaker 1: Musk has criticized for failing to be transparent. That's something, y'all. Now.
705 00:45:32,760 --> 00:45:35,719 Speaker 1: I don't wish to disparage the people who work for 706 00:45:35,760 --> 00:45:39,280 Speaker 1: OpenAI or even the goal of the organization itself. 707 00:45:39,320 --> 00:45:41,239 Speaker 1: I think it's a worthy goal. I think there are 708 00:45:41,280 --> 00:45:43,840 Speaker 1: a lot of people who truly believe in that goal 709 00:45:43,880 --> 00:45:46,880 Speaker 1: who are working for OpenAI. I think the leadership 710 00:45:47,000 --> 00:45:49,960 Speaker 1: believes in the goal and that that's what they're pursuing. 711 00:45:50,239 --> 00:45:53,440 Speaker 1: It's just that the realities of trying to achieve that in 712 00:45:53,480 --> 00:45:55,919 Speaker 1: a world where you need to make money in order 713 00:45:55,960 --> 00:46:01,600 Speaker 1: to fuel that pursuit create complications, and there are no 714 00:46:01,920 --> 00:46:05,160 Speaker 1: perfect solutions unless you just happen to have, you know, 715 00:46:05,239 --> 00:46:09,200 Speaker 1: a bottomless pit of a benefactor who can just 716 00:46:09,280 --> 00:46:14,160 Speaker 1: pour money into the organization and allow it to pursue 717 00:46:14,200 --> 00:46:18,680 Speaker 1: these developments without having to worry about the commercial 718 00:46:18,719 --> 00:46:22,160 Speaker 1: aspect of it. Unless you have that, then you have 719 00:46:22,280 --> 00:46:25,480 Speaker 1: to deal with these real-world complications. And just like 720 00:46:26,040 --> 00:46:29,760 Speaker 1: the autonomous cars that, you know, on the surface should 721 00:46:29,800 --> 00:46:33,520 Speaker 1: be able to maneuver without any driver in the driver's 722 00:46:33,520 --> 00:46:37,680 Speaker 1: seat and do so perfectly safely, we learned that once 723 00:46:37,719 --> 00:46:40,600 Speaker 1: you put it into the real world, there are so 724 00:46:40,640 --> 00:46:44,600 Speaker 1: many other variables and complications at play. It's never as 725 00:46:44,640 --> 00:46:48,960 Speaker 1: simple as you first thought. So I know I've dogged 726 00:46:48,960 --> 00:46:51,200 Speaker 1: on OpenAI a lot. There are a lot of 727 00:46:51,280 --> 00:46:56,040 Speaker 1: really great critical articles about the company. But I do 728 00:46:56,120 --> 00:46:59,000 Speaker 1: believe in the work they're doing. It's just that the way 729 00:46:59,000 --> 00:47:02,480 Speaker 1: they go about it has some elements to it that 730 00:47:02,520 --> 00:47:07,680 Speaker 1: I find troubling. But it's not like I can suggest 731 00:47:07,719 --> 00:47:10,560 Speaker 1: a better approach. I just think that it's important for 732 00:47:10,640 --> 00:47:14,920 Speaker 1: us to pay attention and to criticize when necessary, and 733 00:47:14,960 --> 00:47:20,640 Speaker 1: to ask questions and to hold the organization accountable, because 734 00:47:20,800 --> 00:47:23,840 Speaker 1: it has claimed to be an organization founded with the 735 00:47:23,920 --> 00:47:28,160 Speaker 1: pursuit of developing beneficial AI and doing so in an open, 736 00:47:28,160 --> 00:47:31,360 Speaker 1: transparent way. And if it fails to do that, I 737 00:47:31,400 --> 00:47:33,400 Speaker 1: think we have to call them on it, because otherwise 738 00:47:34,520 --> 00:47:37,239 Speaker 1: what we get may not be that beneficial AI we've 739 00:47:37,280 --> 00:47:40,800 Speaker 1: been hoping for. All right, that's it for this episode.
740 00:47:40,840 --> 00:47:43,840 Speaker 1: Hope you enjoyed it, and if you have suggestions for 741 00:47:43,880 --> 00:47:46,120 Speaker 1: topics I should cover in future episodes of tech Stuff, 742 00:47:46,120 --> 00:47:48,480 Speaker 1: please reach out to me. You can download the i 743 00:47:48,600 --> 00:47:51,880 Speaker 1: heart Radio app for free and navigate over to tech Stuff. 744 00:47:51,920 --> 00:47:53,879 Speaker 1: Just put tech stuff in the search field. That will 745 00:47:53,920 --> 00:47:56,360 Speaker 1: bring you over to our little page on that app, 746 00:47:57,000 --> 00:47:59,960 Speaker 1: and you will find a microphone icon on the tech 747 00:48:00,040 --> 00:48:01,839 Speaker 1: stuff page. If you click on that, you can leave 748 00:48:01,880 --> 00:48:04,520 Speaker 1: a voice message up to thirty seconds in length. Let 749 00:48:04,520 --> 00:48:06,160 Speaker 1: me know what you would like to hear. Or, if 750 00:48:06,160 --> 00:48:08,880 Speaker 1: you prefer, you can head on over to Elon Musk's 751 00:48:08,920 --> 00:48:12,160 Speaker 1: Twitter and you can send me a Twitter message. The 752 00:48:12,160 --> 00:48:15,080 Speaker 1: handle for the show is tech Stuff H S W, 753 00:48:15,800 --> 00:48:24,439 Speaker 1: and I'll talk to you again really soon. Tech 754 00:48:24,520 --> 00:48:27,960 Speaker 1: Stuff is an I Heart Radio production. For more podcasts 755 00:48:27,960 --> 00:48:30,760 Speaker 1: from I Heart Radio, visit the i heart Radio app, 756 00:48:30,880 --> 00:48:34,040 Speaker 1: Apple Podcasts, or wherever you listen to your favorite shows.