Speaker 1: Welcome to TechStuff, a production from iHeartRadio. Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeart Podcasts, and how the tech are you? Folks, we have a very, very special episode of TechStuff, because I have a very, very special guest with me today, Jacob Goldstein. Now, if I were to go and run down his entire resume, it would be an entire episode just by itself. He's got a long and distinguished career in journalism, and among his many accomplishments happens to be the fact that he's the host of a podcast called What's Your Problem, which is a show that really dives into things like engineering and how engineers tackle problems, how do they define them, and then how do they create solutions. So Jacob, welcome to TechStuff. Thank you for being here.
Speaker 2: Hi, Jonathan, thanks so much for having me. I'm delighted to be here.
Speaker 1: Yeah, I'm delighted you're here too. And before we really dive into a full discussion, and we're going to talk a lot about engineering and a lot about AI in particular, because many of your episodes in this last season of your show have been on AI, I want to learn more about you. So tell us a bit about your background and how you came to become a podcaster on What's Your Problem.
Speaker 2: So, before I started What's Your Problem, I was one of the co-hosts of a podcast called Planet Money, which is a show about economics. And before I had that job, I didn't know that much about economics. You know, I was an English major in college. I'd covered healthcare for the Wall Street Journal. And so, getting there and covering economics for a while, to me the big exciting idea at the heart of economics is that the pie can get bigger, right, everybody can be better off. The world is not a zero-sum game, and I think that is a very non-intuitive, big, exciting idea.
And basically, the way we all get better off in the long run is through technology. Right, it's through people figuring out more efficient ways to do things, ways so that, you know, we do the same amount of work basically, but we get more stuff. You get more output for every hour of labor. And that is fundamentally, you know, engineering and technology, as you said. And so I wanted to go deeper on, like, how that actually works. Like, you know, there are people whose job is, I'm going to go to work and, like, figure out a better way to do something. And so that is what I'm trying to do on What's Your Problem. Those are the kind of people I talk to on the show.
Speaker 1: That's awesome. I hear what you're saying, because it resonates a lot with a lot of the stuff we talk about here on TechStuff. To me, one of the key funny components about technology is how everyone anticipates that the next big technology development means their jobs are going to be way less labor intensive, and then often it turns out like, well, sure, each individual task is easier, but now you're doing way more tasks because everyone's more efficient. So if you remember, because I'm not going to put an age on you, Jacob, but I will say that I am of a certain age where I remember the concept of the paperless office, yes, and how we were going to get to this incredibly efficient thing, and that maybe, like, maybe your workday would be reduced to maybe three hours a day. Turns out that was perhaps a bit idealistic on the part of the workers, and not the way it worked out.
Speaker 2: Yes, although I mean there is a lot less paper, to start with, right? Like, I am old enough to remember, like, when I started working, everybody had like a file drawer by their desk and like hanging files with papers in them, so there is less paper.
I mean, the sort of less-work thing is interesting, right, because on the one hand, people are like, oh, I hope technology will mean I have less work. By the same token, people are like, oh, I hope technology doesn't take my job, right? Yes. In fact, the basic mechanism is, like, technology, the happy story anyway, the story that we hope happens, is that technology makes us more productive, not so that we can work less, but so that our output can be greater, right? Like, you know, I did not start working on podcasts and radio in the reel-to-reel tape era, but I know people who did, and like, they talk about how long it took to literally cut tape by hand, and you can cut a lot more tape now that it's not actual tape. Right, that's a productivity gain.
Speaker 1: Yes, totally. It's so funny to kind of see these changes over time and the different perceptions that go into what we thought it was going to be like versus what it actually turns out to be. We're gonna, as I said, talk about AI a lot, and to your point, one of the things that I hear repeated about AI in general, and specifically within the realm of robotics and AI, is that its ideal role is to tackle tasks that fall into the three D's, which are dirty, dangerous, and dull. That these are technologies that are best suited to take on jobs that are perhaps less desirable for humans for various reasons, whether they could potentially cause injury, or they're not very rewarding, that sort of thing. And I really like that concept. The fear everyone has, obviously, is that it's tackling everything. It's indiscriminate; it's not looking at just the three D's, it's looking at every possible option. And you're seeing a lot of discourse, at least I am online, where I see a lot of people saying, why aren't we looking at how to automate C-suite jobs?
Because it seems to me like a lot of the duties that C-suite executives have are ones that would be best suited for some of the AI tools we're talking about. Why are we talking about eliminating these lower-level jobs and not some of these upper-level ones, particularly when you have stories about C-suite executives who perhaps had a less-than-stellar run at the top of the company ladder, like maybe the company didn't perform as well as it should have, and yet they still retire with these massive packages? So it's funny to see how the perceptions of things like automation and AI are shaping social discussions in that way.
Speaker 2: Well, I think certainly. I mean, the rise of large language models, you know, LLMs like ChatGPT, has shifted that conversation some, right. I think if you go back five years, people thought about, you know, automating warehouse jobs. But what I've seen in the last, say, year or so, since ChatGPT, you know, stormed our discourse, is people are talking a lot more about journalists and lawyers being automated away, right, and plausibly to some extent, plausibly at some margin. I mean, the other thing is, like, do you want a robot boss? I haven't heard people talking about automating the C-suite, but like, sure, bad bosses are bad, and overpaying bad bosses is bad. I actually think the role of a good boss, of a good CEO, is largely to be a human being, right? It's not fundamentally about, you know, assessing the data and making the best decision, although obviously that's important. It's to be there to, you know, talk to people, essentially, to be in the room to tell people that things are going to be okay. And that seems like the set of domains that are less likely to be automated, certainly in the short run.
Speaker 1: I think you're absolutely right. I think it's based on, again, perception, right?
A lot of people don't have, like, face time with CEOs, and so their perception is from a distance, and they're looking at the effects that the CEO's decisions are having on a broad level, especially in the wake of something like a round of layoffs, for example. Whereas you and I have had the opportunity to speak face to face, and you find out very quickly these CEOs are human beings, some more so than others. I mean, there are some. There are some CEOs out there who I suspect may be at least part cyborg, but.
Speaker 2: Could be, could happen.
Speaker 1: Yeah, yeah, most of them. I mean, I'm not sure if Elon Musk is even robotic; he might be alien. I'm not entirely certain. But there are a lot of them out there where, when you just have a short conversation, you realize these aren't just talking points. For a lot of these leaders, they sincerely believe their mission statement, or they sincerely believe in the strategies that they're following, and they sincerely feel bad when they have to make decisions that lead to things like layoffs. But when you are at more of a distance, I think it's easier to kind of dehumanize the person. And it's understandable, right? You see those big effects and you just think this; meanwhile, the CEO is potentially making enormous amounts of money. I'm reminded of a CEO I worked for, not a former CEO, he's still a CEO, but my former boss, David Zaslav. And whenever I see any stories about him, I sit there and think, well, I've met the man, I've had conversations with him, I know a little bit more, I feel like I could give a bit more perspective to this. But at the same time, you're not entirely wrong with some of the conclusions you've drawn.
Speaker 2: Yeah, and I mean, you know, to some extent, like, I feel like a lot of the subtext of what you're talking about is inequality.
Speaker 1: Right.
Speaker 2: It's the gap between what CEOs make and whatever the median worker at their company makes, and that indeed ballooned out a lot at the end of the twentieth century and has stayed quite wide, obviously. And to some extent, that is an effect of technology, right, although it's complicated; a lot of it is norms, right. Yeah, that's a pretty subtle one.
Speaker 1: Yeah, that's another thing: as communicators about things that are in the technological space, often we do need to take a step back away from just the technology and acknowledge these other components that impact the entire direction of tech. I mean, I'm sure as someone who has looked into Silicon Valley you have seen how things like social norms and politics and even things like living expenses in San Francisco have a big impact on these sorts of things, and people can get frustrated when I step back and talk about these elements. But my argument is that you can't really have a full understanding of technology unless you also take into account these other things that do have an impact. But one of the things I wanted to ask you about: on What's Your Problem, you are talking with a lot of problem solvers, obviously, and I was curious, now that you've spoken with quite a few people who either come directly from engineering or have kind of an engineering perspective, what's your take on engineers? Because I know that's a general question and not everyone falls into the same bucket, but I have a love for engineers and the way that they approach things.
Speaker 2: Yes, same. And you know, I also love engineers. And as I mentioned, I was an English major. I am not an engineer at all. I'm not even good at fixing things around the house, although I try. But in college, I took one computer science class, my last term of college. It was just the intro class. And on the first day, it was, you know, a big lecture class.
The professor, a computer scientist, was talking about his grading system, and it was this weird thing, like a check and a check plus. And he said the top grade is a plus plus, and that is for code that makes me weep, like, because it's so beautiful, you know. And that was like a revelation to me, because, you know, as an English major, as a non-engineer, I always thought of engineering as like, oh, a thing works or it doesn't work. The building falls down or it doesn't fall down. But this idea that there is elegance and beauty in the construction of the thing itself was really exciting to me and remains really exciting to me. And engineers really are like that, you know, like they love building things and they find beauty in an elegant solution, the way other people find beauty in a song or a poem or a painting.
Speaker 1: I love that. I would say there's like a spectrum in engineering as well, where you have sort of the artists, who are the ones who are very carefully creating and refining their code, and then maybe you have, on the punk rock side, the hackers, who didn't necessarily build the thing, but they really want to know how the thing works, and they will take the thing down to the very base level of the structure, and then they'll say, what if I rebuild it so it does something else? Like, I just think that entire culture, from the artists to the punk rockers, who, hey, they're artists. I'm a punk rock kind of fan myself. But I always find that to be a wonderful way to have a conversation, to talk about people and about their approach to this sort of stuff.
And I also always say, whenever I talk with engineers, I come away with the feeling that they view the world, you can think of it, as either a set of problems or a set of challenges, and they're constantly thinking about solutions. Which is nice, because I am unfortunately one of those people who's far more likely to point out a problem but not have a solution for you, right. So to talk to someone who's already thinking ahead about how to solve the problem, not just that there is a problem, I always find that very inspiring.
Speaker 2: Yeah, it's nice. I guess I also have tended to be problem-focused, you know. I sort of came up in my career as a journalist, which is essentially all about pointing out problems, right, to a significant degree. If you read the paper, basically what's going on in many, many stories is, here is a thing that is bad. And so talking to people who are trying to make things better or fix things is great. And you know, to be clear, I don't want to be too Pollyannaish here. Like, there are plenty of engineers who build things that on net don't help the world, right? Building new things is not always helpful to the world, and there are certainly engineers who become enamored of just building the thing and don't think about what it might mean. And frankly, you know, in choosing who to talk to for the show, I do try and talk to people who I think are some combination of, well, cognizant of what they're building and actually trying to do a good thing, and, you know, aware of the fact that there might be unintended bad consequences of the thing that they're building. Maybe I don't always succeed, but it is a useful frame.
Speaker 1: Totally. I also tend to think about how, when engineers build things that make sense to them, assuming they're building something that ultimately is supposed to be used by the general public, the great ones will take into account how a quote-unquote normal person would approach the whatever-it-is.
Speaker 2: Like.
Speaker 1: I'm thinking of user interfaces in particular, and making sure that the user interface is going to make sense to a normie as opposed to an engineer. And then there are other engineers, or in fact entire companies, where they will build things that work great if you're an engineer; they're fantastic if you're an engineer. If you're not an engineer, it may require a bit more work on your part. I'm looking specifically at Google and the Android operating system, because I'm an Android user, but at the same time, I fully recognize that iOS is an operating system that is so intuitive you can literally hand an iOS device to a child and they will have it figured out in no time. You can hand an Android device to someone and they will spend a lot of time asking questions about how to do things and how to access things. And it's not that the Android operating system is bad, it's not that it's worse than iOS. If you happen to be an engineer, Android is awesome, and if you're not, it's still awesome, but you have to put in work to get to realize that. And it really differentiates the two, right, because Apple has always had a focus on how can we make this into a product that people realize they need, even if they never had that need before. And Google's like, how can we make this so that it's really powerful and that it does what we want it to do, but it may require a little bit of work on the user's part in order to have it work out. So that's sort of also a fascinating thing about engineering that I've really loved to look at and to talk about.
I don't necessarily think one is better than the other, apart from the fact that one is just much easier for the general public to kind of glom onto. As much as I love Android, I would never say that it's more user-friendly than iOS.
Speaker 2: Yeah, I mean, there's a few ways of thinking about that, right. Like, somebody a long time ago told me there's a phrase people use sometimes, the user is never wrong, right, which is an interesting framework. And I think they told it to me to be nice, because I was, like, trying to do something in a radio studio and I couldn't figure it out, and it was an engineer, who was like a thoughtful guy, who said, no, no, you're not bad at this, it's just not set up well. I mean, the other way of thinking about it, in terms of the Android versus iOS question, is as an optimization problem. Right, if you're an engineer, then the question is, well, are we optimizing for sort of the mobile operating system that can do the most things or be the most flexible, or the mobile operating system that is just, like, bulletproof, you can give it to anybody in any language and they will immediately understand what it is. And you get different sorts of solutions depending on what you're optimizing for.
Speaker 1: Yeah, that's true. Like, you've identified whatever your goal is, so obviously the execution is going to be different. Well, I'm glad that you weren't told that the problem was between keyboard and chair, which is the other classic answer.
Speaker 2: Wait a minute, that's me.
Speaker 1: Yeah, I've received that particular one more than once in my lifetime. Well, now you have a reply. Now you have to reply, yes, yeah, yes, that's true, the user is never wrong. We've got a lot more to talk about, Jacob and I, but before we get to the rest of our conversation, we need to take a quick break to thank our sponsors.
Your show covers all realms of technology, not just AI, but because this past year has been undeniably AI-centric when it comes to tech news, clearly a lot of your episodes do tackle AI. And as someone else who tries to communicate technology in a way that's really accessible and understandable, one of the things I frequently run into is that talking about AI in a responsible way is in itself challenging. But I'm curious to hear what your take is when it comes time for you to communicate about AI. How do you perceive that and how do you approach it?
Speaker 2: A thing in general that I try and do on my show, and certainly with respect to AI, is to go narrow, essentially, right. Like, I'm not going on my show and saying here is what AI is and here's what's going to happen. I'm talking to people who are typically building a company to do a specific thing with AI. Not quite always, but usually, right. And so that to me is a helpful way to, well, (a) say something new, because so many people are making so many broad statements about AI, and (b) steer clear of, you know, overgeneralization. How about you, I mean, what's your take?
Speaker 1: So my concern is I always want to avoid being reductive, because AI is such a huge discipline and it involves so many different aspects that it is very easy to fall into that trap. Because, I mean, clearly we see this in mainstream media all the time. Not that I blame them, but they're just, they're taking some shortcuts where they'll use the term artificial intelligence in a way that almost implies that what they're talking about is the end-all, be-all of artificial intelligence. And usually they're talking about generative AI. In the last year, I would say, like, that's been the biggest topic in artificial intelligence, but it's one topic, and AI actually covers a lot more than that, and it falls into so many other buckets too. Like, robotics obviously has a lot to do with AI.
Not that it always does; you can have a fully remote-controlled robot, but often there are some AI components there, things like assisted vision, brain-computer interfaces. I mean, there are so many different things that don't have anything to do with generative AI that still at least touch on AI.
Speaker 2: Doing a Google search, yeah, like, exactly.
Speaker 1: Yeah, anything that you're getting into, like automation, I mean, you can get a lot.
Speaker 2: Recommending a show to you. Yeah, a blind spot monitoring system saying there's a car next to you. Like, these are all...
Speaker 1: Yeah, they're all different aspects of AI. And like, you could argue, well, sometimes it gets a little fuzzy, and I'm like, well, so is the word intelligence. Like, intelligence itself is a fuzzy term. I've thought that we would be better off if the term AI did not exist.
Speaker 2: Me too. Like, I think it's an unfortunate choice of words that is unhelpful.
Speaker 1: Ultimately, it's, I think, largely because everyone starts to jump to... they jump to science fiction, they jump to Skynet, they jump to Terminator, they jump to this idea of something that appears to think the way humans do. And of course we don't even know, if we ever reach strong AI or general AI, however you want to define it, we don't know if it's going to quote-unquote think like a human, or even think at all. It may just be indistinguishable to us from the way humans think. And I think Turing would argue, well, that's good enough. It doesn't matter. If it's indistinguishable, then it might as well be.
Speaker 2: You quoted somebody on your show a while back as saying something like, intelligence is whatever machines can't do yet, which I thought was pretty good.
Speaker 1: Yeah. I think that it's very similar to how a lot of philosophers define consciousness, right? They say, like, ugh, we don't...
Speaker 2: Know what consciousness is.
Speaker 1: All we do...
All we do is define what consciousness isn't. We haven't gotten to a point where we can say what consciousness is. We chip away, so what we're doing is, we've got the marble slab, yeah, and we're chipping at it, but we haven't yet seen the statue that's living under the slab. We're still just chipping.
Speaker 2: I was already worried about intelligence. As you say consciousness, I'm like, I don't even know what to do with that one.
Speaker 1: Oh no, well, I mean, it often goes hand in hand with AI, right, because people immediately assume that intelligence and self-awareness go hand in hand with one another, and maybe it will. We don't know. That's the point. We don't know. But long story short, to answer the question of how I approach this, I usually start from the broad foundation and then I go narrow. So I start with saying, first we need to acknowledge that artificial intelligence is a very, very big field, and that this is one aspect of AI, and that we're not going to talk about the other aspects of AI, but we need to remember they exist, and that while the thing we're talking about is important, and while it has its own set of challenges and potential, you know, rewards and risks and all the things that go with it, it's one part. It's like, you wouldn't hold up a remote control and say this is all of technology. Right, this is one thing that's a technological gadget, but it doesn't represent all of technology.
Speaker 2: First of all, you'd have to find it, right? That's...
Speaker 1: True. Which, you know, you got to have some sort of method to figure out where it is. This, by the way, is why 3D television never became a thing. Who wants to look for glasses so that they can watch True Detective season four? Not me. So yeah, that's kind of my approach. And so I don't think that our approaches are that different.
I think that we're pretty similar. And it's that I think we both feel there's a responsibility to make certain that we never overgeneralize or are reductive, because that feeds into a narrative that I think actually contributes to the old FUD, the fear, uncertainty, and doubt. And while there are certainly things to be concerned about and to be aware of, we don't want to rush into anything, you know, with a poor understanding of the situation. I think that's true whether you're, you know, really enthusiastic and excited about AI; I think it's true if you are really concerned about AI. I think, you know, taking critical thinking and a really methodical approach is absolutely key if you want to avoid pitfalls.
Speaker 2: Sure, it seems hard to argue against critical thinking and a methodical approach, right? Who's going to take the other side of that one?
Speaker 1: I mean, Mark.
Speaker 2: Fair enough, everybody, yeah, fair enough.
Speaker 1: Like Sam Altman, maybe?
Speaker 2: I mean, so it is interesting to think about Sam Altman, the head of OpenAI. I mean, one of the really interesting things to me about AI, and that seems different in particular when, you know, we're talking about engineers, like, I feel like the extent to which the engineers working on AI are worried about AI is really interesting and different, right? I feel like the traditional kind of engineer stance is like, this thing is cool, let's build it, right? Again, that's obviously reductive and somewhat unfair, but whatever. Whereas with AI, to some significant degree, many of the people who are most worried about it are the people who understand it the best. And you know, I've heard people argue, like, oh, that's just marketing, and that doesn't seem true to me. First of all, why would you market a thing by saying we should be worried about it?
And second of all, if you just look: like, OpenAI was started as a nonprofit, and then they, you know, needed more money, so they became this weird capped-profit model, and then people left OpenAI to start Anthropic, which is another one of the big ones, because they thought OpenAI wasn't worried enough. And then you had, you know, people, like Elon Musk, calling for a six-month pause on AI development. And so I do think that people who know a lot about AI are in fact really worried about it, which is just interesting on its face, and different than the way technology often works.
Speaker 1: Yeah, there are a lot of conspiracy theories, or fringe theories, that pop up around this, by the way. Like, you have the ones who say, well, they say they're worried about AI because what they're trying to do is shape the discussions around regulations so that their own personal organization ends up benefiting from those regulations while those same regulations slow down smaller companies that are in the space. You had people saying, well, Elon Musk, yes, he was arguing that there needs to be a pause, but it's because he was launching his own AI company and he wanted a chance to be able to catch up. Like, you've got a lot of other fringe theories out there, and I understand that there may be, you know, some credibility to some of those, who knows. But I think when it gets to a point where the board of directors of OpenAI get together and decide, seemingly spontaneously, that they're going to get rid of the CEO and co-founder of the company, and then do so, I think that speaks to a genuine and sincere concern that perhaps the organization is moving in a direction that they feel is fundamentally counter to what they had intended.
And of course we know they subsequently had to reverse that decision and step down from the board of directors, because the overwhelming support within the organization was for that co-founder and CEO, who was seemingly embracing this new approach toward developing AI that was a departure from the original organization's intent. But to your point, the fact that the board of directors was willing to make such an extreme move, even though it was on a Friday at the end of a news cycle, even that they were willing to do that knowing that it would lead to them having to leave the organization, I think that speaks, to me, of a genuine concern. You don't go and remove a co-founder and CEO for a small reason.
Speaker 2: Yeah, I mean, sure, I'm sure that some amount of the public worrying over AI by people in the field is some kind of self-interested behavior. But I think overall, there are clearly a lot of people who know a lot, who are even building these things, who are genuinely worried. Like, that seems obviously true.
Speaker 1: Yeah. And honestly, like, anytime someone's building a technology and they're bringing concerns up, to me that's a good thing. And it doesn't necessarily mean that the technology is ultimately harmful or not beneficial. But you know, I think it's a responsible person who does ask those questions. For one thing, it really can save you a lot of time and heartache further down the line if you're tackling these kinds of things before they've escalated to a point where they're actively causing catastrophe. So I like seeing that, whether it ends up being merited or not. Well, that's just sort of a curse we have to bear, right? If it turns out that it was never merited, we won't know. And if it turns out it was merited, we still don't know, because they asked the questions ahead of time and fixed the problems before they became problems.
And it's only if we take the other path that we find out for sure, like, whoa, we should have thought of this before we did it.
Speaker 2: I mean, you know, tools are complicated, right. People come up with new tools, and then other people use those tools in various ways, some of which enhance human well-being and some of which cause new miseries. And plainly AI will do both.
Speaker 1: Yes, and so it really becomes important that we are really good stewards of the technology and we're paying attention, that we're calling things out and we're addressing them as they come up. I don't think that we're that close yet to the doomsday problem of the superhumanly intelligent AI that's stuck in a box and then is convincing people to let it out of the box. I don't think we're close to that yet. I mean, even quote-unquote dumb AI can do terrible things if it's poorly implemented, right? We've seen that. We've seen accidents with autonomous cars, which show that AI can make bad choices sometimes, because perhaps it encounters a scenario that no one anticipated, because, as it turns out, reality has far more variables than we can account for when we're designing things, and then something terrible happens. That doesn't mean that the technology itself is deeply flawed or bad, but it does highlight that we constantly have to be asking how can we make it better, and how can we make it safer, and how can we make it so that it's actually benefiting us and not just causing, you know, maybe a little bit of benefit but a larger amount of harm.
Speaker 2: Yeah. I mean, autonomous cars, which you mentioned, are an interesting one, right, because plainly there have been, you know, tragic crashes by autonomous cars. One question there is, what are we benchmarking them against?
Right, Like, there are tragic crashes with 586 00:31:20,520 --> 00:31:24,520 Speaker 2: non-autonomous cars every hour of every day, and so 587 00:31:24,760 --> 00:31:29,400 Speaker 2: in a sense, if people were just mathematically rational optimizers, 588 00:31:29,440 --> 00:31:32,600 Speaker 2: we would all say, okay, well, let's see if, you know, 589 00:31:32,840 --> 00:31:37,120 Speaker 2: over a million hours of driving, autonomous cars are safer 590 00:31:37,240 --> 00:31:41,200 Speaker 2: or less safe than human drivers. That's clearly not what's happening. 591 00:31:41,240 --> 00:31:45,400 Speaker 2: People clearly favor human drivers for some complicated set of 592 00:31:45,480 --> 00:31:50,960 Speaker 2: human reasons. And we're not obviously benchmarking autonomous cars against humans, who, 593 00:31:51,000 --> 00:31:53,480 Speaker 2: by the way, are terrible drivers. Like one thing about 594 00:31:53,560 --> 00:31:55,720 Speaker 2: human beings. We're really bad at driving. 595 00:31:56,240 --> 00:31:58,840 Speaker 1: Yeah, if you look at the stats in the United 596 00:31:58,880 --> 00:32:02,400 Speaker 1: States for the number of fatalities and injuries that result 597 00:32:02,520 --> 00:32:07,200 Speaker 1: from car accidents that are just human-error-caused car accidents, 598 00:32:07,400 --> 00:32:11,320 Speaker 1: it's a staggering number. And when you think how much 599 00:32:11,400 --> 00:32:17,120 Speaker 1: that could be reduced through autonomous cars, and you imagine, well, 600 00:32:17,360 --> 00:32:20,200 Speaker 1: think of the ripple effect. It's not just the idea 601 00:32:20,280 --> 00:32:23,440 Speaker 1: that those people who died would still be alive, which 602 00:32:23,800 --> 00:32:26,800 Speaker 1: on its own is already a phenomenal thing to talk about. 603 00:32:27,080 --> 00:32:30,840 Speaker 1: That means that the impact on those people's friends and 604 00:32:30,920 --> 00:32:33,680 Speaker 1: families would not have happened. It means that the 605 00:32:33,720 --> 00:32:37,120 Speaker 1: impact on whatever their place of employment was would 606 00:32:37,120 --> 00:32:40,160 Speaker 1: not have happened. They would be contributing members of society. 607 00:32:40,280 --> 00:32:43,080 Speaker 1: That would be a phenomenal change there. So when you 608 00:32:43,120 --> 00:32:47,360 Speaker 1: start thinking about that, you realize the overall benefit is 609 00:32:47,440 --> 00:32:51,080 Speaker 1: so huge that it only makes sense to really pursue 610 00:32:51,840 --> 00:32:56,040 Speaker 1: autonomous vehicles, as long as the data does show 611 00:32:56,080 --> 00:33:00,000 Speaker 1: that, in fact, they are better drivers per million miles 612 00:33:00,400 --> 00:33:03,360 Speaker 1: than humans are. And that to me is something I 613 00:33:03,400 --> 00:33:05,680 Speaker 1: try and keep in mind. You have to balance it out. 614 00:33:05,760 --> 00:33:08,200 Speaker 1: I think it's the same thing as people who really 615 00:33:08,240 --> 00:33:11,240 Speaker 1: flip out when they go on a flight. They are 616 00:33:11,240 --> 00:33:14,520 Speaker 1: not directly in control of the plane, typically. I mean, 617 00:33:14,520 --> 00:33:16,280 Speaker 1: if they're flipping out while they're the pilot, that's a 618 00:33:16,320 --> 00:33:19,120 Speaker 1: whole different issue.
But if you're going on a flight 619 00:33:19,160 --> 00:33:21,720 Speaker 1: and you flip out because you lack a sense 620 00:33:21,720 --> 00:33:24,040 Speaker 1: of control, I feel like that's a very similar thing 621 00:33:24,120 --> 00:33:26,640 Speaker 1: to how people feel when they're thinking about autonomous cars. 622 00:33:26,680 --> 00:33:30,000 Speaker 1: It's that somehow the fact that someone's not in control 623 00:33:30,520 --> 00:33:34,680 Speaker 1: brings up something very scary to a lot of people. 624 00:33:34,920 --> 00:33:38,600 Speaker 1: It also raises other questions obviously, like accountability. Who do 625 00:33:38,640 --> 00:33:40,680 Speaker 1: you hold accountable in these cases? I mean, there are 626 00:33:40,680 --> 00:33:43,440 Speaker 1: a lot of questions that as a society we have 627 00:33:43,480 --> 00:33:46,720 Speaker 1: to solve. It's not just the technology. But yeah, I 628 00:33:46,760 --> 00:33:49,520 Speaker 1: agree with you that that gets complicated because it involves 629 00:33:49,560 --> 00:33:51,880 Speaker 1: a lot of human feelings. And once you get to 630 00:33:51,960 --> 00:33:56,240 Speaker 1: human feelings, the whole data and stats and everything kind 631 00:33:56,240 --> 00:34:00,440 Speaker 1: of falls away. It's hard to convince someone who has 632 00:34:00,480 --> 00:34:06,240 Speaker 1: a deep-seated distrust to change their mind just by 633 00:34:06,280 --> 00:34:09,239 Speaker 1: showing them data, because they're always going to think of 634 00:34:09,320 --> 00:34:13,239 Speaker 1: the things that fall outside the norm as being more 635 00:34:13,280 --> 00:34:16,479 Speaker 1: important than the norm. Right, So if the accident rate 636 00:34:16,560 --> 00:34:20,120 Speaker 1: per million miles is, let's say, one tenth of what 637 00:34:20,239 --> 00:34:22,479 Speaker 1: it would be for humans, they would still be looking 638 00:34:22,520 --> 00:34:25,439 Speaker 1: at that tenth and not the nine tenths. Right. 639 00:34:25,960 --> 00:34:30,520 Speaker 2: People don't think statistically, right? Yeah, clearly, Like in general, 640 00:34:30,560 --> 00:34:34,680 Speaker 2: statistics don't convince people of the way the world works. 641 00:34:34,760 --> 00:34:38,600 Speaker 2: I do think, I mean I would have thought autonomous 642 00:34:38,680 --> 00:34:41,720 Speaker 2: vehicles would have developed faster, right. They are the classic 643 00:34:41,760 --> 00:34:45,080 Speaker 2: thing that's been five years away for fifteen years. Yeah, 644 00:34:45,120 --> 00:34:47,120 Speaker 2: and they still feel five years away. Maybe they feel 645 00:34:47,160 --> 00:34:49,840 Speaker 2: a little farther, right, like five years ago they really 646 00:34:49,880 --> 00:34:51,640 Speaker 2: felt five years away. It's like, okay, but this time 647 00:34:51,680 --> 00:34:53,839 Speaker 2: we mean it. Look, we've got these, you know, things 648 00:34:53,880 --> 00:34:56,719 Speaker 2: driving around San Francisco. But I do think, like, 649 00:34:56,800 --> 00:34:59,120 Speaker 2: that one, which is AI by the way, right, it's 650 00:34:59,160 --> 00:35:04,800 Speaker 2: basically computer vision that is essential to autonomous cars. Uh, 651 00:35:05,080 --> 00:35:07,480 Speaker 2: I feel like that one's gonna happen, don't you?
652 00:35:08,200 --> 00:35:10,880 Speaker 2: And yes, people will be worried about it, but it's the 653 00:35:11,000 --> 00:35:13,839 Speaker 2: kind of thing like you know, you drive, you ride 654 00:35:13,880 --> 00:35:15,680 Speaker 2: in like a driverless train when you go to the 655 00:35:15,719 --> 00:35:18,520 Speaker 2: airport and you take the train from Terminal A to 656 00:35:18,560 --> 00:35:22,920 Speaker 2: Terminal C or whatever. And yes, obviously driving is more complicated, 657 00:35:22,920 --> 00:35:24,560 Speaker 2: and obviously we're used to driving the car and not 658 00:35:24,640 --> 00:35:27,000 Speaker 2: driving the train. It's not exactly the same, but like, 659 00:35:27,120 --> 00:35:30,719 Speaker 2: you get used to it. People just get used to things, right, 660 00:35:30,880 --> 00:35:33,879 Speaker 2: Like people didn't used to all walk around looking at 661 00:35:33,880 --> 00:35:36,440 Speaker 2: their phones all the time, and now they do. And 662 00:35:37,040 --> 00:35:38,839 Speaker 2: I'm old enough that it still seems a little weird 663 00:35:38,840 --> 00:35:41,359 Speaker 2: to me. But I'm the weirdo for thinking it's weird, right, 664 00:35:41,400 --> 00:35:45,799 Speaker 2: And I think that's gonna happen with driverless cars. 665 00:35:45,600 --> 00:35:48,160 Speaker 1: Jacob, you just called me a weirdo, because I 666 00:35:48,760 --> 00:35:51,680 Speaker 1: do that too. Once in a while, I'll take my 667 00:35:51,719 --> 00:35:54,400 Speaker 1: smartphone out and I'll just stop for a second and 668 00:35:54,440 --> 00:35:58,279 Speaker 1: think I have a computer in my pocket. When I 669 00:35:58,400 --> 00:36:01,840 Speaker 1: was a kid, a computer, or my Apple IIe, 670 00:36:02,280 --> 00:36:06,319 Speaker 1: was a fraction of what I'm holding in my 671 00:36:06,480 --> 00:36:09,360 Speaker 1: hand right now. I also have a device where I 672 00:36:09,360 --> 00:36:13,680 Speaker 1: could contact pretty much anyone I know. It's just, 673 00:36:13,719 --> 00:36:16,400 Speaker 1: like, it'll hit me for a second, 674 00:36:16,440 --> 00:36:18,880 Speaker 1: in my most attentive moments. 675 00:36:18,560 --> 00:36:22,200 Speaker 2: And then, maybe I should just check Twitter real quick. Like, yeah, 676 00:36:22,239 --> 00:36:25,160 Speaker 2: it's, I mean, it's another tool that has 677 00:36:25,200 --> 00:36:28,200 Speaker 2: like a complicated set of effects, positive and negative. 678 00:36:28,440 --> 00:36:31,120 Speaker 1: But you know, you had said, like, do you 679 00:36:31,160 --> 00:36:33,160 Speaker 1: think that autonomous cars are still going to be a thing? 680 00:36:33,200 --> 00:36:35,480 Speaker 1: I absolutely do think they're going to be a thing. 681 00:36:35,560 --> 00:36:38,279 Speaker 1: I think there are companies out there that are 682 00:36:38,360 --> 00:36:43,600 Speaker 1: so invested in it that it's going to happen. The 683 00:36:43,600 --> 00:36:48,239 Speaker 1: timeline is really interesting. I would actually argue that some 684 00:36:48,440 --> 00:36:52,319 Speaker 1: other issues in AI that are not related to autonomous 685 00:36:52,360 --> 00:36:57,239 Speaker 1: cars could potentially keep that five years out going for 686 00:36:57,280 --> 00:37:01,480 Speaker 1: a while.
Because people have this concern about AI, I 687 00:37:01,480 --> 00:37:04,800 Speaker 1: think they port that concern over to pretty much every 688 00:37:04,960 --> 00:37:08,840 Speaker 1: kind of AI, whether it's warranted or not. Because of the 689 00:37:09,200 --> 00:37:13,239 Speaker 1: scary risks of generative AI, this idea that it's going 690 00:37:13,280 --> 00:37:15,560 Speaker 1: to displace people out of their jobs and such, which 691 00:37:15,600 --> 00:37:19,440 Speaker 1: it very well may do, they then kind of say, like, well, 692 00:37:20,040 --> 00:37:24,800 Speaker 1: that application of AI really seems very harmful 693 00:37:24,840 --> 00:37:28,560 Speaker 1: to me. Then I think there's a tendency to kind 694 00:37:28,600 --> 00:37:30,520 Speaker 1: of apply that, even if it's not the same sort 695 00:37:30,560 --> 00:37:34,239 Speaker 1: of artificial intelligence, to other implementations. And maybe I'm being 696 00:37:34,280 --> 00:37:38,200 Speaker 1: a little too cynical with that, but because I've seen 697 00:37:38,400 --> 00:37:42,280 Speaker 1: so much reporting go on where there isn't any effort 698 00:37:42,400 --> 00:37:46,200 Speaker 1: made to distinguish between different types of artificial intelligence and 699 00:37:46,239 --> 00:37:49,520 Speaker 1: what their purposes are and what their limitations are, it 700 00:37:49,920 --> 00:37:53,600 Speaker 1: feels like we are conditioning the public, and by we, 701 00:37:53,719 --> 00:37:56,600 Speaker 1: I mean, like, mass media, conditioning the public to think 702 00:37:56,600 --> 00:37:59,800 Speaker 1: of AI as all existing in this one single bucket. 703 00:38:00,360 --> 00:38:02,279 Speaker 2: I feel like you consume a lot of really bad 704 00:38:02,400 --> 00:38:04,240 Speaker 2: media based on what you've been saying. 705 00:38:05,000 --> 00:38:08,680 Speaker 1: I mean, I'm reading articles all the time, and it's 706 00:38:08,719 --> 00:38:10,919 Speaker 1: not that they're written poorly or that the people who 707 00:38:10,960 --> 00:38:14,839 Speaker 1: write them are bad writers, but they are taking shortcuts, 708 00:38:15,680 --> 00:38:18,319 Speaker 1: there's no getting around it, and those shortcuts I think 709 00:38:18,360 --> 00:38:23,160 Speaker 1: are ultimately harmful. But then I also understand, especially if 710 00:38:23,160 --> 00:38:26,000 Speaker 1: you're assigned to write a certain number of articles per week, 711 00:38:26,040 --> 00:38:27,759 Speaker 1: you're probably not going to take the time to sit 712 00:38:27,800 --> 00:38:30,960 Speaker 1: there and explain the intricacies of how this is different 713 00:38:31,000 --> 00:38:35,120 Speaker 1: from every other implementation of artificial intelligence. But I certainly 714 00:38:35,120 --> 00:38:37,200 Speaker 1: can take the time on my show, so I do. 715 00:38:39,840 --> 00:38:42,560 Speaker 1: Jacob Goldstein of What's Your Problem has a lot more 716 00:38:42,600 --> 00:38:46,320 Speaker 1: to say about tech and engineering and AI. But before 717 00:38:46,360 --> 00:38:59,239 Speaker 1: we jump into that, let's take another quick break. Let's 718 00:38:59,280 --> 00:39:01,040 Speaker 1: talk a little bit about some of the episodes 719 00:39:01,080 --> 00:39:03,279 Speaker 1: you've done on What's Your Problem.
Are there 720 00:39:03,280 --> 00:39:06,360 Speaker 1: any that kind of stand out as, like, a particularly 721 00:39:07,480 --> 00:39:13,480 Speaker 1: fun or informative conversation, perhaps opening your eyes to something 722 00:39:13,520 --> 00:39:14,920 Speaker 1: that you hadn't considered before? 723 00:39:15,520 --> 00:39:18,520 Speaker 2: Yeah, yeah, a lot. Actually. You know, when you told 724 00:39:18,560 --> 00:39:21,840 Speaker 2: me you wanted to talk about the AI shows that 725 00:39:21,880 --> 00:39:23,960 Speaker 2: I've done, the interviews that I've done, I actually went 726 00:39:24,080 --> 00:39:27,080 Speaker 2: back through the back catalog and you know, we listened 727 00:39:27,120 --> 00:39:29,640 Speaker 2: to some shows and looked, and there really are a 728 00:39:29,680 --> 00:39:31,719 Speaker 2: lot of them, as you said, in sort of some 729 00:39:31,760 --> 00:39:35,200 Speaker 2: different domains. Like I've done a lot on AI and health, 730 00:39:35,480 --> 00:39:37,160 Speaker 2: which is really interesting to me. I mean, one of 731 00:39:37,239 --> 00:39:40,200 Speaker 2: the things that I try and find are places where 732 00:39:40,239 --> 00:39:43,319 Speaker 2: it's like, oh, there's actually real stakes here, right. 733 00:39:43,360 --> 00:39:47,040 Speaker 2: It's not just like, oh, making some kind of company 734 00:39:47,080 --> 00:39:49,279 Speaker 2: I don't care about ten percent more profitable or whatever, 735 00:39:49,320 --> 00:39:51,279 Speaker 2: which fine, like it's fine for people to do that, 736 00:39:51,320 --> 00:39:53,359 Speaker 2: it's just not that interesting to me. Whereas with health, 737 00:39:53,400 --> 00:39:56,080 Speaker 2: it's like, oh, if you can make it less likely 738 00:39:56,160 --> 00:39:59,799 Speaker 2: for me and the people I love to die, I'm interested. 739 00:40:01,160 --> 00:40:05,160 Speaker 2: One I just did recently: I interviewed this woman Suchi Saria, 740 00:40:05,160 --> 00:40:07,239 Speaker 2: who's a professor at Johns Hopkins, and she also 741 00:40:07,239 --> 00:40:10,880 Speaker 2: has this company called Bayesian Health, and her story is 742 00:40:10,920 --> 00:40:13,759 Speaker 2: really interesting. She started out as a grad student. She 743 00:40:13,840 --> 00:40:16,200 Speaker 2: was interested in AI and robots, and as she told 744 00:40:16,239 --> 00:40:17,400 Speaker 2: me about it, she's like, you know, I was like 745 00:40:17,480 --> 00:40:19,440 Speaker 2: trying to figure out how to make a robot juggle 746 00:40:19,560 --> 00:40:21,799 Speaker 2: or whatever, just because it was fun. And she had 747 00:40:21,800 --> 00:40:24,279 Speaker 2: this friend who was a doctor. She was a grad 748 00:40:24,320 --> 00:40:26,680 Speaker 2: student at Stanford and her friend was a doctor at 749 00:40:26,719 --> 00:40:30,840 Speaker 2: Stanford Hospital who was taking care of premature babies 750 00:40:30,880 --> 00:40:34,240 Speaker 2: in the neonatal intensive care unit. And this was about 751 00:40:34,239 --> 00:40:38,759 Speaker 2: twelve years ago, and at this time hospitals were just 752 00:40:38,880 --> 00:40:42,080 Speaker 2: starting to use electronic health records, which is kind of 753 00:40:42,120 --> 00:40:44,000 Speaker 2: amazing that it was that late. Like we're talking like, 754 00:40:44,040 --> 00:40:45,759 Speaker 2: you know, I don't know, twenty twelve or something.
And 755 00:40:45,880 --> 00:40:47,880 Speaker 2: one of the really interesting things to me 756 00:40:48,000 --> 00:40:51,080 Speaker 2: about healthcare is in some ways it's super high tech, 757 00:40:51,120 --> 00:40:53,759 Speaker 2: you know, like these crazy CT scanners and like, you know, 758 00:40:53,800 --> 00:40:56,879 Speaker 2: everybody's got like bionic knees and amazing stuff. But when 759 00:40:56,880 --> 00:40:59,400 Speaker 2: you get to actual like care at the bedside, like 760 00:40:59,560 --> 00:41:03,399 Speaker 2: doctors treating patients in the hospital, it has remained rather 761 00:41:03,480 --> 00:41:06,600 Speaker 2: old fashioned in many ways. Right, you know, twelve years 762 00:41:06,640 --> 00:41:10,480 Speaker 2: ago it was still paper charts. Today, it's still doctors relying, 763 00:41:10,520 --> 00:41:12,720 Speaker 2: you know, to a significant degree on evidence, but also 764 00:41:13,040 --> 00:41:17,720 Speaker 2: to a significant degree on essentially intuition. And so this computer scientist, 765 00:41:17,719 --> 00:41:20,560 Speaker 2: Suchi Saria, basically decides, oh, I'm going to 766 00:41:20,600 --> 00:41:23,720 Speaker 2: try and figure out how to use AI to make 767 00:41:24,000 --> 00:41:27,880 Speaker 2: patient care in hospitals better. Like that's basically her big project. 768 00:41:27,960 --> 00:41:32,359 Speaker 2: And she starts doing it with these premature babies twelve 769 00:41:32,400 --> 00:41:35,800 Speaker 2: years ago and in fact figures out that by using 770 00:41:35,840 --> 00:41:38,840 Speaker 2: this data that's now being captured in the electronic health record, 771 00:41:39,120 --> 00:41:42,640 Speaker 2: she can build an AI model that can essentially better 772 00:41:42,719 --> 00:41:47,279 Speaker 2: predict outcomes for these premature babies than the standard of care. 773 00:41:48,000 --> 00:41:51,880 Speaker 2: But it's so early that it doesn't really go anywhere, right, Like, 774 00:41:51,920 --> 00:41:55,600 Speaker 2: hospitals are just starting to use electronic health records, and 775 00:41:56,080 --> 00:41:58,960 Speaker 2: a lot of doctors don't want to hear from some 776 00:41:59,200 --> 00:42:03,280 Speaker 2: random computer scientist. They studied medicine for a long time 777 00:42:03,400 --> 00:42:05,480 Speaker 2: and they've treated a lot of patients and they know 778 00:42:05,520 --> 00:42:08,760 Speaker 2: what they're doing, and so it takes a long time, 779 00:42:09,520 --> 00:42:13,400 Speaker 2: but she eventually starts this company, and more recently she 780 00:42:13,520 --> 00:42:16,480 Speaker 2: decided to go after sepsis, which is this really common 781 00:42:16,480 --> 00:42:21,160 Speaker 2: complication in hospitalized patients. It's basically a terrible 782 00:42:21,320 --> 00:42:25,399 Speaker 2: reaction, your body's response to infection, and you can 783 00:42:25,480 --> 00:42:28,759 Speaker 2: die from it. Lots of people die from it. It's complicated, 784 00:42:28,800 --> 00:42:31,720 Speaker 2: it's somewhat hard to diagnose.
If you can diagnose it sooner, 785 00:42:33,040 --> 00:42:36,560 Speaker 2: the patient has a much better chance of surviving, right, So, 786 00:42:37,560 --> 00:42:41,000 Speaker 2: very high stakes. And fundamentally, you know, if you think 787 00:42:41,040 --> 00:42:44,600 Speaker 2: about what AI is today, people generally mean machine learning 788 00:42:44,640 --> 00:42:47,000 Speaker 2: when they say that, right, as you know, and what 789 00:42:47,160 --> 00:42:50,799 Speaker 2: machine learning is really good at doing is taking a 790 00:42:50,840 --> 00:42:55,080 Speaker 2: lot of data and matching it to patterns, right, saying, Oh, 791 00:42:55,120 --> 00:42:57,960 Speaker 2: when you have all of this set of data like this, 792 00:42:58,480 --> 00:43:01,520 Speaker 2: you tend to get this kind of outcome, which really 793 00:43:01,719 --> 00:43:04,360 Speaker 2: is what a medical diagnosis is, right, Like, that's what 794 00:43:04,480 --> 00:43:06,800 Speaker 2: a doctor is doing when they look at a patient 795 00:43:07,040 --> 00:43:12,320 Speaker 2: who has some set of symptoms, age, everything, and they say, oh, 796 00:43:12,440 --> 00:43:15,760 Speaker 2: this person might have sepsis. Let's do a test to see. 797 00:43:16,200 --> 00:43:19,720 Speaker 2: And so she built this system and it basically works. 798 00:43:19,760 --> 00:43:23,439 Speaker 2: They did some trials. But a really interesting thing she said, 799 00:43:23,480 --> 00:43:25,439 Speaker 2: and it goes back, Jonathan, to something you were talking about 800 00:43:25,480 --> 00:43:28,319 Speaker 2: earlier in the conversation, is, like, she realized getting the 801 00:43:28,480 --> 00:43:31,120 Speaker 2: AI to work, you know, it's not one hundred percent, 802 00:43:31,120 --> 00:43:35,120 Speaker 2: but to usefully flag that a patient might have sepsis, 803 00:43:35,280 --> 00:43:38,160 Speaker 2: which is essentially what it does, is like maybe half the problem, 804 00:43:38,200 --> 00:43:41,480 Speaker 2: maybe not even. What's really hard is getting super busy 805 00:43:41,520 --> 00:43:44,120 Speaker 2: doctors who are getting a million alerts all the time 806 00:43:44,360 --> 00:43:46,680 Speaker 2: to believe that this alert is worth paying attention to. 807 00:43:47,200 --> 00:43:49,319 Speaker 2: And like you were talking about UI, she was like, 808 00:43:49,360 --> 00:43:51,480 Speaker 2: it's totally a UI problem. Like the math was the 809 00:43:51,520 --> 00:43:54,000 Speaker 2: easy part. Like you know, getting it so that instead 810 00:43:54,000 --> 00:43:57,600 Speaker 2: of doctors having to spend one minute when this alert 811 00:43:57,600 --> 00:44:00,400 Speaker 2: comes up, they can spend three seconds. Like that was 812 00:44:00,400 --> 00:44:03,600 Speaker 2: actually a huge breakthrough, like, more than the AI model. 813 00:44:03,719 --> 00:44:06,120 Speaker 2: So like that's an example of an episode where there's 814 00:44:06,160 --> 00:44:09,520 Speaker 2: like a cool AI piece, big stakes, but also, like, this 815 00:44:09,719 --> 00:44:13,880 Speaker 2: interesting human UI, kind of messy humanity piece. 816 00:44:14,239 --> 00:44:18,560 Speaker 1: Yeah. I love talking with folks who really tackle 817 00:44:18,719 --> 00:44:22,600 Speaker 1: those sorts of challenges.
I remember chatting with some roboticists 818 00:44:22,600 --> 00:44:27,839 Speaker 1: who were focused not on the robotics side necessarily, not 819 00:44:27,880 --> 00:44:31,840 Speaker 1: on how the robot actually functioned, but rather how to 820 00:44:31,920 --> 00:44:35,160 Speaker 1: design the robots so that they could interact within a 821 00:44:35,239 --> 00:44:38,560 Speaker 1: human environment in a way that did not disrupt that 822 00:44:38,719 --> 00:44:41,840 Speaker 1: environment at all. And it turns out that's a really 823 00:44:42,000 --> 00:44:46,120 Speaker 1: tough challenge, right, like creating a robot that can navigate 824 00:44:46,160 --> 00:44:49,640 Speaker 1: through a human environment and still do useful things. So you 825 00:44:49,719 --> 00:44:53,160 Speaker 1: have to design the robot so it can go through 826 00:44:53,160 --> 00:44:56,080 Speaker 1: an environment that we have designed to make sense to us, 827 00:44:56,360 --> 00:44:59,600 Speaker 1: which doesn't necessarily make sense to a robot, but then 828 00:44:59,640 --> 00:45:01,759 Speaker 1: to also do it in a way where people 829 00:45:01,800 --> 00:45:05,160 Speaker 1: aren't just stopping everything they're doing to watch the robot 830 00:45:05,480 --> 00:45:08,360 Speaker 1: bump into a wall fourteen times before it finds the doorway. 831 00:45:08,600 --> 00:45:12,919 Speaker 1: So yeah, I think that those conversations can be really 832 00:45:12,960 --> 00:45:16,359 Speaker 1: fascinating because it does open up your eyes to other 833 00:45:16,560 --> 00:45:22,239 Speaker 1: issues within technology that don't necessarily relate directly to how 834 00:45:22,280 --> 00:45:25,839 Speaker 1: the tech functions, but rather how we interact with it, 835 00:45:26,120 --> 00:45:29,520 Speaker 1: what happens when you have the intersection of human experience 836 00:45:29,920 --> 00:45:34,160 Speaker 1: and technology. Those are really really great. We need to 837 00:45:34,200 --> 00:45:37,000 Speaker 1: take one more break to thank our sponsors, but we'll 838 00:45:37,000 --> 00:45:40,920 Speaker 1: be back with more conversation about communicating technology to the 839 00:45:40,960 --> 00:45:53,600 Speaker 1: general public. So another episode, I just wanted to call out. 840 00:45:53,640 --> 00:45:55,319 Speaker 1: We don't have to talk about it, really, but I 841 00:45:55,360 --> 00:45:57,759 Speaker 1: wanted to call it out because you spoke with someone I 842 00:45:57,800 --> 00:46:00,600 Speaker 1: had spoken with as well on a different show: Kastner, 843 00:46:00,920 --> 00:46:03,880 Speaker 1: the founder of Pano AI, which is a company that 844 00:46:04,080 --> 00:46:09,040 Speaker 1: uses cameras that are co-located, typically on cellular towers, 845 00:46:09,280 --> 00:46:13,680 Speaker 1: to monitor for forest fires in remote places, and then 846 00:46:13,880 --> 00:46:16,880 Speaker 1: it's using AI to look for signs of forest fires, 847 00:46:16,880 --> 00:46:20,239 Speaker 1: and it flags anything that it suspects is a 848 00:46:20,239 --> 00:46:23,880 Speaker 1: forest fire.
A human reviews the footage, so it's not 849 00:47:24,520 --> 00:47:28,560 Speaker 1: just relying upon AI, and if the human determines, oh 850 00:47:28,560 --> 00:47:30,759 Speaker 1: my gosh, yes, this does look like the beginnings of 851 00:47:30,800 --> 00:47:33,480 Speaker 1: a forest fire, they can then send an alert to 852 00:47:34,120 --> 00:47:37,239 Speaker 1: the authorities that would be responsible for responding to that 853 00:47:37,400 --> 00:47:41,839 Speaker 1: and potentially cut off disasters before they could happen. When 854 00:47:41,880 --> 00:47:44,240 Speaker 1: I spoke with her, it was at a time when 855 00:47:44,280 --> 00:47:48,280 Speaker 1: the infamous Canadian forest fires were really 856 00:47:48,440 --> 00:47:53,160 Speaker 1: ravaging Canada, and so it was very clear that this 857 00:47:53,320 --> 00:47:58,279 Speaker 1: sort of application of artificial intelligence had, like, 858 00:47:58,680 --> 00:48:03,080 Speaker 1: a potentially really beneficial implementation where it could save property and 859 00:48:03,160 --> 00:48:06,879 Speaker 1: people, and all sorts of benefits beyond that. You think 860 00:48:06,880 --> 00:48:09,719 Speaker 1: about even just cutting back the amount of air 861 00:48:09,760 --> 00:48:12,960 Speaker 1: pollution that affected all of the Northeast. You know, all 862 00:48:12,960 --> 00:48:16,320 Speaker 1: those folks who had to breathe smoky air for months 863 00:48:16,400 --> 00:48:20,440 Speaker 1: because of this. Like, again, I always talk 864 00:48:20,480 --> 00:48:22,439 Speaker 1: about the ripple effect. You always want to look at 865 00:48:22,480 --> 00:48:26,360 Speaker 1: how this is rippling outward because you start to realize, oh, 866 00:48:26,800 --> 00:48:31,240 Speaker 1: this has even greater benefit than just the ground 867 00:48:31,320 --> 00:48:34,239 Speaker 1: zero point, right, It has all these other things that 868 00:48:34,320 --> 00:48:36,799 Speaker 1: will end up benefiting people, most of which you won't 869 00:48:36,800 --> 00:48:40,480 Speaker 1: even realize because you have prevented the bad thing, so 870 00:48:40,520 --> 00:48:43,239 Speaker 1: you don't experience the bad thing. And so I just 871 00:48:43,280 --> 00:48:45,040 Speaker 1: wanted to call that out for listeners who might be 872 00:48:45,040 --> 00:48:47,719 Speaker 1: looking to see where to start off, because you've got 873 00:48:47,800 --> 00:48:48,800 Speaker 1: quite a few episodes. 874 00:48:48,920 --> 00:48:51,080 Speaker 2: She's really interesting and you know, one of the things 875 00:48:51,080 --> 00:48:54,480 Speaker 2: that was interesting to me about Sonia, the person who 876 00:48:54,480 --> 00:48:57,799 Speaker 2: started this company, was she has this big idea that 877 00:48:58,000 --> 00:49:01,399 Speaker 2: actually goes beyond wildfires. That's what they're doing now, 878 00:49:01,440 --> 00:49:05,200 Speaker 2: that's their business now. But her big dream is 879 00:49:05,239 --> 00:49:10,080 Speaker 2: about data and adapting to climate change, basically. Right, there 880 00:49:10,080 --> 00:49:13,200 Speaker 2: are more wildfires because of climate change.
But she's like, look, 881 00:49:13,200 --> 00:49:15,480 Speaker 2: we're going to be spending trillions of dollars over the 882 00:49:15,520 --> 00:49:18,920 Speaker 2: next decades to mitigate the effects of climate change, to, 883 00:49:19,040 --> 00:49:21,440 Speaker 2: you know, deal with sea level rise and flooding. And like, 884 00:49:21,760 --> 00:49:24,080 Speaker 2: you know, flood maps are one hundred years old, 885 00:49:24,200 --> 00:49:27,840 Speaker 2: and so if we can, in different domains, bring data 886 00:49:27,880 --> 00:49:30,840 Speaker 2: to bear on, like, where should we prioritize the money, 887 00:49:30,840 --> 00:49:32,920 Speaker 2: where are there going to be more floods, if we 888 00:49:32,960 --> 00:49:36,719 Speaker 2: can bring technology to bear. And in her case, what 889 00:49:36,760 --> 00:49:38,920 Speaker 2: that really means is data. Right, Like the sort of 890 00:49:38,960 --> 00:49:41,720 Speaker 2: substrate of AI, the thing AI needs to be clever, 891 00:49:41,960 --> 00:49:43,920 Speaker 2: is a lot of data. If we can bring data 892 00:49:43,960 --> 00:49:47,400 Speaker 2: to that, it'll just work better. For every million dollars 893 00:49:47,400 --> 00:49:50,000 Speaker 2: we spend, for every billion dollars we spend, if we 894 00:49:50,080 --> 00:49:53,160 Speaker 2: can be smarter about it, we will get better results. 895 00:49:53,480 --> 00:49:57,399 Speaker 1: Right. It's like the difference between being proactive and reactive, right, 896 00:49:57,680 --> 00:50:00,640 Speaker 1: being able to plan for something and 897 00:50:01,480 --> 00:50:04,880 Speaker 1: minimize its impact, as opposed to, Oh, now we have 898 00:50:04,920 --> 00:50:07,879 Speaker 1: to clean up because this catastrophic event has happened, and 899 00:50:07,920 --> 00:50:10,759 Speaker 1: how do we deal with that? And I think when 900 00:50:10,760 --> 00:50:13,440 Speaker 1: we look back at some of those catastrophic events that 901 00:50:13,480 --> 00:50:16,680 Speaker 1: have happened in our lifetimes, you can really see the 902 00:50:16,760 --> 00:50:22,000 Speaker 1: benefit of mitigation versus reaction and cleaning up, you know, 903 00:50:22,080 --> 00:50:26,560 Speaker 1: the ability to save lives and prevent damage. It's tremendous. 904 00:50:26,760 --> 00:50:32,200 Speaker 1: So certainly there are plenty of artificial intelligence applications that 905 00:50:32,239 --> 00:50:36,160 Speaker 1: would be incredibly helpful when put to the proper use. 906 00:50:36,680 --> 00:50:39,680 Speaker 1: So again, I think if there's any lesson to take 907 00:50:39,719 --> 00:50:43,040 Speaker 1: home from this conversation, it is: use that critical thinking, try 908 00:50:43,200 --> 00:50:45,279 Speaker 1: not to be reductive. I know that I can get 909 00:50:45,520 --> 00:50:49,160 Speaker 1: really cynical about artificial intelligence, but again it's mostly because 910 00:50:49,200 --> 00:50:53,160 Speaker 1: of the marketing language around it rather than the technology itself. 911 00:50:53,760 --> 00:50:53,960 Speaker 2: Yeah. 912 00:50:54,000 --> 00:50:56,200 Speaker 1: I think it's also because, like, I see a lot 913 00:50:56,200 --> 00:51:00,279 Speaker 1: of similarities between the AI evangelists and the 914 00:51:00,520 --> 00:51:04,759 Speaker 1: NFT evangelists that I saw. And, well, we all know how that went. 915 00:51:05,480 --> 00:51:10,200 Speaker 2: Yeah, you know, I think AI has more legs than NFTs.
916 00:50:10,360 --> 00:50:12,160 Speaker 2: I feel like I'm not going out on a limb 917 00:50:12,160 --> 00:50:12,680 Speaker 2: to say that. 918 00:50:12,719 --> 00:50:16,680 Speaker 1: I certainly think it has more potential beneficial uses than 919 00:50:16,800 --> 00:50:20,399 Speaker 1: n FTS. I think NFTs probably have some benefits too, 920 00:50:20,480 --> 00:50:23,040 Speaker 1: but the problem is that no one was focusing on 921 00:50:23,080 --> 00:50:24,960 Speaker 1: those when they were going crazy about them. 922 00:50:25,480 --> 00:50:27,600 Speaker 2: I mean, one of the interesting things to me, when 923 00:50:28,080 --> 00:50:30,320 Speaker 2: you know, when you think about what should we worry 924 00:50:30,360 --> 00:50:33,399 Speaker 2: about with AI, there's sort of these two there's sort 925 00:50:33,400 --> 00:50:35,600 Speaker 2: of like a Barbelle where like the thing you hear 926 00:50:35,600 --> 00:50:38,120 Speaker 2: about most is it's going to take all our jobs, 927 00:50:38,160 --> 00:50:39,960 Speaker 2: where a robot's going to kill us, all right, that's 928 00:50:40,000 --> 00:50:43,160 Speaker 2: the like amazing there's an interesting other end of the 929 00:50:43,160 --> 00:50:45,200 Speaker 2: spectrum that you know, some of the people have talked 930 00:50:45,239 --> 00:50:46,719 Speaker 2: to on the show have talked about which is the 931 00:50:46,800 --> 00:50:50,440 Speaker 2: risk of people over relying on AI, right, People worry that, 932 00:50:50,600 --> 00:50:54,640 Speaker 2: you know, critical decisions are going to be made based 933 00:50:54,760 --> 00:50:57,880 Speaker 2: on AI outputs that are not that good, that are 934 00:50:57,880 --> 00:51:00,840 Speaker 2: not that robust, that are not that reliable. And you know, 935 00:51:00,880 --> 00:51:02,640 Speaker 2: one of the people I talk to like runs a 936 00:51:02,680 --> 00:51:07,880 Speaker 2: company basically like to stress test AI to catch eyes mistakes, 937 00:51:07,960 --> 00:51:11,000 Speaker 2: and he talked about just like really dumb mistakes that 938 00:51:11,040 --> 00:51:12,799 Speaker 2: he sees all the time, you know. Where he gave 939 00:51:12,840 --> 00:51:15,399 Speaker 2: the example of like on a life insurance application if 940 00:51:15,400 --> 00:51:18,480 Speaker 2: someone puts their year of birth instead of their age, right, 941 00:51:18,520 --> 00:51:21,640 Speaker 2: so you put whatever, nineteen eighty four instead of forty, 942 00:51:22,160 --> 00:51:24,879 Speaker 2: and I will actually think the person is one nine 943 00:51:24,960 --> 00:51:27,600 Speaker 2: hundred and eighty four years old and will want to 944 00:51:27,680 --> 00:51:29,919 Speaker 2: charge them a lot for their life insurance because boy, 945 00:51:29,920 --> 00:51:32,239 Speaker 2: if you're that old, you're gonna have a lot of 946 00:51:32,239 --> 00:51:34,080 Speaker 2: health risks. And I said to him, like, is it 947 00:51:34,239 --> 00:51:36,839 Speaker 2: really that dumb, Like is it really or you being 948 00:51:36,920 --> 00:51:39,520 Speaker 2: you know, is this hyperbole? He said, no, it's really 949 00:51:39,600 --> 00:51:43,080 Speaker 2: that dumb. And so that is an interesting side of 950 00:51:43,080 --> 00:51:45,439 Speaker 2: it to me, right, Like, oh, there's also a risk, 951 00:51:45,560 --> 00:51:47,640 Speaker 2: there's a risk from AI being too smart. 
There's also 952 00:51:47,680 --> 00:51:50,479 Speaker 2: a risk from a being not smart enough if people 953 00:51:50,520 --> 00:51:51,760 Speaker 2: are over reliant. 954 00:51:51,400 --> 00:51:53,920 Speaker 1: On it, which you would hope that people wouldn't fall 955 00:51:53,960 --> 00:51:56,319 Speaker 1: into that trap. But at the same time, you just 956 00:51:56,400 --> 00:51:59,520 Speaker 1: look back over the history of technology and how would 957 00:51:59,520 --> 00:52:04,960 Speaker 1: we've had technology that helps remove certain tasks that we 958 00:52:05,080 --> 00:52:08,440 Speaker 1: just let them go. So, for example, I can probably 959 00:52:08,560 --> 00:52:13,160 Speaker 1: rattle off maybe half a dozen phone numbers of people 960 00:52:13,200 --> 00:52:15,840 Speaker 1: that I know would love, but all the rest are 961 00:52:15,880 --> 00:52:19,440 Speaker 1: just buried in my phone contacts because I don't need 962 00:52:19,440 --> 00:52:21,799 Speaker 1: to have them stored in my own brain. I have 963 00:52:21,960 --> 00:52:23,680 Speaker 1: offloaded that to technology. 964 00:52:24,040 --> 00:52:26,600 Speaker 2: Half does it is a lot? Are those from twenty 965 00:52:26,640 --> 00:52:28,680 Speaker 2: years ago? I don't. I haven't member a city, but 966 00:52:28,840 --> 00:52:30,399 Speaker 2: he's phone number in a long time. 967 00:52:30,480 --> 00:52:32,560 Speaker 1: My parents haven't changed their phone numbers since I was 968 00:52:32,600 --> 00:52:36,440 Speaker 1: a child, so that one I remember, honestly, if I'm 969 00:52:36,440 --> 00:52:39,319 Speaker 1: being really honest, it's probably more like three, but I. 970 00:52:39,400 --> 00:52:41,040 Speaker 2: Might be down to two at this point when my 971 00:52:41,120 --> 00:52:43,319 Speaker 2: mom got rid of her landline a few years ago. 972 00:52:43,920 --> 00:52:45,960 Speaker 2: I definitely don't know my mom's cell number. 973 00:52:46,320 --> 00:52:48,799 Speaker 1: Yeah, I know my parents' landline number because they still 974 00:52:48,840 --> 00:52:51,879 Speaker 1: have it. I couldn't tell you they're cell numbers either, 975 00:52:52,239 --> 00:52:54,799 Speaker 1: but that's kind of a simple example, and you know, 976 00:52:54,960 --> 00:52:57,239 Speaker 1: obviously it's going to be a lot more complicated when 977 00:52:57,239 --> 00:53:01,320 Speaker 1: you're talking about offloading, you know, potentially decisions to AI. 978 00:53:01,840 --> 00:53:03,960 Speaker 1: But I think the argument I can make is that 979 00:53:04,000 --> 00:53:08,319 Speaker 1: there's precedent, So I think that concern is well warranted, right. 980 00:53:08,400 --> 00:53:12,360 Speaker 1: I think it also gets back to that concern about 981 00:53:12,680 --> 00:53:15,640 Speaker 1: autonomous cars. Everyone worries that the autonomous car they're getting 982 00:53:15,640 --> 00:53:18,160 Speaker 1: into is the one that's being driven by a crazy robot. 983 00:53:19,160 --> 00:53:21,359 Speaker 1: So it's odd also to think of a world where 984 00:53:21,360 --> 00:53:23,879 Speaker 1: people might be nervous to get into an autonomous car, 985 00:53:23,920 --> 00:53:27,040 Speaker 1: but they might be willing to have an AI complete 986 00:53:27,080 --> 00:53:31,200 Speaker 1: their taxes for them. For example. It's a weird world 987 00:53:31,239 --> 00:53:31,680 Speaker 1: we live in. 988 00:53:32,080 --> 00:53:34,279 Speaker 2: Hey, I do in taxes? Is interesting? 
Do 989 00:53:34,320 --> 00:53:36,960 Speaker 2: you have an AI accountant? I mean, I like my accountant, but... 990 00:53:37,920 --> 00:53:41,759 Speaker 1: My accountant's pretty good and I'm ninety seven percent sure 991 00:53:41,800 --> 00:53:46,400 Speaker 1: she's human, so I think I'm in the clear on 992 00:53:46,600 --> 00:53:49,839 Speaker 1: that one. But I could easily see that being a thing, 993 00:53:49,960 --> 00:53:52,240 Speaker 1: especially for something like the United States, where the tax 994 00:53:52,280 --> 00:53:55,680 Speaker 1: code gets complicated enough that people like you and I 995 00:53:55,760 --> 00:53:58,319 Speaker 1: feel the need to go out and reach out 996 00:53:58,360 --> 00:54:01,200 Speaker 1: to a professional, because handling it yourself is daunting. 997 00:54:01,400 --> 00:54:04,960 Speaker 2: You could imagine, like, a happy story is, like, the 998 00:54:05,040 --> 00:54:07,480 Speaker 2: AI does a lot of the work. An accountant can 999 00:54:07,480 --> 00:54:09,960 Speaker 2: have more clients and charge each of them less, and, 1000 00:54:10,040 --> 00:54:12,239 Speaker 2: like, go over the work of the AI. Right, and 1001 00:54:12,280 --> 00:54:14,480 Speaker 2: yes, this is sort of mundane, right. The 1002 00:54:14,520 --> 00:54:17,080 Speaker 2: reason people don't talk about outcomes like that is because 1003 00:54:17,160 --> 00:54:20,640 Speaker 2: it's boring. But there are a lot of boring incremental gains. 1004 00:54:20,680 --> 00:54:22,960 Speaker 2: If I could pay my accountant half as much and 1005 00:54:22,960 --> 00:54:25,680 Speaker 2: my accountant could have twice as many clients and do work 1006 00:54:25,719 --> 00:54:27,799 Speaker 2: that's maybe a little better, or at least as good, 1007 00:54:28,320 --> 00:54:30,600 Speaker 2: everybody wins. I mean, I suppose at the margin there's 1008 00:54:30,760 --> 00:54:33,640 Speaker 2: a need for fewer accountants in that world, but like that's 1009 00:54:33,680 --> 00:54:35,719 Speaker 2: okay with me, right, Like those people who would have 1010 00:54:35,719 --> 00:54:38,040 Speaker 2: been accountants can go and like, you know, work on 1011 00:54:38,120 --> 00:54:39,400 Speaker 2: AI healthcare or something. 1012 00:54:40,080 --> 00:54:42,719 Speaker 1: Yeah. I like the people who argue that instead of 1013 00:54:42,719 --> 00:54:46,359 Speaker 1: calling it artificial intelligence, maybe call it augmented intelligence, where 1014 00:54:46,360 --> 00:54:50,480 Speaker 1: the goal is to augment our abilities to get things done. 1015 00:54:51,120 --> 00:54:53,200 Speaker 1: And I think it would be a lot easier to 1016 00:54:53,239 --> 00:54:56,400 Speaker 1: do that if we heard fewer stories like a CEO 1017 00:54:56,560 --> 00:55:00,440 Speaker 1: suggesting that eight thousand unfilled jobs will ultimately be filled 1018 00:55:00,440 --> 00:55:03,360 Speaker 1: by AI and not humans. If we heard fewer stories 1019 00:55:03,440 --> 00:55:07,640 Speaker 1: like that and more stories about, no, we implemented 1020 00:55:07,680 --> 00:55:13,399 Speaker 1: this so that people could respond to customer concerns at 1021 00:55:13,440 --> 00:55:17,320 Speaker 1: a rate that's five times faster than before, which means 1022 00:55:17,680 --> 00:55:21,160 Speaker 1: they can resolve your issue and you're spending less 1023 00:55:21,160 --> 00:55:24,320 Speaker 1: time frustrated and sitting on hold.
Like, I think that's 1024 00:55:25,120 --> 00:55:27,399 Speaker 1: the direction that everyone wants it to go, and they're 1025 00:55:27,400 --> 00:55:29,799 Speaker 1: just worried it's going to go in the direction of, Hey, 1026 00:55:30,680 --> 00:55:33,759 Speaker 1: those coworkers you used to like, they're all replaced by 1027 00:55:34,320 --> 00:55:37,879 Speaker 1: algorithms now. Like, that's where we really need to go. 1028 00:55:38,000 --> 00:55:43,080 Speaker 2: Yes, I mean, technological unemployment is complicated, right. Like, people 1029 00:55:43,120 --> 00:55:46,240 Speaker 2: have certainly been afraid of it for hundreds of years 1030 00:55:46,280 --> 00:55:49,120 Speaker 2: now. Today, let's talk about the 1031 00:55:49,239 --> 00:55:52,640 Speaker 2: Dutch weavers, yeah, right. I mean, you know, unemployment 1032 00:55:52,680 --> 00:55:55,760 Speaker 2: is below four percent today, wages are going up. People 1033 00:55:56,000 --> 00:55:59,000 Speaker 2: get angry when you point that out. But it's true, 1034 00:55:59,600 --> 00:56:03,280 Speaker 2: and it's possible that AI will be bad for workers, 1035 00:56:03,840 --> 00:56:06,880 Speaker 2: but we don't know yet. Like, that's one where I 1036 00:56:07,000 --> 00:56:09,560 Speaker 2: just don't know, and I don't think anybody knows the 1037 00:56:09,600 --> 00:56:10,319 Speaker 2: answer to that one. 1038 00:56:10,400 --> 00:56:15,520 Speaker 1: Yeah, yeah. And therein lies the scariness. Well, Jacob, 1039 00:56:15,719 --> 00:56:18,640 Speaker 1: thank you so much for joining the show. This has 1040 00:56:18,680 --> 00:56:22,040 Speaker 1: been a really fun conversation. I've really enjoyed it. I'm 1041 00:56:22,040 --> 00:56:25,520 Speaker 1: sure my listeners have too. And just to remind everybody, 1042 00:56:25,680 --> 00:56:28,600 Speaker 1: your podcast is What's Your Problem. You have these kinds 1043 00:56:28,600 --> 00:56:32,319 Speaker 1: of conversations with decision makers and the people who are 1044 00:56:32,360 --> 00:56:35,919 Speaker 1: actually creating the systems we've been talking about, and who 1045 00:56:35,960 --> 00:56:40,839 Speaker 1: are actively tackling these questions and determining how to address them. 1046 00:56:41,360 --> 00:56:45,359 Speaker 1: So I highly recommend to my listeners you check out 1047 00:56:45,400 --> 00:56:49,080 Speaker 1: What's Your Problem. You've got so many different episodes, I'm 1048 00:56:49,120 --> 00:56:51,600 Speaker 1: sure like there's going to be one on there that's 1049 00:56:51,640 --> 00:56:53,799 Speaker 1: going to speak to every single person who listens to 1050 00:56:53,840 --> 00:56:54,279 Speaker 1: my show. 1051 00:56:54,800 --> 00:56:56,920 Speaker 2: Thank you so much. That's such a kind, generous thing 1052 00:56:57,000 --> 00:56:59,279 Speaker 2: to say, and thanks for having me. It was great. 1053 00:57:00,640 --> 00:57:02,560 Speaker 1: I hope you all enjoyed this conversation I had with 1054 00:57:02,640 --> 00:57:04,680 Speaker 1: Jacob Goldstein. It was a pleasure having him on the show. 1055 00:57:05,080 --> 00:57:07,160 Speaker 1: I know this was a long one. We literally could 1056 00:57:07,200 --> 00:57:11,640 Speaker 1: have gone another hour easy, so I had to use 1057 00:57:11,680 --> 00:57:14,560 Speaker 1: some restraint there. I hope all of you out there 1058 00:57:14,640 --> 00:57:16,919 Speaker 1: are well.
I'm looking forward to having a lot more 1059 00:57:17,000 --> 00:57:20,200 Speaker 1: interviews in the future. In fact, I've got a couple 1060 00:57:20,240 --> 00:57:22,760 Speaker 1: that I'm working on right now to kind of line up, 1061 00:57:23,040 --> 00:57:26,080 Speaker 1: So that's really exciting for me. I love having another 1062 00:57:26,120 --> 00:57:29,240 Speaker 1: point of view come into the conversation. I hope you 1063 00:57:29,280 --> 00:57:32,680 Speaker 1: do too, and I will talk to you again really soon. 1064 00:57:38,800 --> 00:57:43,480 Speaker 1: Tech Stuff is an iHeartRadio production. For more podcasts from iHeartRadio, 1065 00:57:43,800 --> 00:57:47,520 Speaker 1: visit the iHeartRadio app, Apple Podcasts, or wherever you listen 1066 00:57:47,560 --> 00:57:48,600 Speaker 1: to your favorite shows.