Jonathan: Welcome to TechStuff, a production from iHeartRadio.

Jonathan: Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeartRadio, and how the tech are you? I've got a treat for you, folks. Today I had the opportunity to speak with Beena Ammanath, the executive director of the Deloitte AI Institute. Beena is an accomplished technologist and expert in artificial intelligence. She's a coder, she's, you know, an engineer, and she's a great communicator too. She has appeared on numerous shows and panels talking about AI. She's also the author of a book called Trustworthy AI. And this was a fantastic opportunity to speak with someone who actually has a deep amount of experience in the field and really talk about some of the big concepts in AI and get a little more perspective on them. And I have to admit, Beena's responses really opened up the blinders that I have on. And of course, I'm like a lot of people, right? I go through life thinking I have a pretty good handle on this. I think I know what's going on, and then I meet someone else who has had a, you know, a different experience, and especially a different depth of experience in a particular field, and realize, oh gosh, I hadn't even considered some of these specific scenarios, for example. So I really very much enjoyed my conversation with Beena. She's also incredibly good at putting things in a way that's easily understandable. A lot of technologists, when you start to talk with them, get really heavy with jargon, or with concepts that make sense if you've had experience working in that area, but if you haven't, your eyes kind of glaze over and you just trust that what they say makes sense. That was not the issue with Beena. She is really good at talking about this stuff on a level that the average person can easily understand, and yet also really stressing how AI is a very important component today.
I mean, we're seeing it rolled out in all sorts of different ways across all different sectors. We mostly talked about business in this conversation, but clearly AI is everywhere, whether we're talking about facial recognition technology that might be built directly into the camera on your phone, or maybe we're talking about a personal digital assistant, you know, something like the Amazon one. I won't say her name, because some of you have her and she gets real, like, she perks up when you say her name. Those sorts of things obviously have components of AI built into them, but we were really looking at things like processes in business, where you might need to use automation and artificial intelligence to make complicated processes more efficient and less human-intensive. And so, yeah, this was a great conversation, and I really feel like I learned a lot. I hope that you all enjoy it. And again, she does have a new book out. It's called Trustworthy AI, and there's a copy on the way to me, so I'm very eager to read it myself, because just talking with Beena, I felt like I was just scratching the surface. But you're gonna hear all that, so let's get to that interview.

Jonathan: Beena, I want to welcome you to the show. I am so pleased to have an expert on AI, trustworthy AI no less. Welcome to TechStuff.

Beena: Jonathan, thank you so much for having me on your show. I really enjoy your episodes, so I'm looking forward to having this conversation with you.

Jonathan: I am as well. And one of the things that I like to do is kind of set some foundation for any kind of conversation around AI, because in my experience, and I'm sure you've experienced something similar chatting with people about AI, it seems like everyone has a different, sometimes very specific idea of what AI is. And I'm curious, how do you describe AI to people?

Beena: Yeah, that's a great question to start with. So AI is a form of intelligence that uses machines to do things that traditionally required human intelligence.
So it is artificial intelligence, which is intelligence created artificially by machines.

Jonathan: Now, I like that description, because it covers such a wide spectrum, everything from sort of the science fiction approach we've all seen, about machines that seem to think like humans to a point where they usually become the threat. I mean, that's typically the way we look at it, which is, I'm sure, going to come into play when we talk about trustworthiness, because I'm sure a lot of people aren't aware how AI can sometimes be a danger, but not necessarily, like, Skynet-from-Terminator type danger.

Beena: Let me elaborate on that description, then, a little bit more, you know, that next level down on the AI definition, right? The way I think about it, there are three types of AI. One is artificial narrow intelligence, which can do a very specific, narrow task that a human can do, like sort a bunch of photographs, right? That's a very narrow, specific task. So that's artificial narrow intelligence. And then there is a form of artificial general intelligence, which is a form of AI that can do any task that human beings can do, right? So it is pretty much everything that a human being can do. And then I think of a third category, which is artificial super intelligence, which is a form of intelligence that is smarter than all human beings combined and can do more things than human intelligence could do. So when we talk about AI in the business world, or in reality, where we are with AI is very much in that artificial narrow intelligence space. But when we hear a lot about AI in the media, the hype and the fear, it's talking really about that super intelligence phase, which is a form of AI that is smarter than all human beings combined and has more capabilities than human intelligence. And I think there's a big gap between reality and where we are anticipating things to be, right?
And, you know, part of the reason is that artificial super intelligence is a lot of our human imagination, which is where AI was when I was studying it years ago, right? So I do think there is value in imagination. I do think there is value in thinking of worst-case scenarios so that you can address them. But the reality is, we still today don't have the tools or the capabilities to build out artificial general intelligence or super intelligence. We do not have that capability today.

Jonathan: I see parallels in this as well. I have a very similar description of autonomous cars, for example. People talk about autonomous cars like we've reached level five autonomy, when really, I would argue, we're still around level two, creeping into level three, but we are not close to level four or five. And this is why I like having this kind of conversation right up front, so that people kind of set their expectations, because artificial intelligence can already do incredible things in these very narrow uses, and I'm blown away by it whenever I learn about that. But I do also see the allure, and sometimes the trap, of extrapolating that beyond the narrow cases and thinking, what happens when this goes beyond that? Which could very well happen, but we're not at that stage yet. But I think of things like image recognition. That to me is still an amazing thing to see developed. Ever since I started covering tech, the ability has grown so fast. I remember when, at least on the consumer side, you might see something that was detecting a face for a camera, not recognizing a face, but detecting the structure that makes a face. And now, you know, that looks like stone-age technology by comparison with what we're seeing today.

Beena: Yes, Jonathan. And I know you've been covering tech for a long time.
You know, you've certainly seen the early evolution, right? And look, when I studied computer science, I did assembly language programming, basically using zeros and ones, right, at that level. And, you know, the languages that I used, like Pascal and Fortran, those don't even exist today, right? So there's a whole evolution happening. And I do think that is a big component: to imagine the future so that we can at least, you know, take care of the risk and focus on all the good things that AI and technology can do, right? So imagination is a good thing, but not the fear part.

Jonathan: And also, I think what you said is a great message to anyone who's interested in really focusing on AI. The fact that you were working in assembly, such a low-level language, you get a real familiarity with what these machines can do and their potential that I think you almost lose when you start working in high-level programming languages. You get so focused on what the programming language lets you do. But if you've worked at that low level, you're like, hey, I know circuits and wires, okay? I am one step away from this machine.

Beena: Yeah, we have to realize, you know, just like when we talk about it today, we'll talk mostly about the software, but the hardware is also evolving, right? We certainly don't have racks of those massive mainframe systems that we had to program, right? I think there is evolution happening in every dimension, and it's part of the growth of AI, or any technology, if you think about it.

Jonathan: Mm-hmm. Well, I also want to know what you mean when you use the phrase trustworthy AI. So what is it that makes AI trustworthy? And what's the alternative? What is the untrustworthy side?

Beena: Yeah, that's a great question.
You know, as a technologist, I'm enamored by all the cool things that AI can do, because I just focus on all the value creation. But over the past few years, as AI started becoming real, I also realized that, with all the good things it can do, there are negative consequences to it, right? And so I put those negative consequences under the bucket of untrustworthiness. Ethics is a big component of it, right? Whether the AI is fair or biased, or transparent, explainable. But also things like: is it compliant with local regulations? Does it have controls in place? Does it have governance in place to continuously monitor for it going wrong? Because, you know, Jonathan, today AI is mostly machine learning, so it's learning and evolving. It's not that era when we developed code, put it out there, and the code stayed static and its behavior was very predictable. With AI, the outputs can change depending on the inputs you feed it, and it's impossible to train on all possible inputs. So trustworthy, for me, is really when you have thought about and addressed all the possible negative things that this AI solution can cause.

Jonathan: Well, I would love to dive into a little bit more of that, because one of the things that you said that really resonated with me was the idea of transparency. I have covered this in past episodes of TechStuff, the sort of black box problem of creating a system, for example, a machine learning system, where you have this machine that's training itself over and over and over. Maybe it's adversarial training, maybe you actually have two systems that are set against each other and you're training. And the issues that can arise if you have distanced yourself so far from what the machine is doing that you are unable to determine the process by which it arrives at its conclusions. And that to me is one of those pitfalls.
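To make Beena's governance point concrete, that a learning system needs continuous monitoring because its outputs shift with the inputs it sees in production, here is a minimal sketch. Every name, window size, and threshold below is hypothetical, chosen only for illustration, not any particular product's API.

```python
from collections import deque
from statistics import mean

class OutputMonitor:
    """Flag when recent model outputs drift away from a validation baseline."""

    def __init__(self, baseline_mean: float, window: int = 500, tolerance: float = 0.10):
        self.baseline_mean = baseline_mean  # average score observed during validation
        self.recent = deque(maxlen=window)  # rolling window of production outputs
        self.tolerance = tolerance          # drift allowed before raising an alert

    def record(self, score: float) -> None:
        """Log one production output and check the rolling window for drift."""
        self.recent.append(score)
        if len(self.recent) == self.recent.maxlen:
            drift = abs(mean(self.recent) - self.baseline_mean)
            if drift > self.tolerance:
                # A real deployment would route this to a named, accountable
                # reviewer; printing stands in for that alerting path here.
                print(f"ALERT: output drift {drift:.3f} exceeds tolerance {self.tolerance}")

# Usage: called for every prediction the deployed model makes.
monitor = OutputMonitor(baseline_mean=0.42)
monitor.record(0.97)
```

The design choice mirrors the interview's point in miniature: the check lives alongside the model for its whole life, not just at release time.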
Beena: Yes, but I would also challenge it a little bit, Jonathan, because the whole thesis of my book is that it depends on the use case. It is not one-size-fits-all. So depending on where, and to solve what problem, you are using that AI solution, it is for that organization, that team, to decide if transparency is crucial, right? If your AI solution is being used for patient care in a hospital system, then transparency is absolutely crucial, right? But if you are using the AI solution to predict when an X-ray machine might fail, and you're able to predict with accuracy that this machine is going to fail in the next forty-eight hours, so call a technician, transparency may not be as crucial, right? So I think transparency is crucial depending on the use case. And that's true for all the other dimensions as well, even fairness and bias, which we hear a lot about. So it really depends on the use case that you're using the AI for.

Jonathan: Hey there, Jonathan back at the home studio, just here to say we are going to take a quick break, but we'll be back with more with Beena Ammanath, the executive director of the Deloitte AI Institute.

Jonathan: Bias doesn't necessarily mean negative, depending upon the use case of the technology. In some cases you need to have a biased system, because it's specifically meant to be weighted to do one thing versus another, and without the bias it doesn't do that. But the way we typically hear about bias is when it is making a negative impact, when it's something like the facial recognition technologies. We've heard plenty about that. So it is interesting to me. And I'm curious, what are some of the uses of AI you're seeing in technology now that you find really exciting?
Beena: Yeah, you know, we're still very early on in this technology evolution, and there are still so many use cases to be solved, so many industries to take AI to, right? To your point about bias and its relevance, I completely agree with you that it depends on the use case, and it goes back to that first question we talked about, right? How AI is really emulating human intelligence, which means that it is going to carry over the biases of the humans that are building it, right? But as a business, or as an organization who's looking to use an AI solution, who's looking to develop an AI solution, they really have to, you know, bring together the stakeholders to discuss and decide how crucial fairness, or unbiasedness, is in this particular AI use case. An easy way out is: if it doesn't involve human data, then you probably don't have to worry about bias as a factor and address it. And if it does involve human data, then again, there is a weighting to it, right? If there is bias in an algorithm that is providing personalized marketing, you know, there is a weight to it. And if there is bias in an algorithm that is supporting law enforcement decisions, that's a higher weight, right? And it's really about rating it, you know, weighing it, and deciding which ones are the ones where bias is acceptable and you can still proceed and get value from the AI solution, and which are the ones where it is absolutely not acceptable and you need to stop and figure out an alternate way to solve that problem.
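One way to picture the weighing Beena describes is a simple demographic parity check, with the acceptable gap set per use case rather than one-size-fits-all. This is only a sketch with made-up data, names, and a made-up threshold, not a complete fairness audit.

```python
def demographic_parity_gap(predictions, groups):
    """Largest difference in favorable-outcome rate between any two groups."""
    counts = {}  # group -> (favorable predictions, total predictions)
    for pred, group in zip(predictions, groups):
        favorable, total = counts.get(group, (0, 0))
        counts[group] = (favorable + (pred == 1), total + 1)
    rates = [favorable / total for favorable, total in counts.values()]
    return max(rates) - min(rates)

# Hypothetical example: the same 0.5 gap might pass stakeholder review for a
# personalized-marketing model yet block one supporting law enforcement.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
MAX_ACCEPTABLE_GAP = 0.6  # chosen per use case by the stakeholders at the table
assert demographic_parity_gap(preds, groups) <= MAX_ACCEPTABLE_GAP
```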
Jonathan: It's fascinating, because to me this is starting to sound, and I agree with you, like the machines we build are in large part reflections upon ourselves, especially when we're talking about coding and software. I mean, obviously that's a creative process. I don't know that everybody views it that way, but I think of it as very similar to creating any kind of creative work. It's a reflection of your process and, you know, the things that are important to you, the things you've prioritized. And it makes me think of how, I'm getting a little in the weeds here, but we're in an era where we're more likely to address things like mental health, and the fact that we need to be mindful and we need to improve ourselves. And it's almost like taking that same approach but applying that sort of thinking to designing a system, so that we are being mindful to create the best system for whatever purpose it's intended to address.

Beena: Yeah, you've got it exactly right. The way I think about it is, how can we reduce the unintended consequences, right? We know there are going to be risks associated with it. How are we going to have a discussion prior to putting the solution out into the world, rather than then, you know, seeing all the negative impacts? Can we have a proactive discussion as part of your project planning meeting, or your design meeting, right, to proactively identify what are the ways this could go wrong, and fix it? Jonathan, you know, the easiest example that I can give is, we're living in this very interesting era where AI as a core technology is developing, and there's all this value that you're getting from it, and then there are all these negative things that can happen. So think about way back when, when the cars were first invented, right? We didn't even have proper roads. We didn't have seat belts, we didn't have speed limits, right? And being in that phase where there are cars running on the road, they're taking us from point A to point B faster, so we want to use them, but we don't have the seat belts put in place, we don't have the speed limits set in place, so you're going to see accidents. But we are humans. We're going to learn from it, and we're going to come up with those speed limits.
We're going to figure out what those guard rails are, and we are going to, you know, reach a point where we have those guard rails in place so that you can run with AI faster. It's just that this interim phase is when, you know, we have to figure it out in tandem, while it's running in the real world, causing accidents.

Jonathan: And in some cases those accidents can be things where you have it maybe in a test environment, and you think, oh, this isn't behaving the way I thought it was, but, you know, thank goodness it hasn't been deployed out in the real world, or within your company's processes. So you think, oh well, it didn't wipe out all of our revenue, because it's in a test environment. And in other cases, I see some companies, I'm not gonna name names, Beena, I'm not gonna put anyone on blast here, but I have seen some companies that have taken that kind of idea and applied it in specific deployments of technology where there can be some real-world negative consequences to end users. And that to me has always been a concern. I find it hits me wrong.

Beena: Yes. And that's the reality of how we've evolved in the technology space. It's a bunch of, you know, technologists coming together and building these cool new shiny technologies. Look, you know, as I said, I am a technologist in my DNA, my training, and it's very easy to just focus on all the good things it can do. But with AI, now that realization has hit, you need other, you know, skill sets at the table, whether it is social scientists, philosophers, legal and compliance, to help us figure out those seat belts and, you know, the speed limits, the lanes, you know, because technologists by themselves cannot do it.
So you'll see more of the discussions coming around ethics, which is resulting in new roles and new jobs that become core and part of your engineering process, right? So that scope of who is involved in designing and developing AI is definitely increasing. And the other big part, you know, and this has been a challenge since I started in tech: there's a lack of diversity in tech. It's a reality, right? But unfortunately, because AI is so closely tied to human intelligence, if you don't have enough diversity, you know, not only from a gender, race, and ethnicity perspective, but even a diversity of thought, right, the AI solution you build is not going to be as robust as it could be if you had a diverse team at the table, right? You've probably heard of that classic example of, you know, the robotic vacuums, right? How it was designed, how it was built out. And then in Eastern cultures it's normal to sleep on the floor, and it sucked up the hair of somebody who was sleeping there, because it was never trained on that. It didn't come up in the discussion when it was being designed, because nobody was there from that culture, right? So I think, you know, the realization that you need more diversity at the table, you need more controls in place, it's all coming to the forefront. I definitely see companies addressing it. But the DNA until now has been: oh, look at all the cool things this technology can do, let's go put it out, right? But I do think, you know, companies are getting mindful about it, and hopefully we'll reduce the number of unintended consequences.

Jonathan: Yeah. I see the same thing reflected in the open source community, where you have an open source approach to developing software, and because it's open, anyone interested and capable can contribute. Ideas get tested very quickly. New perspectives get incorporated very quickly. Things that are working stick around; things that don't work get improved.
And the way I've described it to other people is: if you have a closed-off garden that you're working on, you're only as good as the smart people who happen to work for you. And if you go with this other approach where you purposefully open it up, which is like the biggest version of let's try and get as much diversity of thought in here as possible, you don't have that limitation, because you've just said, well, now the world, I mean, it's not the whole world, but effectively the world, can contribute if they wish. And I agree, I think having that diversity is absolutely key to creating solutions that work for as many people, and as many potential uses of that technology, as possible. Being a white man in the United States, I am essentially the catered-to audience for a lot of tech, and so I've seen how things that were made to work really well for me do not work for some other people. And that's such a tiny little microcosm when we're looking at, you know, the greater scope of tech, which goes so far beyond just consumer electronics. I absolutely agree that that diversity is required if we're going to have AI that is truly trustworthy.

Beena: Yeah, exactly. And, you know, but it's never as straightforward as we tend to simplify it down to, right? Like, when we talk about explainability, there are real challenges, and those are real business challenges, even when you go down the open source route, right? A lot of the time, if you go too far down the explainability path, you know, you still have to share data and algorithms, and those are strategic assets, and it can result in compromising your company's IP.
Right? It can result in, you know, security hacks, because the more explainable you make it, the more susceptible it is to manipulation if its functionality is fully understood. There's the privacy aspect of it when prioritizing explainability. And, you know, how do you make sure you hit a balance where you are mitigating the risk but at the same time protecting your organizational IP? That's a question where there is no one single answer. It is for the stakeholders to come together and discuss it and identify where that balance is, because it's going to be different depending on your business.

Jonathan: It seems to me like you're saying the real world is a complicated place, and there are a lot of different shades of complexity to it, and that I can't just simply summarize it in a black-and-white approach, which I greatly appreciate. And that's interesting to me too. I'm glad to have that perspective, because again, as a communicator for tech, I know that I too fall into the same sort of pitfalls of oversimplifying for the purposes of trying to get a concept across, because to really dive into it, you start to feel like there are so many threads that you can't see the rope, or you can't see the forest for the trees, if you prefer. But that's very important to remember, and I think it is a great reminder that, again, like we said at the top, the use for this technology kind of defines the approach that you need to take in order to make certain that you're getting the result that you want. From a really high level, can you kind of talk about your concept of what, and this is almost a trick question, because there are so many different variations, but what an organization's process would be when considering implementing AI solutions? Like, a high-level approach.
Beena: Yes. Historically it's always been, you know, how can we use AI to solve this business problem? And what's the ROI? You know, how much profit are we going to increase by doing this, or how much cost are we going to save by doing this? Trust me, I've done those projects, and, you know, that's how every conversation starts, because we want to use technology to drive more business value, right? Whether it is through customer engagement, optimizing our existing processes, and so on. I think the discussion that needs to happen upfront, if you are serious about making your AI trustworthy, is defining: what does trustworthy AI mean for my organization? Right? And it could be different depending on the organization. It could be different depending on the use case. But having those high-level principles, and there are plenty of principles out there, there are plenty of frameworks out there, but I think every organization needs to think about what are the key pillars that they agree upon and that they would never want to violate, right? And once you have those, then the next step is to make sure every employee within your organization understands them. Because it's not just your IT team, it's not just the engineers or the data scientists who need to understand ethics. It's that marketing account person who is looking at using an AI solution, buying it from a vendor to use it within your company. They need to make sure that they are asking the questions which ensure trustworthiness: the software they're buying, has it been tested for fairness? What was it tested for? So every employee within the organization needs to understand what trustworthy AI means for my company, and how do I use it in my role? So, role-specific training.
And then the other crucial factor to decide, and we've seen variations of it in the industry, is, you know, whether it is getting a chief AI ethics officer or setting up an AI ethics advisory board, right? Making sure that there is somebody who is responsible to keep this moving within the organization is super important. That's more from a people perspective. And then the last thing is really looking at your existing processes. I don't think you need to completely come up with new processes or new controls, but just adding in a trustworthy check in your existing engineering processes, or in your existing development process, or your procurement process, to make sure you're checking for the trustworthiness of any AI tool that you buy or that you build. You know, in addition to the ROI discussion, spend ten percent of your time brainstorming on what are the ways this could go wrong, right? And capture it, and when you build that technology, put those guard rails in place. Now, it is guaranteed, it is impossible to identify all the possible ways it could go wrong, but even if you only get some of the ways it could go wrong, it is better than not thinking about it and not addressing it. So that is a very comprehensive way you can do it. But it is all easy. It fits in with the existing trainings and processes that you already have in your business.
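Beena's suggestion of folding a trustworthiness check into an existing development or procurement process could look something like the gate below. The pillars listed are examples drawn from this conversation; a real organization would substitute the ones its own stakeholders agreed on, so treat every name and check here as hypothetical.

```python
# Hypothetical pillar checks a team might agree on; each maps to the kind
# of question Beena says a buyer or builder should be able to answer.
TRUST_CHECKS = {
    "fairness_tested": "Has the model been tested for bias, and on what data?",
    "transparency_met": "Does explainability match what this use case demands?",
    "accountability_named": "Is a specific person accountable if it goes wrong?",
    "failure_modes_reviewed": "Did the team brainstorm ways this could go wrong?",
}

def trust_gate(review: dict) -> bool:
    """Pass only when every agreed pillar has been affirmatively answered."""
    unmet = [question for key, question in TRUST_CHECKS.items() if not review.get(key)]
    for question in unmet:
        print("BLOCKED:", question)
    return not unmet

# The same gate applies whether the AI tool is built in-house or bought.
ready = trust_gate({"fairness_tested": True, "accountability_named": True})
```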
Jonathan: Right. I gotta say, as someone who is a technologist, and coming at this from that angle, that was such a human-centric kind of answer. I really appreciate that. I've had a lot of discussions with various leadership around different companies, and this idea of having that explanation and getting buy-in from different departments, so that everyone's on the same page and they have an understanding of the purpose of a tool, how it's going to be implemented, what we expect to get out of it, that's actually crucial for anything, whether it's AI or not. Because I've seen so many examples of companies where you have one department who's like, the business development team wanted us to put this in, and I don't understand why. And if they don't understand why, then you don't get as good an output on the other end of it. I think making that part of the conversation, just as much as, you know, determining the approach to get a trustworthy AI, I think that's absolutely crucial.

Beena: Yes. And, you know, a lot of times we think it's a technology problem to fix, right? To build trustworthy AI, you know, it's a technology problem, it's your data scientists and engineers who need to think about it. But that's not the case, right? It's the entire group that needs to come together. And the risk is not just from a technology perspective. It's a brand and reputation risk. There are financial consequences, there are customer satisfaction consequences. There are so many other risks associated with it if your AI is not trustworthy.

Jonathan: Beena and I have a little bit more to talk about with AI, but before we get to that, let's take another quick break.

Jonathan: I remember covering that over in the European Union, there were various departments that were even talking about concepts that, again, are like science-fiction, far-off concepts, but even the concept of granting personhood to a sufficiently advanced AI for the purposes of figuring out accountability and responsibility for when something goes wrong. Who gets held accountable when the AI doesn't work right? What's your take on that?

Beena: I think, you know, we might reach that at some point. But in the interim, till we have that kind of, you know, rules or laws, I think one of the dimensions of trustworthy AI is absolutely defining accountability upfront, meaning, if the AI goes wrong, who is accountable for it? Who's going to face the Senate hearing?
Who's going to pay the fine? Is it the data scientist who built it? Is it the CIO who approved the project? Is it the CEO, or is it a board member, right? And the good news with that one, you know, talking about accountability upfront makes everybody proactively think about the ways it could go wrong, because you don't want to put your name on something that might go wrong when you have not thought about it. So until we get to that, you know, machine citizen rights level, I think, you know, even today there is a dimension of trustworthiness which is really around putting in a name for who is accountable when your AI goes wrong.

Jonathan: I agree that that's important. I have seen some of those Senate hearings with various tech people sitting in the seat, and I know that if I were in one of those conversations, I would not want to be that person. And making sure we specifically define who that person is, and that it's not me, would be top of my priority list. Well, I'm also curious then. We've seen, in a similar sense, some movement on things like autonomous cars. On a similar note, talking about accountability, we're starting to see more governments try to consider who is accountable for any accidents that might have happened under a car's autonomous or semi-autonomous operation. Obviously that's been a big point of discussion here in the United States. And this is one of those things: how closely tied do you think technology experts need to be with, say, politicians, who may not have the insight into tech, but yet are also responsible for creating and enacting policy that's going to have an effect on tech? Do you see there being more cross-talk?

Beena: Yeah, you know, unlike the car example, and the seat belt and speed limit example, AI does need an understanding of the technology to come up with those speed limits.
584 00:36:17,719 --> 00:36:20,880 Speaker 1: It is. So, you know, we've honestly entered that 585 00:36:21,000 --> 00:36:24,959 Speaker 1: era where collaboration is king, right? We have to make 586 00:36:25,000 --> 00:36:30,560 Speaker 1: sure that regulators and technologists, uh, policymakers, they are collaborating, 587 00:36:30,719 --> 00:36:33,239 Speaker 1: and each one is learning from the other, to come 588 00:36:33,320 --> 00:36:38,480 Speaker 1: up with the best possible guardrails or regulations or laws, 589 00:36:38,560 --> 00:36:42,279 Speaker 1: because this is not something that can be done in isolation, 590 00:36:42,320 --> 00:36:46,279 Speaker 1: unlike that auto speed limit example. So I think 591 00:36:46,320 --> 00:36:49,040 Speaker 1: we're going to see more, whether it is entities 592 00:36:49,120 --> 00:36:52,480 Speaker 1: being set up who will drive this collaboration, but there 593 00:36:52,600 --> 00:36:59,720 Speaker 1: are definitely, you know, across the globe, technologists being pulled together, 594 00:36:59,719 --> 00:37:04,440 Speaker 1: whether as an advisory committee or a council. That is 595 00:37:04,719 --> 00:37:08,200 Speaker 1: happening now, and, you know, I do think we will 596 00:37:08,239 --> 00:37:12,359 Speaker 1: start seeing results of that collaboration coming out sooner rather 597 00:37:12,400 --> 00:37:15,520 Speaker 1: than later. I also believe that, just like 598 00:37:15,560 --> 00:37:19,000 Speaker 1: I was talking about how every organization should train all their employees, 599 00:37:19,440 --> 00:37:24,239 Speaker 1: everybody who is involved in the regulation 600 00:37:24,360 --> 00:37:29,040 Speaker 1: making process should have a basic understanding of AI, a level 601 00:37:29,080 --> 00:37:32,239 Speaker 1: of AI fluency, or, you know, an understanding of what 602 00:37:32,280 --> 00:37:34,680 Speaker 1: machine learning really means, what it can do, what 603 00:37:34,800 --> 00:37:37,000 Speaker 1: it can not do. So I call it the AI 604 00:37:37,120 --> 00:37:40,640 Speaker 1: literacy training, right? So I think that's like table 605 00:37:40,680 --> 00:37:45,000 Speaker 1: stakes to drive a productive collaboration. But I think this 606 00:37:45,160 --> 00:37:47,360 Speaker 1: is the time for people like you and me, Jonathan, 607 00:37:47,400 --> 00:37:50,839 Speaker 1: to really step up and make sure that we're collaborating 608 00:37:51,080 --> 00:37:57,640 Speaker 1: closely so that it's informed and relevant 609 00:37:57,880 --> 00:38:02,400 Speaker 1: regulation, or relevant policy, that's put together. I think relevance 610 00:38:02,520 --> 00:38:06,600 Speaker 1: is absolutely the right word to use. Uh.
Again, 611 00:38:06,640 --> 00:38:09,040 Speaker 1: I'm not putting anyone on blast, but there have been 612 00:38:09,120 --> 00:38:12,520 Speaker 1: plenty of stories of people, whether they are in the 613 00:38:12,600 --> 00:38:18,399 Speaker 1: regulatory field or general politics, where their level of tech 614 00:38:18,480 --> 00:38:24,000 Speaker 1: savvy is probably not even measurable based upon some of 615 00:38:24,000 --> 00:38:26,920 Speaker 1: the things we've seen, and that is terrifying 616 00:38:27,040 --> 00:38:33,440 Speaker 1: when you realize the reach and the effect of technology, 617 00:38:33,480 --> 00:38:37,080 Speaker 1: and how, if you have a misunderstanding of it, you 618 00:38:37,120 --> 00:38:40,440 Speaker 1: can tackle something that's not really a problem, but you've 619 00:38:40,560 --> 00:38:43,960 Speaker 1: built it up as if it were, while completely missing 620 00:38:44,440 --> 00:38:47,520 Speaker 1: things that we absolutely need to pay closer attention to. 621 00:38:47,680 --> 00:38:51,560 Speaker 1: So I do try to make literacy one 622 00:38:51,600 --> 00:38:54,719 Speaker 1: of those things that I push for, and hopefully I 623 00:38:54,800 --> 00:39:00,719 Speaker 1: succeed more often than I fail. Yeah, we live 624 00:39:00,840 --> 00:39:04,319 Speaker 1: in this era now where, you know, at least in 625 00:39:04,440 --> 00:39:09,680 Speaker 1: the, uh, in the corporate world, right, we're seeing more 626 00:39:09,719 --> 00:39:15,320 Speaker 1: and more boards getting more technology savvy, leaders 627 00:39:15,360 --> 00:39:19,480 Speaker 1: who understand technology, because every company uses technology, 628 00:39:19,560 --> 00:39:23,279 Speaker 1: uses AI, no matter which industry they're in, right? So 629 00:39:23,320 --> 00:39:27,000 Speaker 1: we're seeing that composition of boards changing, right? And I 630 00:39:27,000 --> 00:39:29,960 Speaker 1: don't think we're very far from the time when, you know, 631 00:39:30,400 --> 00:39:34,640 Speaker 1: having a basic AI or technology understanding will be almost 632 00:39:34,640 --> 00:39:38,640 Speaker 1: a prerequisite. Right? Again, as I said, we're living in 633 00:39:38,680 --> 00:39:42,080 Speaker 1: this interim crazy phase where there are a lot of things 634 00:39:42,120 --> 00:39:45,359 Speaker 1: happening and we don't necessarily have all the foundations set up. 635 00:39:45,719 --> 00:39:48,680 Speaker 1: The exciting news is, for our generation, Jonathan, this is 636 00:39:48,680 --> 00:39:52,400 Speaker 1: our opportunity. Right? The work we do today is going 637 00:39:52,480 --> 00:39:56,560 Speaker 1: to be setting the foundation for future generations. So I think, uh, 638 00:39:56,840 --> 00:39:59,640 Speaker 1: you know, having that basic AI literacy, no, it's not 639 00:39:59,760 --> 00:40:03,319 Speaker 1: set up yet, but, you know, we now understand that, 640 00:40:03,560 --> 00:40:07,400 Speaker 1: you know, everybody who is involved in policymaking or regulations 641 00:40:07,480 --> 00:40:10,640 Speaker 1: needs to have that basic understanding. So let's make sure 642 00:40:10,800 --> 00:40:14,720 Speaker 1: that, you know, they have that.
That's great. It's 643 00:40:14,760 --> 00:40:17,239 Speaker 1: looking at something that I have defined as a 644 00:40:17,239 --> 00:40:21,080 Speaker 1: problem and you have defined as an opportunity, which I 645 00:40:21,120 --> 00:40:24,399 Speaker 1: needed to hear, honestly, because that's the kind of optimism 646 00:40:24,480 --> 00:40:29,520 Speaker 1: that I find really motivating. Beena, thank you so 647 00:40:29,640 --> 00:40:33,680 Speaker 1: much for being on the show. Your book, Trustworthy AI, 648 00:40:34,080 --> 00:40:36,319 Speaker 1: I have a copy coming to me. I have not 649 00:40:36,480 --> 00:40:39,160 Speaker 1: yet been able to read it. I am so eager 650 00:40:39,200 --> 00:40:42,120 Speaker 1: to go cover to cover on this, because just 651 00:40:42,120 --> 00:40:45,880 Speaker 1: this conversation has really energized me, and, um, you know, 652 00:40:46,280 --> 00:40:48,719 Speaker 1: when you have a podcast about tech and you've done 653 00:40:48,719 --> 00:40:52,439 Speaker 1: more than sevent episodes, sometimes you feel like, I've said 654 00:40:52,480 --> 00:40:54,400 Speaker 1: everything there is to say about that, and then I 655 00:40:54,400 --> 00:40:56,840 Speaker 1: have a conversation like this and I realize this is 656 00:40:56,840 --> 00:40:59,520 Speaker 1: an iceberg situation and I've just touched the very tip 657 00:40:59,560 --> 00:41:03,759 Speaker 1: of it. There's an entire world beneath the surface of 658 00:41:03,760 --> 00:41:06,960 Speaker 1: the water that I haven't even scratched. So thank you 659 00:41:07,040 --> 00:41:09,719 Speaker 1: so much for coming onto the show. Jonathan, this is 660 00:41:09,760 --> 00:41:12,799 Speaker 1: a very energizing conversation for me as well. Thank you 661 00:41:12,880 --> 00:41:15,839 Speaker 1: so much for having me on your show. Once again, 662 00:41:15,880 --> 00:41:19,320 Speaker 1: I have to thank Beena Ammanath for coming on the show. Uh, 663 00:41:19,360 --> 00:41:22,479 Speaker 1: I was thrilled at this opportunity when I first got 664 00:41:22,480 --> 00:41:25,279 Speaker 1: the email suggesting that I have her on my show, 665 00:41:25,320 --> 00:41:29,279 Speaker 1: because, to be totally clear, her team reached out to 666 00:41:29,360 --> 00:41:33,879 Speaker 1: me, and I just didn't even think about that possibility. 667 00:41:34,239 --> 00:41:36,719 Speaker 1: I am so glad that I followed up with that. 668 00:41:37,080 --> 00:41:40,120 Speaker 1: I do plan on having more interviews on this show 669 00:41:40,160 --> 00:41:42,560 Speaker 1: in the near future. I've got a couple more lined up. 670 00:41:43,040 --> 00:41:46,080 Speaker 1: I'm gonna try and do that more frequently. It is, 671 00:41:46,239 --> 00:41:48,440 Speaker 1: I'm gonna be transparent with all of you, it is 672 00:41:48,560 --> 00:41:54,200 Speaker 1: very tricky for me, because scheduling, uh, is tricky. People 673 00:41:54,200 --> 00:41:57,080 Speaker 1: are very busy, and it gives me a lot of 674 00:41:57,080 --> 00:42:01,319 Speaker 1: anxiety, just being absolutely transparent with all of you out there. 675 00:42:02,160 --> 00:42:06,279 Speaker 1: The process of scheduling gives me a lot 676 00:42:06,320 --> 00:42:09,279 Speaker 1: of anxiety. So it's something I'm working through, and I'm 677 00:42:09,320 --> 00:42:12,040 Speaker 1: trying to get more people on the show, one, because 678 00:42:12,080 --> 00:42:14,239 Speaker 1: there are so many interesting people out there.
And just with 679 00:42:14,320 --> 00:42:17,160 Speaker 1: this conversation with Beena, I really got that feeling 680 00:42:17,280 --> 00:42:21,239 Speaker 1: of, I need this, because it is giving me more 681 00:42:21,320 --> 00:42:23,919 Speaker 1: perspective than what I have, and I don't want 682 00:42:23,960 --> 00:42:27,239 Speaker 1: TechStuff to just be a narrow laser focus of 683 00:42:27,800 --> 00:42:32,719 Speaker 1: what Jonathan thinks about tech. Secondly, um, you know, I 684 00:42:32,760 --> 00:42:36,160 Speaker 1: think that it benefits the show, obviously, to have 685 00:42:36,160 --> 00:42:39,120 Speaker 1: that extra voice in there, and that means that it 686 00:42:39,360 --> 00:42:44,640 Speaker 1: becomes more enjoyable, because despite my enormous ego, I realize 687 00:42:44,719 --> 00:42:48,080 Speaker 1: I cannot be the most entertaining person in all the world, uh, 688 00:42:48,320 --> 00:42:51,960 Speaker 1: no matter how hard I try. So I hope you 689 00:42:52,080 --> 00:42:56,000 Speaker 1: all enjoyed this. If you have suggestions for future topics, 690 00:42:56,000 --> 00:42:58,759 Speaker 1: maybe you have suggestions for future guests I should try 691 00:42:58,800 --> 00:43:02,520 Speaker 1: and get on the show, reach out to me. Uh, 692 00:43:02,560 --> 00:43:06,040 Speaker 1: I promise I will do my best to get that 693 00:43:06,120 --> 00:43:08,680 Speaker 1: person on the show. I can't promise that it will happen, 694 00:43:08,680 --> 00:43:14,040 Speaker 1: but I'll try, and I'll work through this weird stress 695 00:43:14,080 --> 00:43:16,600 Speaker 1: I get whenever it comes down to trying to schedule 696 00:43:16,640 --> 00:43:19,799 Speaker 1: things. And, uh, just to be clear, Beena was 697 00:43:19,840 --> 00:43:23,319 Speaker 1: amazing, because we actually tried to record that interview on 698 00:43:23,320 --> 00:43:26,799 Speaker 1: one day but had a technical issue and ended up having 699 00:43:26,840 --> 00:43:31,400 Speaker 1: to reschedule. She was amazing, really good about 700 00:43:31,400 --> 00:43:36,120 Speaker 1: all that. So despite all of my anxiety, everything went great, 701 00:43:36,280 --> 00:43:38,920 Speaker 1: which... this isn't meant to be a 702 00:43:38,920 --> 00:43:41,480 Speaker 1: therapy session, but I think that's very typical for me, 703 00:43:41,600 --> 00:43:44,400 Speaker 1: where I get worked up about something and it turns out that 704 00:43:44,480 --> 00:43:46,680 Speaker 1: something wasn't really that big a deal. It was just 705 00:43:46,760 --> 00:43:50,080 Speaker 1: the anticipation of it that was the problem. So if 706 00:43:50,120 --> 00:43:52,120 Speaker 1: any of you out there suffer from something like that, 707 00:43:52,200 --> 00:43:54,480 Speaker 1: you know, you have that same sort of experience, listen, 708 00:43:54,480 --> 00:43:57,319 Speaker 1: I got your back. I know how it is. It 709 00:43:57,440 --> 00:44:00,160 Speaker 1: is frustrating, but you can do it. All right, 710 00:44:01,040 --> 00:44:03,920 Speaker 1: pep talk over, episode over. I hope you enjoyed it. 711 00:44:03,960 --> 00:44:05,759 Speaker 1: I am on vacation for the rest of the week, 712 00:44:05,840 --> 00:44:09,279 Speaker 1: so you should expect some classic episodes for the rest 713 00:44:09,320 --> 00:44:11,600 Speaker 1: of this week. But that doesn't mean they're bad. It 714 00:44:11,719 --> 00:44:14,920 Speaker 1: just means they're old, just like me.
I'm old, but 715 00:44:15,000 --> 00:44:17,440 Speaker 1: I'm not bad, and I will talk to you again. 716 00:44:17,440 --> 00:44:19,880 Speaker 1: Oh, if you want to reach out to me, you 717 00:44:19,880 --> 00:44:22,239 Speaker 1: gotta do it on Twitter. The handle for the show 718 00:44:22,320 --> 00:44:26,080 Speaker 1: is TechStuffHSW. There, now I get 719 00:44:26,120 --> 00:44:29,160 Speaker 1: to say the end catchphrase: I'll talk to you again 720 00:44:30,000 --> 00:44:38,520 Speaker 1: really soon. Tech Stuff is an I Heart Radio production. 721 00:44:38,760 --> 00:44:41,600 Speaker 1: For more podcasts from I Heart Radio, visit the I 722 00:44:41,719 --> 00:44:44,920 Speaker 1: Heart Radio app, Apple Podcasts, or wherever you listen to 723 00:44:45,000 --> 00:44:45,920 Speaker 1: your favorite shows.