Welcome to Tech Stuff, a production from iHeartRadio.

This season of Smart Talks with IBM is all about new creators: the developers, data scientists, CTOs, and other visionaries creatively applying technology in business to drive change. They use their knowledge and creativity to develop better ways of working, no matter the industry. Join hosts from your favorite Pushkin Industries podcasts as they use their expertise to deepen these conversations, and of course Malcolm Gladwell will guide you through the season as your host and provide his thoughts and analysis along the way. Look out for new episodes of Smart Talks with IBM on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts, and learn more at IBM dot com slash smart talks.

Malcolm Gladwell: Hello, hello. Welcome to Smart Talks with IBM, a podcast from Pushkin Industries, iHeartRadio, and IBM. I'm Malcolm Gladwell. This season, we're talking to new creators: the developers, data scientists, CTOs, and other visionaries who are creatively applying technology in business to drive change. Channeling their knowledge and expertise, they're developing more creative and effective solutions, no matter the industry. Our guest today is Phaedra Boinodiris, Trustworthy AI practice leader within IBM Consulting. She advocates for artificial intelligence that is built and deployed responsibly, which is no longer just a compliance issue but a business imperative. Part of Phaedra's job is to help companies identify potential risks and pitfalls way before any code is written. In today's show, you'll hear how Phaedra's team at IBM is approaching this challenge holistically and creatively. Phaedra spoke with Dr. Laurie Santos, host of the Pushkin podcast The Happiness Lab. Laurie is a professor of psychology at Yale University and an expert on human cognition and the cognitive biases that impede better choices. Now let's get to the interview.

Laurie Santos: Phaedra, I'm so excited that we get a chance to chat today.
You know, just to start off, I'm wondering, how did you get started in this role at IBM? Like, what's the story of how you got where you are today?

Phaedra Boinodiris: Oh goodness. My background is actually from the world of video games for entertainment, so AI has always been very interesting to me, especially when you intersect AI and play. But several years ago, I began to get very frustrated by what I was reading in the news with respect to mal-intent through the use of AI. And the more that I learned and the more that I studied about this space of AI and ethics, the more I recognized that even organizations that have the very, very best of intentions could inadvertently cause potential harm.

Laurie Santos: That's super cool. I love that your interest in more responsible AI came from the gaming world. You have to talk a little bit about your history with gaming and how that informed your interest in trustworthy AI.

Phaedra Boinodiris: Well, it wasn't as much necessarily the ethical components of AI when I was working in games. It was more things like, look at what non-player characters can do, you know. I mean, if you've got an AI acting as a character within the game, how is it that you can use AI in order to make a game a more interesting experience? Actually, I ended up joining IBM to be our first global lead for something called serious games, which is when you use video games to do something other than just entertain, and the idea is integrating real data and real processes within sophisticated games powered by AI to solve complex problems. It wasn't until later, as I mentioned, when all of us started to hear more and more news about problems, about what could happen with respect to rendering or putting out models that are inaccurate or unfair.

Laurie Santos: I know from hearing other interviews that you've done that one of your inspirations is sci-fi.
I'm also a sci-fi nerd, and I know sci-fi has talked a lot about, you know, the trustworthiness issues that come up when we're dealing with AI and so on. So talk a little bit about how you bring that to your work in developing AI that's a little bit more ethical.

Phaedra Boinodiris: A lovely question. So, my parents were major technophiles. They both were immigrants to the United States, came here to study engineering, and they met in college. Growing up, my sister and I, we had Star Trek playing every night. My parents were both big fans of Gene Roddenberry's vision of how technology could really be used to help better humankind, and that was the ethos that, of course, we grew up in. The wonderful thing about science fiction isn't that it predicts cars, for example, but that it predicts traffic jams, you know. And I think there's just so much we can learn from science fiction, or in fact, like I said, play, as a mechanism to be able to teach.

Laurie Santos: Science fiction predicting traffic jams. I love it.

Malcolm Gladwell: But when we think about AI and science fiction, we need to be careful. We need to remember that AI is not something that's going to enter our lives at some point in the distant future. AI is something that's all around us today. If you have a virtual assistant in your house, that's AI. Your phone app that predicts traffic? AI. When a streaming service recommends a movie? You guessed it, AI. Phaedra says AI may be behind the scenes determining the interest rate on your loan, or even whether or not you're the right candidate for that job you applied for. AI is both ubiquitous and invisible, which is why it is so crucial that companies learn how to build trustworthy AI. How do we do that?

Phaedra Boinodiris: When thinking about what it takes to earn trust in something like an AI, there are fundamentally human-centric questions to be asked, right? Like, what is the intent of this particular AI model? How accurate is that model?
How fair is it? Is it explainable? If it makes a decision that could directly affect my livelihood, can I inquire what data you used about me to make this decision? Is it protecting my data? Is it robust? Is it protected against people who could trick it to disadvantage me over others? I mean, there are so many questions to be asked. Earning trust in something like AI is fundamentally not a technological challenge but a socio-technological challenge. It can't just be solved with a tool alone.

Laurie Santos: What are the kinds of risks that companies have to think through as they're developing these technologies, to make sure they're as trustworthy as possible?

Phaedra Boinodiris: Well, you know, they may be putting a lot of money into investing in AI that gets stuck in proof of concept, that gets stuck in pilot. We've done some research where we have found about 80 percent of investments in AI get stuck. And sometimes it's because the investment isn't tied directly to a business strategy, or, more often than not, people simply don't trust the results of the AI model.

Laurie Santos: As a company that is of course thinking about this so deeply, what do businesses need to consider when they're trying to figure out, you know, how to solve this big puzzle of AI ethics?

Phaedra Boinodiris: It has to be approached holistically. So you've got to be thinking about, for example, what culture is required within your organization in order to really be able to responsibly create AI, what processes are in place to make sure that you're being compliant and that your practitioners know what to do, and then of course the AI engineering frameworks and tooling that can assist you on this journey. There is so much, fundamentally, to do. We found that actually those who were leading responsible, trustworthy AI initiatives within their organization have switched in the last three years.
It used to be technical leaders, for example a chief data officer or someone who has a PhD in machine learning, and now it's switched: 80 percent of those leaders are now non-technical business leaders, maybe, you know, a chief compliance officer, diversity and inclusivity officers, a chief legal officer. So we're seeing a shift, and I believe firmly it's a recognition from organizations that are seeing that in order to really pull this off well, there has to be an investment and a focus in culture, in people, and in getting people to understand why they should care about this space.

Laurie Santos: And so I see two challenges with doing that, right? One is, you know, a lot of these technology companies are really built to be tech companies, not necessarily, you know, social tech companies, or having this sort of training in ethics and beyond. Another issue seems to be that you're really proposing a switch that's truly holistic, right? That's like rethinking the way the company thinks about its bottom line. And so as you think about working through these kinds of challenges at IBM, how have you tackled this? Like, how have you brought new talent in? How have you thought really carefully about this big holistic switch that needs to come to make AI more trustworthy?

Phaedra Boinodiris: Data is an artifact of the human experience. And if you start with that as your definition and then think about, well, data is curated by data scientists, all data is biased. And so if you're not recognizing bias with eyes fully open, then ultimately you're calcifying systemic bias into systems like AI. So some of the things that we've done at IBM, again recognizing this important need for culture, is a big, big, big focus on diversity: not only looking at teams of data scientists and saying how many women are on this team, how many minorities are on this team, but also insisting on recognizing that we need to bring in people with different worldviews too. For example, what's your definition of fairness?
Is your definition equality, or is it equity? Also bringing in people with a wider variety of skill sets and roles, including our social scientists, anthropologists, sociologists, psychologists like yourself, right, behavioral scientists, designers. I mean, we have one of the leading AI design practices in the world. I mean the effort, the investments we've been making in design thinking as a mechanism to create frameworks for systemic empathy well before any code is written, so people can think through how you would design in order to mitigate any potential harm, given not only the values of your organization but the rights of individuals. Asking oneself these kinds of questions reinforces the idea that ethics doesn't come at the end, like it's some kind of quality assurance, like, check, I passed the audit, I've got to go, you know. But instead, really, you know, as soon as you're thinking about using an AI for a particular use case, you're thinking about, you know, what is the intent of this model, what's the relationship we ultimately want to have with AI? And again, these are non-technology questions. This is where social scientists come in. Having a social scientist on your team helping think through these kinds of questions is critical.

Malcolm Gladwell: Let's pause here for a second, because this is a really profound idea. Building responsible AI does not mean that you create a system, then check in at the end and say, is this okay? Is this ethical? If you don't ask those questions until the end of the process, you've already failed. You have to think about ethics from the jump: from the makeup of the team, to the data you're using to train the model, to the most basic question of all, is this even the right use case for artificial intelligence? The big lesson from IBM is this: responsible AI is something you build at every step of the process.
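To make the distinction Phaedra draws between equality and equity a bit more concrete, here is a minimal, hypothetical sketch in Python of how a team might check a binary classifier against both notions of fairness while the model is still being developed, rather than auditing at the end. The data, group labels, metric names, and function names are invented for illustration; this is not IBM tooling or any code discussed in the episode.

```python
# Hypothetical illustration: two common ways to quantify "fairness" for a binary classifier.
# "Equality"-style: demographic parity (similar approval rates across groups).
# "Equity"-style: equal opportunity (similar true-positive rates for qualified people across groups).

def rate(values):
    return sum(values) / len(values) if values else 0.0

def fairness_report(y_true, y_pred, group):
    """y_true, y_pred: 0/1 lists; group: a group label per example (e.g. 'A' or 'B')."""
    groups = sorted(set(group))
    approval, tpr = {}, {}
    for g in groups:
        idx = [i for i, gi in enumerate(group) if gi == g]
        approval[g] = rate([y_pred[i] for i in idx])          # P(pred = 1 | group)
        qualified = [i for i in idx if y_true[i] == 1]
        tpr[g] = rate([y_pred[i] for i in qualified])         # P(pred = 1 | y = 1, group)
    return {
        "approval_rate_by_group": approval,
        "demographic_parity_gap": max(approval.values()) - min(approval.values()),
        "true_positive_rate_by_group": tpr,
        "equal_opportunity_gap": max(tpr.values()) - min(tpr.values()),
    }

# Toy usage: qualified applicants (y_true = 1) from group B are approved less often.
y_true = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]
y_pred = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
group  = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(fairness_report(y_true, y_pred, group))
```

On this toy data the model approves 60 percent of group A but only 20 percent of group B, and it misses two of the three qualified applicants in group B, so the two readings of "fair" disagree in size. Which gap you choose to minimize is exactly the kind of definitional choice, equality versus equity, that Phaedra is pointing at.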
Laurie Santos: So this season of Smart Talks is all focused on creativity and business. My guess is that thinking about trustworthy AI involves a lot of creativity. But talk to me about some of the spots where you see this work as being most creative.

Phaedra Boinodiris: Oh goodness. I would say incorporating design, design thinking in particular, as well as straight-up design, in order to craft AI responsibly.

Laurie Santos: You've used this word "design thinking," and so I'm wondering exactly what you mean here. How do you define this idea of design thinking?

Phaedra Boinodiris: Design thinking is a practice that we established here at IBM many years ago. In essence, what it is, it's a way of working with groups of people to co-create a vision for something, for a product or a service or an outcome. And typically it starts with things like, for example, empathy maps. If you're thinking about an end user, thinking through what is this person thinking, seeing, hearing, feeling, what are they experiencing, in order to ultimately craft an experience for them that is targeted specifically for them. So we use it in a really wide variety of different ways with respect to trustworthy AI, even rendering an AI model explainable to a subject. And I'll give you an example. We've got this wonderful program within IBM called our Academy of Technology, and we take on initiatives that steer the company in innovative new directions. We had an initiative titled What the Titanic Taught Us About Explainable AI, and the project was imagining if there was an AI model that could predict the likelihood of a passenger getting a life raft on the Titanic. And we broke up into two work streams. One was the work stream full of the data scientists, who were using all the different explainers to come up with the predictions, and they would crank out the numbers. And the other team, here's where the social scientists and the designers lived, right, where we were thinking through, how do we empower people? How do we explain this algorithm and this predictor and the accuracy behind this prediction in such a way as to ultimately empower the end users? They could decide, I'm not getting on that boat, or, I want to get a second opinion please, or, I want to contest the outputs of this model because I upgraded to first class just yesterday. See what I'm saying? And that takes a lot of creativity. How do you design an experience for someone in order to ultimately empower them? So design is critically, critically important, and that's why I mentioned, you know, we've got to open up the aperture with respect to who we invite to the table in these kinds of conversations.
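As a rough illustration of the data-science half of an exercise like the one Phaedra describes, here is a hypothetical sketch in Python: a small model trained on invented passenger features predicts the chance of getting a life raft, and a crude per-passenger explanation is produced from the model's linear contributions. The dataset, feature names, and labels are made up, scikit-learn is assumed to be available, and real projects would typically lean on dedicated explainability methods (SHAP or LIME, for example) rather than raw coefficients. The point is only to show the kind of output the design and social-science work stream would then have to turn into something a passenger could understand, question, or contest.

```python
# Hypothetical illustration (invented data, not the Academy of Technology project):
# train a small model to predict "got a life raft" and attach a per-passenger explanation.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy features: [passenger_class (1-3), is_female (0/1), age_in_years]
X = np.array([[1, 1, 29], [1, 0, 45], [2, 1, 34], [3, 0, 22],
              [3, 1, 19], [2, 0, 51], [3, 0, 38], [1, 0, 60]])
y = np.array([1, 1, 1, 0, 1, 0, 0, 1])  # 1 = got a life raft (invented labels)

model = LogisticRegression(max_iter=1000).fit(X, y)

def explain(passenger):
    """Probability plus each feature's linear contribution to the score (a crude 'explainer')."""
    names = ["class", "is_female", "age"]
    contributions = model.coef_[0] * passenger
    prob = model.predict_proba([passenger])[0, 1]
    return prob, {n: float(round(c, 2)) for n, c in zip(names, contributions)}

prob, reasons = explain(np.array([3, 0, 28]))
print(f"Predicted chance of a life raft: {prob:.0%}")
print("Feature contributions to the score:", reasons)
```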
Malcolm Gladwell: Taking the time to really understand other people's perspectives is so important when you're doing anything creative, and it is fundamental to the way the new creators work. The core question you should always be asking is, where will the user be meeting this product? As Phaedra said, what will they be thinking, seeing, hearing, feeling? If you can answer those questions the way IBM does in its design thinking practice, you will be in great shape to create almost anything, really. Let's hear how it works in practice.

Laurie Santos: And so we've been mostly talking kind of at the meta level about, you know, how to think about AI ethics generally, but of course the way this probably occurs in the trenches is a client approaches IBM and they want help with a specific problem in AI. And so I'm wondering, from a client-based perspective, where do you start having some of these tough conversations?

Phaedra Boinodiris: It has varied, to tell you the truth. We had one client that approached us to expand the use of an AI model to infer skill sets of their employees, but not just to infer their technical skills, also their soft foundational skills, meaning, let me use an AI to determine what kind of communicator you might be, Laurie, right?
Others might come to us with, okay, we recognize we need help setting up an AI ethics board, is this something you can assist us with? Or, we have these values, we need to establish AI ethics principles and processes to help us ensure that we're compliant given regulations coming down the pike. Or we've had clients come to us saying, please train our people how to assess for unexpected patterns in an AI model, but then also how to holistically mitigate to prevent any potential harm. And those have been phenomenal engagements. They're huge learning moments.

Laurie Santos: And so it seems like the real additional value that IBM is bringing through this process isn't necessarily just providing an AI algorithm or consulting on some AI algorithm. It seems like the real value added is explaining how this design thinking works. You're almost like this therapist, or like a really good bartender, who talks to people, who talks whole companies through some of their problems to try to figure out where they're going astray before they start implementing these things.

Phaedra Boinodiris: Can I put Chief Bartender Officer on my... I like the metaphor. I'll tell you, some of our most valuable people on the team for that engagement: we had an industrial-organizational psychologist, we had an anthropologist. That's why I'm saying it's important that we bring in the social scientists, because you're exactly right, it's more than just scrutinizing the algorithm in its state. You have to be thinking about how it is being used holistically.

Laurie Santos: And so if I was a business that was trying to think about how a company like IBM could come in and help out with more trustworthy AI, what would this process really look like?
Phaedra Boinodiris: Well, what we're finding more often than not is that there'll be smaller teams within broader organizations that either have the responsibility of compliance and see the writing on the wall, or they've been the ones investing in AI and are trying to figure out how to get the rest of the organization on board with respect to things like setting up an ethics board or establishing principles or things like that. So some things that we've done to help companies do this is we kick off engagements with what we call our AI for Leaders workshops. On the one hand, it's teaching why you should care, but on the other hand, it's meant to get people so excited across the organization that they want to raise their hand and say, I want to represent this part. Like, for example, I want to be part of the ethics board as it is being stood up. The hard part's not the tech. The hard part is human behavior. And I know I'm preaching to the choir given your background.

Laurie Santos: It's so nice as a psychologist to hear this. I'm like snapping my fingers, like, preach.

Phaedra Boinodiris: Exactly. The hard part is human behavior. So it's been like drinking from a fire hose, I mean, in terms of the kinds of things that we've all been learning, and there's still so much to learn. It really bugs me that those who are lucky enough to be able to take classes in things like data ethics or AI ethics self-categorize as coders, machine learning scientists, or data scientists. If we're living in a world where AI is fundamentally being used to make decisions that could directly affect our livelihoods, we need to know more. We need to have more literacy, and also make sure that there is a consistent message of accessibility, such that we are saying, you don't just have to be interested in coding. If you're interested in social justice or psychology or anthropology, there's a seat at the table for you here, because we desperately need you.
We desperately need that kind of skill set. Just getting people to think about, how do you design something given an empathy lens to protect people? I mean, that, I think, is such a crucial skill to learn.

Laurie Santos: You know, one thing I love about your approach is that when you're talking to clients, you're almost doing what I'm doing as a professor, where you're kind of instructing students, getting them to think in different ways. But I know from my field that I wind up learning as much from students as I think sometimes they learn from me. And so I'm wondering what you've learned in the process of helping so many businesses approach AI a little bit more ethically. Like, have there been insights that you've gotten through your interaction with clients and the challenges they've been facing?

Phaedra Boinodiris: I'm learning with every single interaction. For example, in my mind, given the experiences that IBM has had with respect to setting up our principles, our pillars, our AI Ethics Board, there's a process to follow, right? If you're thinking about it like a book, these are the chapters in order, to optimize the approach, let's say. But sometimes we work with clients that say, I'm going to install this tool and I want to jump to chapter seven, and it's like, okay, you know, how do we help navigate clients that want to skip over steps that we think are important? Another one is, again, the social scientists, and bringing them in to really push hard on, what is the right context for this data, tell me the origin story. Again, really pushing us to think hard and with their perspective. You know, it's just constant, constant learning, which is why one of the things we did at IBM is we've established something called our Center of Excellence, where we said, you know what, IBMer, we don't care what your background is, we don't care who you are. If you're interested in this space, you can become a member.
The Center of Excellence is a way in which we have not only projects people can join in order to get real-life experience, but then also share back: here's what we learned, we did this with this particular one, and here was our epiphany. Because if we're not sharing back and we're not constantly educating, then we're missing the opportunity to establish the right culture. Establishing the right culture to share what we're learning is so important.

Laurie Santos: So I wanted to go back to where we started, you with your technophile family watching Star Trek. I think if we were to fast-forward a couple of decades, we probably couldn't have imagined that we'd be in the place with AI generally where we are now, and especially as we think through more trustworthy AI. And so, you know, with such change happening right now, with the fact that it's a fire hose that's going to just get even more powerful over time, what do you think is next in this world of thinking through more trustworthy AI?

Phaedra Boinodiris: I would say next is far more education, far more understanding. And we're starting to see that shift, far more CEOs saying, yeah, ethics has to be core to our business. There has been a shift. Barely half of the CEOs were saying that AI ethics was key or important to their business, and now you're seeing the great majority. So education, education, education. And again, I would underscore making it far more accessible to far more people, which means it's not just our classes in higher-ed institutions, it's our conferences, it's anytime we write white papers, anytime we publish articles, anytime we do podcasts like this, right? The way we talk about this space has to be far more accessible and open and inviting to people with different roles, different skill sets, different worldviews, because otherwise, again, we're just codifying our own bias.

Laurie Santos: Well, Phaedra, I want to express my gratitude today for making AI a little bit more accessible to everyone.
This has been such a delightful conversation. Thank you so much for joining me for it.

Phaedra Boinodiris: The pleasure was mine. Laurie, thank you for being the consummate host.

Laurie Santos: Thank you.

Malcolm Gladwell: I want to close by going back to that moment when Laurie suggested that Phaedra was actually IBM's Chief Bartender Officer, not just because that's the best C-suite title ever, but because it gets at what I think is the biggest, most important idea in today's episode. Phaedra boiled it down into a single line when she said, the hard part is not the tech. The hard part is human behavior. Why is building AI so complicated? Because people are complicated. IBM believes that building trust into AI from the start can lead to better outcomes, and that to build trustworthy AI, you don't just need to think like a computer scientist. You need to think like a psychologist, like an anthropologist. You need to understand people.

Smart Talks with IBM is produced by Molly Sosha, Alexandra Garraton, Royston Reserve, and Edith Russolo, with Jacob Goldstein. We're edited by Jan Guerra. Our engineers are Jason Gambrel, Sarah Brugere, and Ben Holiday. Theme song by Gramoscope. Special thanks to Carly Migliori, Andy Kelly, Kathy Callaghan, and the 8 Bar and IBM teams, as well as the Pushkin marketing team. Smart Talks with IBM is a production of Pushkin Industries and iHeartMedia. To find more Pushkin podcasts, listen on the iHeartRadio app, Apple Podcasts, or wherever you listen to podcasts. I'm Malcolm Gladwell. This is a paid advertisement from IBM.