Malcolm Gladwell: Hello, hello. This is Smart Talks with IBM, a podcast from Pushkin Industries, iHeartRadio, and IBM about what it means to look at today's most challenging problems in a new way. I'm Malcolm Gladwell. Today I'll be chatting with two IBM experts in artificial intelligence about the company's approach to building and supporting trustworthy AI as a force for positive change. I'll be speaking with IBM's Chief Privacy Officer, Christina Montgomery. She oversees the company's privacy vision and compliance strategy globally.

Christina Montgomery: Looking at things like immunity certificates and vaccine passports, not what could we do, but what were we willing as a company to do? Where were we going to put our skills and our knowledge and our company brand in response to technologies that could help provide information in response to the pandemic?

Malcolm Gladwell: She also co-chairs their AI Ethics Board. I'll also be talking with Dr. Seth Dobrin, Global Chief AI Officer at IBM. Seth leads corporate AI strategy and is responsible for connecting AI development with the creation of business value. Seth is also a member of IBM's AI Ethics Board.

Seth Dobrin: We want to make sure that the technology behind AI is as fair as possible, is as explainable as possible, is as robust as possible, and is as privacy-preserving as possible.

Malcolm Gladwell: We'll talk about the need to create AI systems that are fair and address bias, and how we need to focus on trust and transparency to accomplish this. What might the future look like with an open and diverse ecosystem, with governance across the industry? There's only one way to find out. Let's dive in.

One of the things I'm curious about is the origin of this interesting concern about the ethics and trust component of AI. Was it there from the start, or is it a later kind of evolutionary concern?

Seth Dobrin: About ten years ago, when we started down this journey of transforming business using what we think of as AI today, the concept of trust came up, but not in the same context that we think about it today.
The context of trust was really focused on: how do I know it's given me the right answer, so that I can make my decision? Because we didn't have tools that help explain how an AI came to a decision, you tended to get into these bake-offs, where you had to set up experiments to show that the AI was at least as good as a human, if not better, and to understand why. Over time it has progressed, as AI has started to come up against real human conditions. I think that's when we started thinking about what is going on with AI as it relates to bias. About five to eight years ago, there was an issue with mortgage algorithms, particularly related to zip code, that started producing biases against people of certain races. I think those things combined have led us to the point where we are today. Plus, the social justice movement over the last two years has really accelerated a lot of the concern.

Malcolm Gladwell: I notice you're a lawyer by trade, Christina. It's an interesting subject, because it seems like this is where AI experts like Seth and lawyers work together. It sounds like a classic cross-disciplinary endeavor. Can you talk about that a little bit?

Christina Montgomery: It's absolutely cross-disciplinary in nature. For example, on our AI Ethics Board, I'm the co-chair; the other co-chair is our AI Ethics Global Leader, Francesca Rossi, who is a renowned researcher in AI ethics, so she comes with that research background. We had a board in place, an AI ethics board, before I stepped into this job, and there were a lot of great discussions among researchers and people who deeply understood the technology, but it didn't have decision-making authority. It didn't have all stakeholders,
or many stakeholders across the business, at the table. So when I came into the job as a lawyer, and as somebody with a corporate governance background, I was tasked with building out the operational aspects of it: to make it capable of implementing centralized decision-making, to give it authority, to bring in perspectives from across the business and from people with different focuses within the IBM corporation, lots of different backgrounds. We have very robust conversations, and we also engage individuals throughout IBM who care very much about the topic from an advocacy standpoint, or who are working in the space individually and have thoughts around the topic, are doing projects in the space, or want to publish in the space. We have a very organic way of having them be involved as well. It's absolutely necessary to have that cross-disciplinary aspect.

Malcolm Gladwell: At the beginning of your answer you talked about "robust conversations," a phrase I love. Can both of you give me an example of an issue that's come up with respect to trust and AI?

Christina Montgomery: One example might be the technologies that we would employ as a company in response to the COVID-19 pandemic. There were a lot of things we could have done, and it became a question not of what we were capable of deploying from a technology perspective, but whether we should be deploying certain technologies, whether that be facial recognition for fever detection or certain contact-tracing technologies. Our Digital Health Pass is a good example of a technology that came through the board multiple times: if we were going to deploy a vaccine passport, which is not necessarily what this technology turned out to be, but looking at things like immunity certificates and vaccine passports, not what could we do, but what were we willing as a company to do?
Where were we going to put our skills and our knowledge and our company brand, in response to technologies that could help to either bring about a cure or help provide information in response to the pandemic?

Seth Dobrin: COVID is a great example because it highlights the value and the acceleration that good governance can bring. The way that we as an ethics board laid out the rules, the guardrails if you will, around what we could, would, and wouldn't do for COVID helped people just do things without worrying that they needed to bring them to the board. It also laid out very clearly: for this type of use case, we need to go have a conversation with the board. And it provided a venue for us as a company to make risk-based decisions: okay, this is a little bit of a fuzzy area, but we think, given what's going on in the world right now and the importance of this, we're willing to take this risk so long as we go back and clean everything up later. So I think it's really important that, number one, governance is set up so that it accelerates things, not stops them; and number two, that there's clear guidance. It's not just "no"; it's "here's what you can do and here's what you can't do," and it helps the teams figure out how they can still move things forward in a way that doesn't infringe on our principles.

Malcolm Gladwell: I want to get a concrete sense of how a concern about trust and transparency would guide what a technology company might do. Give me a real example.

Seth Dobrin: So, if I want to make sure that people are wearing face masks, and I just highlight that there is someone in this area who is not wearing a face mask, and I'm not identifying the person, I think we'd be okay with that. What we wouldn't be okay with is if they wanted to identify the person in a way that the person did not consent to, and that was very generic: I'm going to go through a database of unknown people and match them to this person. That would not be okay. And a fuzzy area would be: I'm going to match this to a known person, so I know this is an employee and I know this is him. That's something we as a board would want to have a conversation about. If this employee is not wearing a mask, can I match them to a name, or do I just send security personnel over because the employee is not wearing a mask? That's a harder one, I think, and that's a real-world example that we faced during COVID.
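To make the distinction concrete: the approved design reports only an anonymous, zone-level alert and never performs an identity lookup. Below is a minimal Python sketch of that shape; the data structures are invented stand-ins for a real vision pipeline, not IBM's actual system.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    """Anonymous, zone-level alert: no name, no face data retained."""
    zone: str
    message: str

def check_frame(people: list[dict], zone: str) -> list[Alert]:
    """Flag mask violations without identifying anyone.

    `people` is a toy stand-in for detector output: one dict per detected
    person, with a mask flag and (deliberately unused) biometric features.
    """
    alerts = []
    for person in people:
        if not person["wearing_mask"]:
            # By design we never read person["face_embedding"], so no
            # database match (the step the board ruled out) can happen.
            alerts.append(Alert(zone, "Someone in this area is not wearing a mask."))
    return alerts

if __name__ == "__main__":
    frame = [
        {"wearing_mask": True, "face_embedding": [0.11, 0.73]},
        {"wearing_mask": False, "face_embedding": [0.42, 0.19]},
    ]
    for alert in check_frame(frame, zone="lobby-cam-3"):
        print(f"{alert.zone}: {alert.message}")
```

The design choice is that privacy is enforced structurally: the alert type simply has no field that could carry an identity.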
Malcolm Gladwell: Let's talk a little bit about diversity and shared responsibility as principles that matter in this world of AI. What do those terms mean as applied to AI, and what's the practical effect of seeking to optimize those goals?

Seth Dobrin: First of all, I think we need to have good representation of society doing the work that impacts society. A, it's just the right thing to do. B, there's tons of research out there showing that diverse teams outperform non-diverse teams; there's a McKinsey report that says companies in the top quartile for diversity outperform their peers that aren't. So, tons of good research. The second thing is, you just don't get as good results when you don't have equal representation at the table. There are lots of good examples of this. There was a hiring algorithm that was evaluating applicants and passing them forward, but the vast majority of the company's past applicants were male, and so female applicants were just summarily wiped out, regardless, to some extent, of their fit for the role.
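The hiring failure Seth describes is detectable with a simple metric. Disparate impact is the selection rate of the unprivileged group divided by that of the privileged group; values below roughly 0.8 fail the common four-fifths rule. Here is a plain-Python sketch over invented screening data; open-source toolkits such as IBM's AI Fairness 360, of the kind Christina describes next, package this and many related bias metrics.

```python
def selection_rate(decisions: list[tuple[str, bool]], group: str) -> float:
    """Fraction of applicants in `group` that the screener passed forward."""
    passed = [ok for g, ok in decisions if g == group]
    return sum(passed) / len(passed)

def disparate_impact(decisions, unprivileged: str, privileged: str) -> float:
    """Ratio of selection rates; below ~0.8 fails the four-fifths rule."""
    return selection_rate(decisions, unprivileged) / selection_rate(decisions, privileged)

# Invented screening decisions: (group, passed_forward)
decisions = (
    [("male", True)] * 80 + [("male", False)] * 20
    + [("female", True)] * 30 + [("female", False)] * 70
)

di = disparate_impact(decisions, unprivileged="female", privileged="male")
print(f"disparate impact = {di:.2f}")  # 0.38 here: far below 0.8, a red flag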
Malcolm Gladwell: I wanted to ask you, Christina: a project comes before the board, and a conversation might be, "The team you've put together and the data you're looking at are insufficiently diverse; we're worried that you're not capturing the reality of the kind of world we're operating in." Is that an example of a conversation you might have at the board level?

Christina Montgomery: Well, I think the best way to look at it is what the board is doing to try to address those issues of bias. For example, we've got a team of researchers who work on trusted technology, and one of the early things they did was deploy toolkits that help detect bias, that help make AI more explainable, that help make it trustworthy in general. Those tools were initially very focused on bias, and they deployed them to open source so they could be built on and improved. Right now the board is focused more broadly, not looking at an individual problem in an individual use case with respect to bias, but instilling those ethical principles across the business through something we're calling Ethics by Design. Bias was the first focus area of Ethics by Design, and we've got a team of folks, led by the Ethics Board, who are working on the question you asked, Malcolm: how do we ensure that the AI we're deploying internally, or the tools and products we're deploying for customers, take that into account throughout the life cycle of AI? Through Ethics by Design, the guidance coming out from the board starts at the conceptual phase and then applies across the life cycle: in the case of an internal use of AI, up through the actual use, and in the case of AI that we're deploying for customers or putting into a product, up through the point of deployment.
So it's very much about embedding those considerations into our existing processes across the company, to make sure they're thought of not just once, and not just in the use cases the board has an opportunity to review, but in our practices as a company and in our thinking as a company. Much like we and other companies did years ago with respect to privacy and security, the concept of privacy and security by design, which some may be familiar with, that stemmed from the GDPR in Europe. Now we're doing the same thing with ethics.

Malcolm Gladwell: How unusual is what you are doing? If I lined up all the tech companies that are heavily into AI right now, would I find similar programs in all of them, or are you off by yourselves?

Christina Montgomery: I think we take a somewhat unique perspective. In fact, we were recently recognized as a leader in the ethical deployment of technology and responsible technology use by the World Economic Forum. The World Economic Forum and the Markkula Center for Applied Ethics at Santa Clara University did an independent case study of IBM that recognized our leadership in this space because of the holistic approach we take. We're a little different, I think, from some other tech companies that do have similar councils in place, because of the broad and cross-disciplinary nature of ours. We're not just researchers, we're not just technologists. We literally have representation from backgrounds spanning the company, whether legal, developers, researchers, or HR professionals and the like. So that makes the program itself a little unique. And then we hear from clients who are thinking for themselves about how to make sure the technology they're deploying or using externally, or with their clients, is trustworthy.
So they're asking us: how did you go about this, how do you think about it as a company, what are your practices? On that point, our CEO is the co-chair of something called the Global AI Action Alliance, initiated by the World Economic Forum, and as part of that we've committed to, in a sense, open-sourcing our approach. So we've been talking a lot about our approach. I think it is a little unique, as I said, but we are sharing it because, again, we don't want to be the only ones with trustworthy AI and with this holistic, cross-disciplinary approach. We think it's the right approach. It's certainly the right approach for our company, and we want to share it with the world. It's not secret or proprietary.

Seth Dobrin: And if you talk to the analyst community that serves the tech sector, they say far and wide that IBM is ahead in terms of things we're actually doing, as opposed to just talking about it, all while making sure that it is enforceable and impactful. For instance, we were talking about how we review use cases and can require that the teams adjust them. That's unique. Most of the other tech companies do not have that level of oversight in terms of ensuring that their outcomes are aligned. There's a lot of good talk, but the WEF case study that came out, I think in September, really supports that we're ahead. And if you look at companies in general that have AI ethics boards: my experience, and I interact with hundreds of leaders and companies a year, is that less than five percent of them have a board in place, and even fewer really have a rhythm going and know how they're going to operate as a board yet.

Malcolm Gladwell: I wanted to talk a little bit about the role of government here. Is government leading or following?
Christina Montgomery: I would say they're catching up; "following" is probably the more accurate word. Over the last couple of years, as we talked about, or maybe it's been almost ten years at this point, as these issues have come to light, companies have largely been left to themselves to impose guardrails on their practices and their use of AI. That's not to say there aren't laws that apply; discrimination laws, for example, would apply to technology that's discriminatory. But on the unique aspects, to the extent there are unique aspects, or issues that get amplified through the application of AI systems, the government is really just catching up. The EU proposed a comprehensive regulatory framework for AI in the spring timeframe. In the US, we see the FTC starting to focus on algorithmic bias, and on algorithms in general being fair and the like. And there are numerous other initiatives, following the EU, that are looking at frameworks for governing and regulating AI. We've been involved; I mentioned our precision-regulation recommendation. We have something called the IBM Policy Lab, and what differentiates our advocacy through the Policy Lab is that we try to make concrete, actionable policy recommendations. Not just articulating principles again, but really concrete recommendations for companies, governments, and policymakers around the globe to implement and follow. Out of our precision regulation of AI, for example, comes our recommendation that regulation should be risk-based and context-specific, and that it should allocate responsibility to the party that's closest to the risk, which may be different at different times in the life cycle of an AI system.
So we deploy some general-purpose technologies, and then our clients train those over time; the responsibility should sit with the party that's closest to the risk at the different points in time in the AI life cycle.

Malcolm Gladwell: You know, one of the interesting things about this issue today: we're now in a situation where a company like IBM, I'm guessing, would be as sensitive to public reaction to the uses of AI as to government reaction. It's a fascinating development of our age that whatever form public reaction takes can be a more powerful lever in changing corporate behavior than what governments are saying. Do you think this is true in the AI space?

Seth Dobrin: I think the government regulation we're seeing is responding to public sentiment, so I agree with you a hundred percent that this is being moved by the public. And oftentimes, when we have conversations at the ethics board, Christina and the lawyers will say, "Okay, this is not a legal issue," and then the next conversation is, "What happens if this story shows up on the front page of the New York Times or the Wall Street Journal?" So absolutely, we consider that.

Christina Montgomery: I would add to that: we're probably the oldest technology company, we're over a hundred years old, and our clients have looked to us for that hundred-plus years to responsibly usher in new technologies and to manage their data, their most sensitive data, in a trusted way. So for us it's not just about the headline risk. It's about ensuring that we have a business going forward, because our clients trust us and society trusts us.
So with the guardrails we put in place, particularly around the trust and transparency principles, or the guardrails we put in place around responsible data use in the COVID pandemic, there was nothing from a legal perspective that said we couldn't do more. There was nothing that said in the US we can't use facial recognition technology at our sites. But we made principled decisions, and we made those decisions because we think they're the right decisions to make. And when I look back at the Ethics Board and the analysis and the use cases that have come forward over the course of the last two years, I can think of very few where we said we're not going to do this because we're afraid of regulatory repercussions. In fact, I can't think of any, because it wouldn't have come to the board if it was a legal issue. And yet we did refine, and in some cases stop, actual transactions and solutions, because we felt they were not the right thing to do.

Malcolm Gladwell: A question for either of you: can you dig a little more into the real-world applications of this? What are some of the very concrete kinds of things that come out of this focus on trust?

Seth Dobrin: Some real-world examples of how trust plays into what we're doing get back to a couple of things Christina said earlier, around how we're open-sourcing a lot of what we do. Our research division builds a lot of the technology that winds up in our products, and, particularly related to this topic of AI ethics and trustworthy AI, our default is to open-source the base of the technology. So we have a whole set of open-source toolkits that anyone can use. In fact, some of our competitors use them in their products as much as we do.
And then we build value-adds on top of those. That's something we advocate strongly for; the Ethics Board helps support us with that, as do our product teams. Because AI is one of those spaces where, when something goes wrong, it affects everyone: if there's a big issue with AI, everyone's going to be concerned about all AI. So we want to make sure that the technology behind AI is as fair as possible, is as explainable as possible, is as robust as possible, and is as privacy-preserving as possible. Toolkits that address those are all publicly available, and then we build value-added capabilities on top of them when we bring those things to our customers in the form of an integrated platform that helps manage the whole life cycle of an AI. Because AI is different from software, in that the technology under AI is machine learning. What that means is that the machine keeps learning over time and adjusting the model over time. Once you write a piece of software, it's done; it doesn't change. So you need to figure out how to continuously monitor your AI over time for those things I just described, and integrate that monitoring into your security- and privacy-by-design practices, so that your models are continuously updated and aligned to your company's principles, as well as societal principles and any relevant regulations.
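Seth's monitoring point can be sketched in a few lines: because a deployed model's behavior drifts with new data, a lifecycle platform recomputes fairness (and other) metrics on each batch of production decisions and alerts when a guardrail is crossed. The threshold, metric, and data below are invented for illustration; they reuse the disparate-impact measure from the earlier sketch and are not IBM's actual platform logic.

```python
DI_FLOOR = 0.8  # assumed guardrail, echoing the four-fifths rule

def disparate_impact(batch: list[tuple[str, bool]]) -> float:
    def rate(group: str) -> float:
        outcomes = [ok for g, ok in batch if g == group]
        return sum(outcomes) / max(len(outcomes), 1)
    return rate("unprivileged") / max(rate("privileged"), 1e-9)

def monitor(daily_batches):
    """Recheck each day's production decisions; yield alerts on drift."""
    for day, batch in enumerate(daily_batches):
        di = disparate_impact(batch)
        if di < DI_FLOOR:
            yield f"day {day}: disparate impact {di:.2f} < {DI_FLOOR}, review the model"

# Invented stream: the model starts fair, then drifts as it keeps learning.
day0 = [("privileged", True)] * 50 + [("unprivileged", True)] * 45 + [("unprivileged", False)] * 5
day1 = [("privileged", True)] * 50 + [("unprivileged", True)] * 20 + [("unprivileged", False)] * 30
for alert in monitor([day0, day1]):
    print(alert)  # fires only for day 1
```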
Malcolm Gladwell: One final question: give me one prediction about what AI looks like five or ten years from now.

Seth Dobrin: That is a really, really good question. When we look at what AI does today: it's very insightful, and it helps us realize things that as humans we may not have picked up on our own. So, to augment our intelligence, AI surfaces insights and reduces complexity from almost infinite and incomprehensible to humans down to "I have five choices I can make based on the output of an AI." But AI is unable, for the most part today, to provide context or reasoning. AI provides an answer, but there's no reasoning, as we think about it as humans, associated with it. There's a new set of technologies coming up, a bunch of them lumped under something called neurosymbolic reasoning. What neurosymbolic reasoning means is using mathematical equations, AI algorithms, to reason similarly to the way a human does. So, for instance, the Internet contains all sorts of things, good and bad. Let's look at something that's relevant to me, at least, being of Jewish background: you want algorithms to know about the Nazi regime, but you don't want algorithms spewing rhetoric about the Nazi regime. Today, when we build an AI, it's almost impossible for us to get the algorithm to differentiate those two things. With a reasoning layer around it, you could prevent an algorithm from learning rhetoric that is not conducive to norms. That's just one example. Those are the kinds of things you'll see over the next three to five years.
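As a loose illustration of the idea only (a toy sketch, not a real neurosymbolic system, which would reason over structured knowledge rather than regular expressions): a symbolic rule layer can sit between a generative model and its output, allowing factual mentions of a topic while blocking advocacy of it. Every rule, prompt, and canned response below is invented.

```python
import re

# Toy symbolic rules, checked in order: (pattern, verdict).
RULES = [
    (re.compile(r"\bregime\b.*\b(was right|glorious|heroic)\b"), "block"),  # advocacy
    (re.compile(r"\bregime\b"), "allow"),  # plain factual mention
]

def neural_generate(prompt: str) -> str:
    """Stand-in for the neural half: returns canned text for the demo."""
    return {
        "history": "The regime rose to power in 1933.",
        "bad": "The regime was right about everything.",
    }[prompt]

def guarded_generate(prompt: str) -> str:
    """Neural proposal filtered by the symbolic layer; first matching rule wins."""
    text = neural_generate(prompt)
    for pattern, verdict in RULES:
        if pattern.search(text):
            return text if verdict == "allow" else "[blocked by policy rule]"
    return text

print(guarded_generate("history"))  # factual mention: allowed
print(guarded_generate("bad"))      # advocacy: blocked
```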
Christina Montgomery: I think we'll see a lot more explainability and transparency around AI. For example, "you're seeing this ad because you searched for X, Y, and Z," or "you're seeing a shoe ad because you visited this site," or more transparency that you're dealing with a chatbot. Whenever AI is being applied to you, I think you'll see a lot more transparency and disclosure around that. And then the less practical, more aspirational answer: we know AI is changing jobs, it's eliminating some, it's creating new ones. I hope that, with principles around AI, it will be used to augment and help humans, that it will be human-centered, that it will put people first at the heart of the technology, that it will make people better and smarter at what they do, and that there will be more interesting work. So I'm hoping that's what will ultimately come out of AI: more awareness of where it's already being used in your life day to day, more transparency around that, more explainability around that, and then, ultimately, more trust.

Malcolm Gladwell: Wonderful. I think that covers our bases. This has been really, really fascinating. Thank you for joining me. I expect that we will be having, both as a company inside IBM and as a society, many, many more conversations about AI in the coming years. So I'm glad to be on the early end of that process, because we're not done with this one, are we? Not by a long shot.

Christina Montgomery: Just the beginning.

Malcolm Gladwell: Thank you again.

Seth Dobrin: Thanks for having us.

Christina Montgomery: Thank you.

Malcolm Gladwell: Thank you again to Christina Montgomery and Seth Dobrin for the discussion about trust and transparency around AI, and for their insights about what may be possible in the future. It will be fascinating to see how IBM can help foster positive change in the industry.

Smart Talks with IBM is produced by Emily Rostak, with Carly Migliori and Katherine Gurda. Edited by Karen Shakerdge, mixed and mastered by Jason Gambrell. Music by Gramoscope. Special thanks to Molly Sosha, Andy Kelly, Mia Lobel, Jacob Weisberg, Heather Fain, Eric Sandler, and Maggie Taylor, and the teams at 8 Bar and IBM. Smart Talks with IBM is a production of Pushkin Industries and iHeartRadio. This is a paid advertisement from IBM. You can find more episodes at ibm.com/smarttalks. You'll find more Pushkin podcasts on the iHeartRadio app, Apple Podcasts, or wherever you like to listen. I'm Malcolm Gladwell. See you next time.