1 00:00:04,440 --> 00:00:12,639 Speaker 1: Welcome to Tech Stuff, a production from iHeartRadio. This season 2 00:00:12,720 --> 00:00:16,200 Speaker 1: on Smart Talks with IBM, Malcolm Gladwell and team are 3 00:00:16,239 --> 00:00:19,479 Speaker 1: diving into the transformative world of artificial intelligence with a 4 00:00:19,520 --> 00:00:24,120 Speaker 1: fresh perspective on the concept of open. What does open 5 00:00:24,560 --> 00:00:27,920 Speaker 1: really mean in the context of AI? It can mean 6 00:00:28,000 --> 00:00:31,600 Speaker 1: open source code or open data, but it also encompasses 7 00:00:31,720 --> 00:00:36,040 Speaker 1: fostering an ecosystem of ideas, ensuring diverse perspectives are heard, 8 00:00:36,159 --> 00:00:39,919 Speaker 1: and enabling new levels of transparency. Join hosts from your 9 00:00:39,960 --> 00:00:43,560 Speaker 1: favorite Pushkin podcasts as they explore how openness in AI 10 00:00:43,800 --> 00:00:49,040 Speaker 1: is reshaping industries, driving innovation, and redefining what's possible. You'll 11 00:00:49,080 --> 00:00:52,360 Speaker 1: hear from industry experts and leaders about the implications and 12 00:00:52,400 --> 00:00:56,160 Speaker 1: possibilities of open AI, and of course, Malcolm Gladwell will 13 00:00:56,200 --> 00:00:58,200 Speaker 1: be there to guide you through the season with his 14 00:00:58,360 --> 00:01:01,840 Speaker 1: unique insights. Look out for new episodes of Smart Talks 15 00:01:01,920 --> 00:01:05,360 Speaker 1: every other week on the iHeartRadio app, Apple Podcasts, or 16 00:01:05,360 --> 00:01:08,920 Speaker 1: wherever you get your podcasts, and learn more at IBM 17 00:01:09,080 --> 00:01:11,360 Speaker 1: dot com slash Smart Talks. 18 00:01:12,520 --> 00:01:15,559 Speaker 2: Hey, Malcolm Gladwell here. I'm back in your feed today 19 00:01:15,600 --> 00:01:18,399 Speaker 2: because we are re-releasing an episode of Smart Talks 20 00:01:18,440 --> 00:01:23,520 Speaker 2: with IBM on a very timely topic: AI governance and 21 00:01:23,600 --> 00:01:28,360 Speaker 2: why regulation is critical to building responsible and accountable AI. 22 00:01:29,000 --> 00:01:34,840 Speaker 2: I hope you enjoy it. Hello, hello. Welcome to Smart 23 00:01:34,840 --> 00:01:40,480 Speaker 2: Talks with IBM, a podcast from Pushkin Industries, iHeartRadio, and IBM. 24 00:01:40,840 --> 00:01:45,200 Speaker 2: I'm Malcolm Gladwell. This season, we're continuing our conversation with 25 00:01:45,400 --> 00:01:49,800 Speaker 2: new creators, visionaries who are creatively applying technology in business 26 00:01:49,840 --> 00:01:53,600 Speaker 2: to drive change, but with a focus on the transformative 27 00:01:53,600 --> 00:01:57,560 Speaker 2: power of artificial intelligence and what it means to leverage 28 00:01:57,600 --> 00:02:02,440 Speaker 2: AI as a game-changing multiplier for your business. Our 29 00:02:02,440 --> 00:02:07,640 Speaker 2: guest today is Christina Montgomery, IBM's Chief Privacy and Trust Officer. 30 00:02:08,240 --> 00:02:12,240 Speaker 2: She's also chair of IBM's AI Ethics Board. In addition 31 00:02:12,320 --> 00:02:16,680 Speaker 2: to overseeing IBM's privacy policy, a core part of Christina's 32 00:02:16,760 --> 00:02:21,120 Speaker 2: job involves AI governance, making sure the way AI is 33 00:02:21,280 --> 00:02:27,520 Speaker 2: used complies with international legal regulations, customized for each industry.
34 00:02:28,200 --> 00:02:33,399 Speaker 2: In today's episode, Christina will explain why businesses need foundational 35 00:02:33,440 --> 00:02:38,000 Speaker 2: principles when it comes to using technology, why AI regulation 36 00:02:38,320 --> 00:02:42,680 Speaker 2: should focus on specific use cases over the technology itself, 37 00:02:43,160 --> 00:02:47,320 Speaker 2: and share a little bit about her landmark congressional testimony 38 00:02:47,720 --> 00:02:51,960 Speaker 2: last May. Christina spoke with doctor Laurie Santos, host of 39 00:02:52,000 --> 00:02:56,760 Speaker 2: the Pushkin podcast The Happiness Lab. A cognitive scientist and 40 00:02:56,960 --> 00:03:01,240 Speaker 2: psychology professor at Yale University, Laurie is an expert on 41 00:03:01,480 --> 00:03:06,520 Speaker 2: human happiness and cognition. Okay, let's get to the interview. 42 00:03:09,080 --> 00:03:11,160 Speaker 3: So Christina, I'm so excited to talk to you today. 43 00:03:11,320 --> 00:03:13,560 Speaker 3: So let's start by talking a little bit about your 44 00:03:13,639 --> 00:03:16,320 Speaker 3: role at IBM. What does a Chief Privacy and Trust 45 00:03:16,360 --> 00:03:17,400 Speaker 3: Officer actually do? 46 00:03:18,200 --> 00:03:21,960 Speaker 4: It's a really dynamic profession, and it's not a new profession, 47 00:03:22,520 --> 00:03:25,360 Speaker 4: but the role has really changed. I mean, my role 48 00:03:25,480 --> 00:03:29,200 Speaker 4: today is broader than just helping to ensure compliance with 49 00:03:29,360 --> 00:03:33,959 Speaker 4: data protection laws globally. I'm also responsible for AI governance, 50 00:03:34,040 --> 00:03:36,840 Speaker 4: I co-chair our AI Ethics Board here at IBM, 51 00:03:37,280 --> 00:03:40,360 Speaker 4: and for data clearance and data governance as well for 52 00:03:40,440 --> 00:03:43,960 Speaker 4: the company. So I have both a compliance aspect to 53 00:03:44,000 --> 00:03:46,760 Speaker 4: my role, really important on a global basis, but I also 54 00:03:47,280 --> 00:03:52,160 Speaker 4: help the business to competitively differentiate, because really, trust is 55 00:03:52,200 --> 00:03:55,880 Speaker 4: a strategic advantage for IBM and a competitive differentiator, as 56 00:03:55,920 --> 00:03:59,760 Speaker 4: a company that's been responsibly managing the most sensitive data 57 00:03:59,760 --> 00:04:02,480 Speaker 4: for our clients for more than a century now and 58 00:04:02,600 --> 00:04:05,320 Speaker 4: helping to usher new technologies into the world with trust 59 00:04:05,360 --> 00:04:08,760 Speaker 4: and transparency, and so that's also a key aspect of 60 00:04:08,760 --> 00:04:09,200 Speaker 4: my role. 61 00:04:09,680 --> 00:04:11,880 Speaker 3: And so you joined us here on Smart Talks back 62 00:04:11,920 --> 00:04:14,120 Speaker 3: in twenty twenty one and you chatted with us about 63 00:04:14,160 --> 00:04:17,800 Speaker 3: IBM's approach of building trust and transparency with AI, and 64 00:04:17,960 --> 00:04:20,000 Speaker 3: that was only two years ago. But it almost feels 65 00:04:20,000 --> 00:04:22,320 Speaker 3: like an eternity has passed in the field of AI 66 00:04:22,480 --> 00:04:25,520 Speaker 3: since then. And so I'm curious how much has changed 67 00:04:25,560 --> 00:04:28,000 Speaker 3: since you were here last time. The things you 68 00:04:28,080 --> 00:04:30,760 Speaker 3: told us before, you know, are they still true? How 69 00:04:30,760 --> 00:04:31,560 Speaker 3: are things changing?
70 00:04:32,240 --> 00:04:35,719 Speaker 4: You're absolutely right, it feels like the world has changed 71 00:04:35,800 --> 00:04:39,520 Speaker 4: really in the last two years. But the same fundamental 72 00:04:39,520 --> 00:04:44,440 Speaker 4: principles and the same overall governance apply to IBM's program 73 00:04:45,040 --> 00:04:49,359 Speaker 4: for data protection and responsible AI that we talked about 74 00:04:49,440 --> 00:04:52,080 Speaker 4: two years ago, and not much has changed there from 75 00:04:52,160 --> 00:04:55,279 Speaker 4: our perspective. And the good thing is we've put these 76 00:04:55,320 --> 00:04:59,840 Speaker 4: practices and this governance approach into place, and we've had 77 00:05:00,080 --> 00:05:03,680 Speaker 4: an established way of looking at these emerging technologies. As 78 00:05:03,720 --> 00:05:07,120 Speaker 4: the technology evolves, the tech is more powerful, for sure, 79 00:05:07,240 --> 00:05:11,320 Speaker 4: foundation models are vastly larger and more capable and are 80 00:05:11,360 --> 00:05:14,760 Speaker 4: creating in some respects new issues. But that just makes 81 00:05:14,800 --> 00:05:16,640 Speaker 4: it all the more urgent to do what we've been 82 00:05:16,680 --> 00:05:20,240 Speaker 4: doing and to put trust and transparency into place across 83 00:05:20,279 --> 00:05:22,920 Speaker 4: the business, to be accountable to those principles. 84 00:05:23,880 --> 00:05:26,039 Speaker 3: And so our conversation today is really centered around this 85 00:05:26,160 --> 00:05:29,400 Speaker 3: need for new AI regulation, and part of that regulation 86 00:05:29,520 --> 00:05:32,440 Speaker 3: involves the mitigation of bias. And this is something I 87 00:05:32,440 --> 00:05:34,800 Speaker 3: think about a ton as a psychologist, right? You know, 88 00:05:34,839 --> 00:05:37,599 Speaker 3: I know, like, my students and everyone who's interacting with 89 00:05:37,640 --> 00:05:40,760 Speaker 3: AI is assuming that the kind of knowledge that they're 90 00:05:40,800 --> 00:05:43,800 Speaker 3: getting from this kind of learning is accurate, right? But 91 00:05:43,880 --> 00:05:45,880 Speaker 3: of course AI is only as good as the knowledge 92 00:05:45,920 --> 00:05:48,200 Speaker 3: that's going in. And so talk to me a little 93 00:05:48,200 --> 00:05:51,479 Speaker 3: bit about, like, why bias occurs in AI and the 94 00:05:51,560 --> 00:05:53,440 Speaker 3: level of the problem that we're really dealing with. 95 00:05:54,520 --> 00:05:57,560 Speaker 4: Yeah, well, obviously AI is based on data, right? It's 96 00:05:57,920 --> 00:06:02,920 Speaker 4: trained with data, and that data could be biased in 97 00:06:02,960 --> 00:06:05,400 Speaker 4: and of itself, and that's where issues could come up. 98 00:06:05,440 --> 00:06:07,360 Speaker 4: They come up in the data, they could also come 99 00:06:07,440 --> 00:06:10,960 Speaker 4: up in the output of the models themselves. So it's 100 00:06:11,080 --> 00:06:15,799 Speaker 4: really important that you build bias consideration and bias testing 101 00:06:15,960 --> 00:06:19,320 Speaker 4: into your product development cycle.
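[A minimal sketch of the kind of check Christina describes here, measuring bias both in the training data and in the model's outputs using the disparate impact ratio. The data, the group labels "a" and "b", and the 0.8 rule of thumb are illustrative assumptions for this sketch, not IBM's playbook or toolkit code; IBM's open-source AI Fairness 360 toolkit offers production-grade versions of metrics like these.]

```python
# Hypothetical example: bias measured at the data level and at the
# model (outcome) level. Groups "a" and "b" and all numbers are
# illustrative assumptions, not IBM tooling.

def selection_rate(outcomes, groups, group):
    """Fraction of favorable outcomes (1s) received by one group."""
    favorable = [o for o, g in zip(outcomes, groups) if g == group]
    return sum(favorable) / len(favorable)

def disparate_impact(outcomes, groups, privileged, unprivileged):
    """Ratio of unprivileged to privileged selection rates.
    A common rule of thumb flags values below 0.8."""
    return (selection_rate(outcomes, groups, unprivileged)
            / selection_rate(outcomes, groups, privileged))

groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

# Data-level check: are the training labels themselves skewed?
train_labels = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
print("data-level DI:", disparate_impact(train_labels, groups, "a", "b"))

# Model-level (outcome-level) check: are the model's predictions
# skewed, even if the training data looked acceptable?
predictions = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]
print("model-level DI:", disparate_impact(predictions, groups, "a", "b"))
```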
And so, what we've been 102 00:06:19,400 --> 00:06:21,919 Speaker 4: thinking about here at IBM and doing: we had some 103 00:06:22,000 --> 00:06:24,960 Speaker 4: of our research teams deliver some of the very first 104 00:06:24,960 --> 00:06:28,320 Speaker 4: toolkits to help detect bias, years ago now, right, and 105 00:06:28,360 --> 00:06:31,960 Speaker 4: deploy them to open source. And we have put into 106 00:06:32,040 --> 00:06:35,360 Speaker 4: place for our developers here at IBM an Ethics by 107 00:06:35,400 --> 00:06:38,640 Speaker 4: Design playbook that's sort of a step-by-step approach 108 00:06:39,360 --> 00:06:45,920 Speaker 4: which also addresses bias considerations very fully, and we provide 109 00:06:45,960 --> 00:06:49,120 Speaker 4: not only, like, here's a point when you should test 110 00:06:49,200 --> 00:06:51,520 Speaker 4: for it and you consider it in the data; you 111 00:06:51,600 --> 00:06:53,360 Speaker 4: have to measure it both at the data and the 112 00:06:53,360 --> 00:06:56,880 Speaker 4: model level, or the outcome level, and we provide guidance 113 00:06:56,920 --> 00:06:59,520 Speaker 4: with respect to what tools can best be used to 114 00:06:59,520 --> 00:07:03,080 Speaker 4: accomplish that. So it's a really important issue. It's one 115 00:07:03,120 --> 00:07:06,160 Speaker 4: you can't just talk about. You have to provide essentially 116 00:07:06,200 --> 00:07:09,480 Speaker 4: the technology and the capabilities and the guidance to enable 117 00:07:09,520 --> 00:07:10,640 Speaker 4: people to test for it. 118 00:07:11,080 --> 00:07:13,800 Speaker 3: Recently, you had this wonderful opportunity to head to Congress 119 00:07:13,800 --> 00:07:16,880 Speaker 3: to talk about AI, and in your testimony before Congress 120 00:07:16,880 --> 00:07:19,880 Speaker 3: you mentioned that it's often said that innovation moves too 121 00:07:19,960 --> 00:07:22,520 Speaker 3: fast for government to keep up. And this is something 122 00:07:22,560 --> 00:07:24,720 Speaker 3: that I also worry about as a psychologist, right? Are our 123 00:07:24,720 --> 00:07:27,640 Speaker 3: policy makers really understanding the issues that they're dealing with? 124 00:07:28,080 --> 00:07:30,360 Speaker 3: And so I'm curious how you're approaching this challenge of 125 00:07:30,400 --> 00:07:32,800 Speaker 3: adapting AI policies to keep up with the sort of 126 00:07:32,920 --> 00:07:35,400 Speaker 3: rapid pace of all the advancements we're seeing in the 127 00:07:35,400 --> 00:07:36,800 Speaker 3: AI technology itself. 128 00:07:38,040 --> 00:07:41,240 Speaker 4: I think it's really critically important that you have foundational 129 00:07:41,280 --> 00:07:46,760 Speaker 4: principles that apply to not only how you use technology, 130 00:07:46,800 --> 00:07:48,280 Speaker 4: but whether you're going to use it in the first 131 00:07:48,320 --> 00:07:49,920 Speaker 4: place and where you're going to use and apply it 132 00:07:49,960 --> 00:07:54,040 Speaker 4: across your company. And then your program, from a governance perspective, 133 00:07:54,120 --> 00:07:56,640 Speaker 4: has to be agile. It has to be able to 134 00:07:56,720 --> 00:08:02,400 Speaker 4: address emerging capabilities, new training methods, etc.
And part of 135 00:08:02,440 --> 00:08:07,000 Speaker 4: that involves helping to educate and instill and empower a 136 00:08:07,000 --> 00:08:10,800 Speaker 4: trustworthy culture at a company, so you can spot those 137 00:08:10,840 --> 00:08:13,120 Speaker 4: issues, so you can ask the right questions at the 138 00:08:13,160 --> 00:08:16,360 Speaker 4: right time. We talked about it during the 139 00:08:16,400 --> 00:08:20,640 Speaker 4: Senate hearing, and IBM's been talking for years about regulating 140 00:08:20,960 --> 00:08:24,080 Speaker 4: the use, not the technology itself, because if you try 141 00:08:24,120 --> 00:08:27,880 Speaker 4: to regulate the technology, you're very quickly going to find out 142 00:08:28,320 --> 00:08:31,240 Speaker 4: regulation will absolutely never keep up with it. 143 00:08:31,520 --> 00:08:33,720 Speaker 3: And so in your testimony to Congress, you also talked 144 00:08:33,720 --> 00:08:36,959 Speaker 3: about this idea of a precision regulation approach for AI. 145 00:08:37,559 --> 00:08:40,240 Speaker 3: Tell me more about this. What is a precision regulation approach, 146 00:08:40,400 --> 00:08:42,240 Speaker 3: and why could that be so important? 147 00:08:42,400 --> 00:08:45,360 Speaker 4: It's funny, because I was able to share with Congress 148 00:08:46,040 --> 00:08:49,800 Speaker 4: our precision regulation point of view in twenty twenty three, 149 00:08:50,160 --> 00:08:53,000 Speaker 4: but that precision regulation point of view was published by 150 00:08:53,080 --> 00:08:57,760 Speaker 4: IBM in twenty twenty. So we have not changed our 151 00:08:57,920 --> 00:09:02,800 Speaker 4: position that you should apply the tightest controls, the strictest 152 00:09:02,840 --> 00:09:07,120 Speaker 4: regulatory requirements, to the technology where the end use and 153 00:09:07,240 --> 00:09:11,199 Speaker 4: risk of societal harm is the greatest. So that's essentially 154 00:09:11,200 --> 00:09:14,160 Speaker 4: what it is. There's lots of AI technology that's used 155 00:09:14,160 --> 00:09:17,840 Speaker 4: today that doesn't touch people, that's very low risk in nature. 156 00:09:18,320 --> 00:09:21,760 Speaker 4: And even when you think about AI that delivers a 157 00:09:21,880 --> 00:09:27,320 Speaker 4: movie recommendation versus AI that is used to diagnose cancer, right, 158 00:09:27,360 --> 00:09:30,880 Speaker 4: there's very different implications associated with those two uses of 159 00:09:30,920 --> 00:09:35,719 Speaker 4: the technology. And so essentially what precision regulation is, is 160 00:09:35,760 --> 00:09:39,160 Speaker 4: applying different rules to different risks, right? More stringent regulation 161 00:09:39,400 --> 00:09:42,520 Speaker 4: to the use cases with the greatest risk. And then 162 00:09:42,679 --> 00:09:46,839 Speaker 4: also we build that out, calling for things like transparency. 163 00:09:47,480 --> 00:09:51,040 Speaker 4: You see it today with content, right, misinformation and the like. 164 00:09:51,600 --> 00:09:55,559 Speaker 4: We believe that consumers should always know when they're interacting 165 00:09:55,600 --> 00:09:58,280 Speaker 4: with an AI system. So be transparent, don't hide your 166 00:09:58,320 --> 00:10:03,240 Speaker 4: AI. Define the risks.
So as a country, we need 167 00:10:03,280 --> 00:10:06,319 Speaker 4: to have some clear guidance, right, and globally as well, 168 00:10:06,679 --> 00:10:10,240 Speaker 4: in terms of which uses of AI are higher risk, 169 00:10:10,360 --> 00:10:14,680 Speaker 4: where we'll apply higher and stricter regulation, and have sort 170 00:10:14,679 --> 00:10:17,600 Speaker 4: of a common understanding of what those high-risk uses 171 00:10:17,640 --> 00:10:21,240 Speaker 4: are, and then demonstrate the impact in the cases of 172 00:10:21,280 --> 00:10:26,040 Speaker 4: those higher-risk uses. So companies who are using AI 173 00:10:26,200 --> 00:10:30,559 Speaker 4: in spaces where they can impact people's legal rights, for example, 174 00:10:31,160 --> 00:10:35,400 Speaker 4: should have to conduct an impact assessment that demonstrates that 175 00:10:35,480 --> 00:10:38,680 Speaker 4: the technology isn't biased. So we've been pretty clear about this: 176 00:10:39,080 --> 00:10:42,800 Speaker 4: apply the most stringent regulation to the highest-risk uses 177 00:10:42,840 --> 00:10:43,199 Speaker 4: of AI. 178 00:10:44,679 --> 00:10:46,920 Speaker 3: And so, so far we've been talking about your congressional 179 00:10:46,960 --> 00:10:49,800 Speaker 3: testimony in terms of, you know, the specific content that 180 00:10:49,840 --> 00:10:52,320 Speaker 3: you talked about. But I'm just curious, on a personal level, 181 00:10:52,640 --> 00:10:54,760 Speaker 3: you know, what was that like? Right now 182 00:10:54,760 --> 00:10:57,040 Speaker 3: it feels like, at a policy level, there's a 183 00:10:57,120 --> 00:10:59,440 Speaker 3: kind of fever pitch going on with AI. 184 00:10:59,840 --> 00:11:01,600 Speaker 3: What did that feel like, to kind of really have 185 00:11:01,679 --> 00:11:03,960 Speaker 3: the opportunity to talk to policy makers and sort of 186 00:11:03,960 --> 00:11:07,000 Speaker 3: influence what they're thinking about AI technologies in the 187 00:11:07,000 --> 00:11:07,760 Speaker 3: coming century, perhaps? 188 00:11:07,800 --> 00:11:11,000 Speaker 4: It was really an honor to be able to 189 00:11:11,040 --> 00:11:14,040 Speaker 4: do that, and to be one of the first set 190 00:11:14,040 --> 00:11:18,160 Speaker 4: of invitees to the first hearing. And what I learned 191 00:11:18,160 --> 00:11:21,320 Speaker 4: from it essentially is, you know, really two things. The 192 00:11:21,360 --> 00:11:25,240 Speaker 4: first is really the value of authenticity. So both as 193 00:11:25,400 --> 00:11:29,360 Speaker 4: an individual and as a company, I was able to 194 00:11:29,440 --> 00:11:31,880 Speaker 4: talk about what I do. You know, I didn't need 195 00:11:31,920 --> 00:11:35,160 Speaker 4: a lot of advance prep, right. I talked about what 196 00:11:35,320 --> 00:11:38,800 Speaker 4: my job is, what IBM has been putting in place 197 00:11:38,840 --> 00:11:43,080 Speaker 4: for years now. So this isn't about creating something. This 198 00:11:43,240 --> 00:11:45,840 Speaker 4: was just about showing up and being authentic. And we 199 00:11:45,840 --> 00:11:48,960 Speaker 4: were invited for a reason. We were invited because we 200 00:11:48,960 --> 00:11:52,840 Speaker 4: were one of the earliest companies in the AI technology space. 201 00:11:53,520 --> 00:11:58,640 Speaker 4: We're the oldest technology company, and we are trusted, and 202 00:11:58,960 --> 00:12:01,360 Speaker 4: that's an honor.
And then the second thing I came 203 00:12:01,400 --> 00:12:04,120 Speaker 4: away with was really how important this issue is to society. 204 00:12:04,160 --> 00:12:08,640 Speaker 4: I don't think I appreciated it as much until following 205 00:12:08,760 --> 00:12:13,120 Speaker 4: that experience. I had outreach from colleagues I hadn't worked 206 00:12:13,120 --> 00:12:16,040 Speaker 4: with for years. I had outreach from family members 207 00:12:16,040 --> 00:12:18,680 Speaker 4: who heard me on the radio. You know, my mother 208 00:12:18,880 --> 00:12:21,679 Speaker 4: and my mother-in-law, and my nieces and nephews 209 00:12:21,760 --> 00:12:24,160 Speaker 4: and the friends of my kids were all like, oh, 210 00:12:24,160 --> 00:12:26,120 Speaker 4: I get it, I get what you do now. Wow, 211 00:12:26,200 --> 00:12:29,400 Speaker 4: that's pretty cool. You know, so that was really probably 212 00:12:29,480 --> 00:12:32,200 Speaker 4: the best and most impactful takeaway that I had. 213 00:12:32,480 --> 00:12:36,080 Speaker 2: The mass adoption of generative AI, happening at breakneck speed, 214 00:12:36,480 --> 00:12:40,480 Speaker 2: has spurred societies and governments around the world to get 215 00:12:40,559 --> 00:12:46,880 Speaker 2: serious about regulating AI. For businesses, compliance is complex enough already, 216 00:12:47,040 --> 00:12:50,360 Speaker 2: but throw an ever-evolving technology like AI into the mix, 217 00:12:50,800 --> 00:12:57,040 Speaker 2: and compliance itself becomes an exercise in adaptability. As regulators 218 00:12:57,080 --> 00:13:01,600 Speaker 2: seek greater accountability in how AI is used, businesses need 219 00:13:01,679 --> 00:13:06,600 Speaker 2: help creating governance processes comprehensive enough to comply with the law, 220 00:13:06,840 --> 00:13:09,959 Speaker 2: but agile enough to keep up with the rapid rate 221 00:13:10,000 --> 00:13:15,360 Speaker 2: of change in AI development. Regulatory scrutiny isn't the only 222 00:13:15,440 --> 00:13:20,600 Speaker 2: consideration, either. Responsible AI governance, a business's ability to prove 223 00:13:20,640 --> 00:13:25,440 Speaker 2: its AI models are transparent and explainable, is also key 224 00:13:25,520 --> 00:13:30,520 Speaker 2: to building trust with customers, regardless of industry. In the 225 00:13:30,559 --> 00:13:34,800 Speaker 2: next part of their conversation, Laurie asks Christina what businesses 226 00:13:34,840 --> 00:13:40,320 Speaker 2: should consider when approaching AI governance. Let's listen. 227 00:13:40,240 --> 00:13:43,600 Speaker 3: So, what's the particular role that businesses are playing in AI governance? Like, 228 00:13:43,640 --> 00:13:46,120 Speaker 3: why is it so critical for businesses to be part 229 00:13:46,120 --> 00:13:46,400 Speaker 3: of this? 230 00:13:47,240 --> 00:13:51,719 Speaker 4: So I think it's really critically important that businesses understand 231 00:13:52,240 --> 00:13:55,080 Speaker 4: the impacts that technology can have, both in making them 232 00:13:55,120 --> 00:13:58,960 Speaker 4: better businesses, but also the impacts that those technologies can have 233 00:13:59,440 --> 00:14:04,559 Speaker 4: on the consumers that they are supporting. You know, businesses 234 00:14:04,679 --> 00:14:09,320 Speaker 4: need to be deploying AI technology that is in alignment 235 00:14:09,480 --> 00:14:11,360 Speaker 4: with the goals that they set for it, and that 236 00:14:11,440 --> 00:14:14,319 Speaker 4: can be trusted.
I think for us and for our clients, 237 00:14:14,640 --> 00:14:17,560 Speaker 4: a lot of this comes back to trust in tech. 238 00:14:18,000 --> 00:14:23,720 Speaker 4: If you deploy something that doesn't work, that hallucinates, that discriminates, 239 00:14:24,200 --> 00:14:28,280 Speaker 4: that isn't transparent, where decisions can't be explained, then you 240 00:14:28,440 --> 00:14:32,080 Speaker 4: are going to very rapidly erode the trust of your clients, at best, 241 00:14:32,240 --> 00:14:35,560 Speaker 4: and at worst, you're 242 00:14:35,600 --> 00:14:38,280 Speaker 4: going to create legal and regulatory issues for yourself as well. 243 00:14:38,360 --> 00:14:42,240 Speaker 4: So trusted technology is really important, and I think there's 244 00:14:42,280 --> 00:14:44,480 Speaker 4: a lot of pressure on businesses today to move very 245 00:14:44,560 --> 00:14:47,280 Speaker 4: rapidly and adopt technology. But if you do it without 246 00:14:47,320 --> 00:14:50,840 Speaker 4: having a program of governance in place, you're really risking 247 00:14:51,000 --> 00:14:52,080 Speaker 4: eroding that trust. 248 00:14:52,440 --> 00:14:54,440 Speaker 3: And so this is really where I think strong 249 00:14:54,560 --> 00:14:57,160 Speaker 3: AI governance comes in. You know, talk about, from your 250 00:14:57,200 --> 00:15:00,880 Speaker 3: perspective, how this really contributes to maintaining the trust 251 00:15:00,880 --> 00:15:03,600 Speaker 3: that customers and stakeholders have in these technologies. 252 00:15:03,840 --> 00:15:06,360 Speaker 4: Yeah, absolutely. I mean, you need to have a governance 253 00:15:06,400 --> 00:15:10,400 Speaker 4: program because you need to understand that the technology, particularly 254 00:15:10,440 --> 00:15:15,000 Speaker 4: in the AI space, that you are deploying is explainable. 255 00:15:15,080 --> 00:15:19,400 Speaker 4: You need to understand why it's making the decisions and recommendations 256 00:15:19,480 --> 00:15:20,920 Speaker 4: that it's making, and you need to be able to 257 00:15:20,960 --> 00:15:23,320 Speaker 4: explain that to your consumers. I mean, you can't do 258 00:15:23,400 --> 00:15:25,480 Speaker 4: that if you don't know where your data is coming from, 259 00:15:25,520 --> 00:15:27,960 Speaker 4: what data you are using to train those models, if 260 00:15:27,960 --> 00:15:31,800 Speaker 4: you don't have a program that manages the alignment of 261 00:15:31,840 --> 00:15:35,400 Speaker 4: your AI models over time, to make sure, as AI 262 00:15:35,600 --> 00:15:40,000 Speaker 4: learns and evolves over use, which is in large part 263 00:15:40,640 --> 00:15:44,440 Speaker 4: what makes it so beneficial, that it stays in alignment 264 00:15:44,440 --> 00:15:47,920 Speaker 4: with the objectives that you set for the technology over time. 265 00:15:48,560 --> 00:15:52,400 Speaker 4: So you can't do that without a robust governance process 266 00:15:52,440 --> 00:15:55,880 Speaker 4: in place.
So we work with clients to share our 267 00:15:55,920 --> 00:15:58,440 Speaker 4: own story here at IBM, in terms of how we 268 00:15:58,480 --> 00:16:02,880 Speaker 4: put that in place, also in our consulting practice, to 269 00:16:03,040 --> 00:16:07,760 Speaker 4: help clients work with these new generative capabilities and foundation 270 00:16:07,880 --> 00:16:10,400 Speaker 4: models and the like, in order to put them to 271 00:16:10,440 --> 00:16:12,440 Speaker 4: work for their business in a way that's going to 272 00:16:12,480 --> 00:16:15,360 Speaker 4: be impactful to that business, but at the same time 273 00:16:15,520 --> 00:16:16,120 Speaker 4: be trusted. 274 00:16:16,320 --> 00:16:18,120 Speaker 3: So now I wanted to turn a little bit towards 275 00:16:18,160 --> 00:16:22,040 Speaker 3: watsonx governance. And so IBM recently announced their AI platform, 276 00:16:22,080 --> 00:16:25,800 Speaker 3: watsonx, which will include a governance component. Could you 277 00:16:25,800 --> 00:16:28,440 Speaker 3: tell us a little more about watsonx.governance? 278 00:16:29,040 --> 00:16:31,280 Speaker 4: Yeah, I mean, before I do that, I'll just back 279 00:16:31,360 --> 00:16:35,000 Speaker 4: up and talk about the full platform and then lean 280 00:16:35,040 --> 00:16:37,880 Speaker 4: into watsonx.governance, because I think it's important to understand 281 00:16:38,160 --> 00:16:44,120 Speaker 4: the delivery of a full suite of capabilities to get data, 282 00:16:44,200 --> 00:16:47,080 Speaker 4: to train models, and then to govern them over their 283 00:16:47,120 --> 00:16:52,600 Speaker 4: life cycle. All of these things are really important. From 284 00:16:52,680 --> 00:16:55,560 Speaker 4: the onset you need to make sure that you have these. 285 00:16:56,480 --> 00:17:01,040 Speaker 4: Our watsonx.ai, for example, that's the studio 286 00:17:01,120 --> 00:17:05,560 Speaker 4: to train new foundation models and generative AI and machine 287 00:17:05,600 --> 00:17:11,399 Speaker 4: learning capabilities. And we are populating that studio with some 288 00:17:11,880 --> 00:17:17,400 Speaker 4: IBM-trained foundation models which we're curating and tailoring more 289 00:17:17,440 --> 00:17:20,680 Speaker 4: specifically for enterprises. So that's really important. It comes back 290 00:17:20,720 --> 00:17:23,800 Speaker 4: to the point I made earlier about business trust and 291 00:17:23,920 --> 00:17:30,320 Speaker 4: the need to have enterprise-ready technologies in the AI space. 292 00:17:30,640 --> 00:17:34,199 Speaker 4: And then watsonx.data is a fit-for-purpose data store, 293 00:17:34,280 --> 00:17:37,800 Speaker 4: or a data lake. And then 294 00:17:37,840 --> 00:17:42,520 Speaker 4: watsonx.governance. So that's the particular component of the platform 295 00:17:42,920 --> 00:17:46,679 Speaker 4: that my team and the AI Ethics Board have really 296 00:17:46,720 --> 00:17:49,920 Speaker 4: worked closely with the product team on developing, and we're 297 00:17:50,119 --> 00:17:52,919 Speaker 4: using it internally here in the Chief Privacy Office as 298 00:17:52,960 --> 00:17:57,359 Speaker 4: well, to help us govern our own uses of AI 299 00:17:57,480 --> 00:18:03,440 Speaker 4: technology and our compliance program here. And it essentially helps 300 00:18:03,480 --> 00:18:08,120 Speaker 4: to notify you if a model becomes biased or gets 301 00:18:08,160 --> 00:18:11,280 Speaker 4: out of alignment as you're using it over time.
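[For illustration only: a toy sketch of the alerting pattern Christina describes, a governance monitor that recomputes a fairness metric over a window of recent production decisions and flags the model once it drifts out of tolerance. The function names and the 0.8 threshold are assumptions for this sketch, not the watsonx.governance API.]

```python
# Toy governance monitor: recompute a fairness metric on recent
# production decisions and alert when the model drifts out of
# alignment. Names and the 0.8 threshold are illustrative.

from dataclasses import dataclass

@dataclass
class Decision:
    group: str       # protected attribute value, e.g. "a" or "b"
    favorable: bool  # did the model grant the favorable outcome?

def disparate_impact(decisions, privileged, unprivileged):
    def rate(group):
        hits = [d.favorable for d in decisions if d.group == group]
        return sum(hits) / len(hits) if hits else 0.0
    priv = rate(privileged)
    return rate(unprivileged) / priv if priv else 0.0

def check_alignment(decisions, threshold=0.8):
    """Return a status string; alert if the window is out of tolerance."""
    di = disparate_impact(decisions, privileged="a", unprivileged="b")
    if di < threshold:
        return f"ALERT: disparate impact {di:.2f} below {threshold}; review model"
    return f"OK: disparate impact {di:.2f}"

# A sliding window of recent model decisions pulled from production logs:
window = [Decision("a", True), Decision("a", True), Decision("a", False),
          Decision("b", False), Decision("b", False), Decision("b", True)]
print(check_alignment(window))
```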
So 302 00:18:11,359 --> 00:18:14,000 Speaker 4: companies are going to need these capabilities. I mean, they 303 00:18:14,040 --> 00:18:18,240 Speaker 4: need them today to deliver technologies with trust. They'll need 304 00:18:18,280 --> 00:18:21,960 Speaker 4: them tomorrow to comply with regulation which is on the horizon. 305 00:18:22,040 --> 00:18:24,840 Speaker 3: And I think compliance becomes even more complex when you 306 00:18:24,920 --> 00:18:29,040 Speaker 3: consider international data protection laws and regulations. Honestly, I don't 307 00:18:29,040 --> 00:18:31,640 Speaker 3: know how anyone on any company's legal team is keeping 308 00:18:31,680 --> 00:18:33,959 Speaker 3: up with it these days. But my question for you 309 00:18:34,040 --> 00:18:37,680 Speaker 3: is really how can businesses develop a strategy to maintain 310 00:18:37,720 --> 00:18:40,720 Speaker 3: compliance and to deal with it in this ever-changing landscape? 311 00:18:40,800 --> 00:18:44,800 Speaker 4: It's increasingly more challenging. In fact, I saw a statistic just 312 00:18:44,840 --> 00:18:49,440 Speaker 4: this morning that the regulatory obligations on companies have increased 313 00:18:49,480 --> 00:18:53,119 Speaker 4: something like seven hundred times in the last twenty years. 314 00:18:53,160 --> 00:18:57,760 Speaker 4: So it really is a huge focus area for companies. 315 00:18:57,920 --> 00:19:00,760 Speaker 4: You have to have a process in place in order 316 00:19:00,840 --> 00:19:03,399 Speaker 4: to do that, and it's not easy, particularly for a 317 00:19:03,400 --> 00:19:07,280 Speaker 4: company like IBM that has a presence in over 318 00:19:07,320 --> 00:19:10,320 Speaker 4: one hundred and seventy countries around the world. There are 319 00:19:10,359 --> 00:19:15,240 Speaker 4: more than one hundred and fifty comprehensive privacy regulations, there 320 00:19:15,280 --> 00:19:19,800 Speaker 4: are regulations of non-personal data, there are AI regulations emerging. 321 00:19:20,800 --> 00:19:24,840 Speaker 4: So you really need an operational approach to it in 322 00:19:24,960 --> 00:19:27,000 Speaker 4: order to stay compliant. But one of the things we 323 00:19:27,040 --> 00:19:29,199 Speaker 4: do is we set a baseline. And a lot of 324 00:19:29,200 --> 00:19:32,679 Speaker 4: companies do this as well. So we define a privacy baseline, 325 00:19:32,720 --> 00:19:37,520 Speaker 4: we define an AI baseline, and we ensure then, as 326 00:19:37,520 --> 00:19:39,960 Speaker 4: a result of that, that there are very few deviations, 327 00:19:40,000 --> 00:19:43,080 Speaker 4: because everything is incorporated in that baseline. So that's one of 328 00:19:43,119 --> 00:19:45,320 Speaker 4: the ways we do it. Other companies, I think, are 329 00:19:45,359 --> 00:19:50,560 Speaker 4: similarly situated in terms of doing that. But again, it 330 00:19:51,160 --> 00:19:53,399 Speaker 4: is a real challenge for global companies. It's one of 331 00:19:53,440 --> 00:19:57,400 Speaker 4: the reasons why we advocate for as much alignment as 332 00:19:57,440 --> 00:20:02,840 Speaker 4: possible in the international realm, as well as nationally here 333 00:20:02,840 --> 00:20:06,640 Speaker 4: in the US, as much alignment as possible to make 334 00:20:06,880 --> 00:20:11,800 Speaker 4: compliance easier. And not just because companies want 335 00:20:11,840 --> 00:20:15,000 Speaker 4: an easy way to comply.
But the harder it is, 336 00:20:15,280 --> 00:20:19,159 Speaker 4: the less likely there will be compliance. And it's not 337 00:20:19,240 --> 00:20:25,320 Speaker 4: the objective of anybody, governments, companies, consumers, to 338 00:20:25,520 --> 00:20:29,040 Speaker 4: set legal obligations that companies simply can't meet. 339 00:20:29,480 --> 00:20:31,479 Speaker 3: So what advice would you give to other companies who 340 00:20:31,520 --> 00:20:34,480 Speaker 3: are looking to rethink or strengthen their approach to AI governance? 341 00:20:34,520 --> 00:20:37,760 Speaker 4: I think you need to start with, as we did, foundational principles, 342 00:20:38,400 --> 00:20:41,840 Speaker 4: and you need to start making decisions about what technology 343 00:20:41,880 --> 00:20:44,199 Speaker 4: you're going to deploy and what technology you're not. What 344 00:20:44,240 --> 00:20:45,320 Speaker 4: are you going to use it for, and what aren't 345 00:20:45,359 --> 00:20:46,720 Speaker 4: you going to use it for? And then when you 346 00:20:46,760 --> 00:20:51,080 Speaker 4: do use it, align to those principles. That's really important. 347 00:20:51,200 --> 00:20:55,720 Speaker 4: Formalize a program. Have someone within the organization, whether it's 348 00:20:55,760 --> 00:21:00,400 Speaker 4: the chief privacy officer, whether it's some other role, a chief 349 00:21:00,440 --> 00:21:06,040 Speaker 4: AI ethics officer, but have an accountable individual and an accountable organization. 350 00:21:06,720 --> 00:21:09,159 Speaker 4: Do a maturity assessment, figure out where you are and 351 00:21:09,160 --> 00:21:12,200 Speaker 4: where you need to be, and really start, you know, 352 00:21:12,720 --> 00:21:17,199 Speaker 4: putting it into place today. Don't wait for regulation to 353 00:21:17,280 --> 00:21:19,960 Speaker 4: apply directly to your business, because it'll be too late. 354 00:21:20,920 --> 00:21:24,280 Speaker 3: So Smart Talks features new creators, these visionaries like yourself 355 00:21:24,320 --> 00:21:27,840 Speaker 3: who are creatively applying technology in business to drive change. 356 00:21:28,080 --> 00:21:30,720 Speaker 3: I'm curious if you see yourself as creative. 357 00:21:31,200 --> 00:21:34,520 Speaker 4: You know, I definitely do. I mean, you need to 358 00:21:34,600 --> 00:21:39,240 Speaker 4: be creative when you're working in an industry that evolves 359 00:21:39,320 --> 00:21:43,800 Speaker 4: so very quickly. So you know, I started with IBM 360 00:21:44,040 --> 00:21:47,160 Speaker 4: when we were primarily a hardware company, right, and we've 361 00:21:47,280 --> 00:21:50,679 Speaker 4: changed our business so significantly over the years. And the 362 00:21:50,760 --> 00:21:55,240 Speaker 4: issues that are raised with respect to each new technology, 363 00:21:55,240 --> 00:21:58,880 Speaker 4: whether it be cloud, whether it be AI now, where 364 00:21:58,920 --> 00:22:00,479 Speaker 4: we're seeing a ton of issues, or you look at 365 00:22:00,480 --> 00:22:04,639 Speaker 4: emergent issues in the space of things like neurotechnologies and 366 00:22:04,720 --> 00:22:11,320 Speaker 4: quantum computers. You have to be strategic, and you have 367 00:22:11,440 --> 00:22:14,720 Speaker 4: to be creative in thinking about how you can adapt 368 00:22:15,320 --> 00:22:20,560 Speaker 4: a company, agilely and quickly, to an environment that is changing 369 00:22:20,600 --> 00:22:22,360 Speaker 4: so quickly.
370 00:22:22,359 --> 00:22:25,399 Speaker 3: With this transformation happening at such a rapid pace, do 371 00:22:25,440 --> 00:22:27,520 Speaker 3: you think creativity plays a role in how you think 372 00:22:27,520 --> 00:22:30,840 Speaker 3: about and implement, specifically, a trustworthy AI strategy? 373 00:22:33,320 --> 00:22:37,359 Speaker 4: Yeah, I absolutely think it does, because again, it comes 374 00:22:37,400 --> 00:22:40,560 Speaker 4: back to these capabilities. And there are ways, I guess, 375 00:22:40,760 --> 00:22:44,520 Speaker 4: how you define creativity could be different, right? But I'm 376 00:22:44,560 --> 00:22:47,760 Speaker 4: thinking of creativity in the sense of sort of agility 377 00:22:47,800 --> 00:22:51,920 Speaker 4: and strategic vision and creative problem solving. I think that's 378 00:22:52,160 --> 00:22:55,280 Speaker 4: really important in the world that we're in right now, 379 00:22:55,320 --> 00:22:59,800 Speaker 4: being able to creatively problem solve with new issues that 380 00:22:59,880 --> 00:23:03,080 Speaker 4: are arising sort of every day. 381 00:23:03,440 --> 00:23:05,000 Speaker 3: And so, how do you see the role of chief 382 00:23:05,040 --> 00:23:08,680 Speaker 3: privacy officer evolving in the future as AI technology continues 383 00:23:08,680 --> 00:23:11,600 Speaker 3: to advance? Like, what steps should CPOs take to stay 384 00:23:11,600 --> 00:23:13,520 Speaker 3: ahead of all these changes that are coming their way? 385 00:23:15,080 --> 00:23:18,960 Speaker 4: So the role is evolving in most companies, I would 386 00:23:19,040 --> 00:23:23,960 Speaker 4: say pretty rapidly. Many companies are looking to chief privacy 387 00:23:24,000 --> 00:23:27,560 Speaker 4: officers, who already understand the data that's being used 388 00:23:27,560 --> 00:23:31,040 Speaker 4: in the organization and have programs to ensure compliance with 389 00:23:31,160 --> 00:23:34,960 Speaker 4: laws that require you to manage that data in accordance 390 00:23:35,000 --> 00:23:37,600 Speaker 4: with data protection laws and the like. It's a natural 391 00:23:37,640 --> 00:23:43,640 Speaker 4: place and position for AI responsibility. And so I think 392 00:23:43,680 --> 00:23:46,320 Speaker 4: what's happening to a lot of chief privacy officers is 393 00:23:46,359 --> 00:23:49,880 Speaker 4: they're being asked to take on this AI governance responsibility 394 00:23:49,880 --> 00:23:53,399 Speaker 4: for companies, and if not take it on, at least 395 00:23:53,440 --> 00:23:56,879 Speaker 4: play a very key role, working with other parts of 396 00:23:56,920 --> 00:24:00,520 Speaker 4: the business, in AI governance. So that really is changing. 397 00:24:00,760 --> 00:24:04,840 Speaker 4: And if chief privacy officers are in companies who maybe 398 00:24:04,840 --> 00:24:08,639 Speaker 4: haven't started thinking about AI yet, they should, so I 399 00:24:08,640 --> 00:24:12,520 Speaker 4: would encourage them to look at different resources that are 400 00:24:12,520 --> 00:24:16,879 Speaker 4: available already in the AI governance space.
For example, the International 401 00:24:16,960 --> 00:24:21,360 Speaker 4: Association of Privacy Professionals, which is the seventy five thousand 402 00:24:21,400 --> 00:24:26,520 Speaker 4: member professional body for the profession of chief privacy officers, 403 00:24:26,600 --> 00:24:31,240 Speaker 4: just recently launched an AI Governance Initiative and an AI 404 00:24:31,280 --> 00:24:35,280 Speaker 4: Governance Certification program. I sit on their advisory board. But 405 00:24:35,359 --> 00:24:38,040 Speaker 4: that's just emblematic of the fact that the field is 406 00:24:38,119 --> 00:24:39,600 Speaker 4: changing so rapidly. 407 00:24:40,800 --> 00:24:43,359 Speaker 3: And so, you know, speaking of rapid change: when you were 408 00:24:43,680 --> 00:24:46,040 Speaker 3: back here on Smart Talks in twenty twenty one, you 409 00:24:46,080 --> 00:24:48,440 Speaker 3: said that the future of AI would be more transparent 410 00:24:48,480 --> 00:24:50,679 Speaker 3: and more trustworthy. You know, what do you see the 411 00:24:50,720 --> 00:24:52,760 Speaker 3: next five to ten years holding? You know, when you're 412 00:24:52,800 --> 00:24:55,679 Speaker 3: back on Smart Talks in, you know, twenty twenty six, 413 00:24:55,800 --> 00:24:57,480 Speaker 3: you know, twenty thirty, what are we going 414 00:24:57,520 --> 00:24:59,760 Speaker 3: to be talking about when it comes to AI technology 415 00:24:59,800 --> 00:25:00,480 Speaker 3: and governance? 416 00:25:01,320 --> 00:25:03,439 Speaker 4: So I try to be an optimist, right? And I 417 00:25:03,520 --> 00:25:07,280 Speaker 4: said that two years ago, and I think we're seeing 418 00:25:07,359 --> 00:25:11,800 Speaker 4: it now come to fruition. And there will be requirements, 419 00:25:12,600 --> 00:25:15,399 Speaker 4: whether they're coming from the US, whether they're coming from Europe, 420 00:25:15,440 --> 00:25:19,040 Speaker 4: whether they're just coming from voluntary adoption by clients of 421 00:25:19,119 --> 00:25:23,840 Speaker 4: things like the NIST Risk Management Framework, really important voluntary frameworks. 422 00:25:24,600 --> 00:25:28,200 Speaker 4: You're going to have to adopt transparent and explainable practices 423 00:25:28,359 --> 00:25:31,040 Speaker 4: in your uses of AI. So I do see that happening. 424 00:25:31,040 --> 00:25:33,480 Speaker 4: And in the next five to ten years, boy, I 425 00:25:33,520 --> 00:25:39,479 Speaker 4: think we'll see more research into trust techniques, because 426 00:25:39,520 --> 00:25:43,440 Speaker 4: we don't really know, for example, how to watermark. 427 00:25:44,200 --> 00:25:46,800 Speaker 4: We were calling for things like watermarking. There'll be 428 00:25:47,000 --> 00:25:52,600 Speaker 4: more research into how to do that. I think you'll see, 429 00:25:52,840 --> 00:25:55,680 Speaker 4: you know, regulation that's specifically going to require those types 430 00:25:55,720 --> 00:25:57,960 Speaker 4: of things. So I think, again, I think the regulation 431 00:25:58,040 --> 00:26:00,600 Speaker 4: is going to drive research. It's going to drive research 432 00:26:00,680 --> 00:26:04,400 Speaker 4: into these areas that will help ensure that we can 433 00:26:04,480 --> 00:26:08,720 Speaker 4: deliver new capabilities, generative capabilities and the like, with trust 434 00:26:08,760 --> 00:26:09,760 Speaker 4: and explainability.
435 00:26:09,960 --> 00:26:12,000 Speaker 3: Thank you so much, Christina, for joining me on Smart 436 00:26:12,000 --> 00:26:13,680 Speaker 3: Talks to talk about AI and governance. 437 00:26:14,640 --> 00:26:16,640 Speaker 4: Well, thank you very much for having me. 438 00:26:17,920 --> 00:26:22,639 Speaker 2: To unlock the transformative growth possible with artificial intelligence, businesses 439 00:26:22,680 --> 00:26:25,840 Speaker 2: need to know what they wish to grow into first. 440 00:26:26,760 --> 00:26:29,639 Speaker 2: Like Christina said, the best way forward in the AI 441 00:26:29,720 --> 00:26:33,800 Speaker 2: future is for businesses to figure out their own foundational 442 00:26:33,840 --> 00:26:38,439 Speaker 2: principles around using the technology, drawing on those principles to 443 00:26:38,520 --> 00:26:42,280 Speaker 2: apply AI in a way that's ethically consistent with their 444 00:26:42,359 --> 00:26:46,000 Speaker 2: mission and complies with the legal frameworks built to hold 445 00:26:46,040 --> 00:26:51,080 Speaker 2: the technology accountable. As AI adoption grows more and more widespread, 446 00:26:51,280 --> 00:26:55,560 Speaker 2: so too will the expectation from consumers and regulators that 447 00:26:55,680 --> 00:27:01,080 Speaker 2: businesses use it responsibly. Investing in dependable AI governance is a 448 00:27:01,119 --> 00:27:05,120 Speaker 2: way for businesses to lay the foundations for technology that 449 00:27:05,160 --> 00:27:09,040 Speaker 2: their customers can trust while rising to the challenge of 450 00:27:09,160 --> 00:27:14,960 Speaker 2: increasing regulatory complexity. Though the emergence of AI does complicate 451 00:27:15,040 --> 00:27:19,600 Speaker 2: an already tough compliance landscape, businesses now face a creative 452 00:27:19,680 --> 00:27:24,320 Speaker 2: opportunity to set a precedent for what accountability in AI 453 00:27:24,480 --> 00:27:28,440 Speaker 2: looks like and rethink what it means to deploy trustworthy 454 00:27:28,880 --> 00:27:34,720 Speaker 2: artificial intelligence. I'm Malcolm Gladwell. This is a paid advertisement 455 00:27:35,080 --> 00:27:38,359 Speaker 2: from IBM. Smart Talks with IBM will be taking a 456 00:27:38,400 --> 00:27:42,560 Speaker 2: short hiatus, but look for new episodes in the coming weeks. 457 00:27:43,200 --> 00:27:46,600 Speaker 2: Smart Talks with IBM is produced by Matt Romano, David 458 00:27:46,680 --> 00:27:51,720 Speaker 2: Jha, Nisha Venkat, and Royston Beserve, with Jacob Goldstein. We're 459 00:27:51,840 --> 00:27:55,520 Speaker 2: edited by Lidia Jean Kott. Our engineer is Jason Gambrell. 460 00:27:55,840 --> 00:28:00,800 Speaker 2: Theme song by Gramoscope. Special thanks to Carly Migliori, Andy Kelly, 461 00:28:01,200 --> 00:28:05,440 Speaker 2: Kathy Callahan, and the 8 Bar and IBM teams, as 462 00:28:05,440 --> 00:28:09,600 Speaker 2: well as the Pushkin marketing team. Smart Talks with IBM 463 00:28:09,880 --> 00:28:14,200 Speaker 2: is a production of Pushkin Industries and Ruby Studio at iHeartMedia. 464 00:28:14,840 --> 00:28:18,960 Speaker 2: To find more Pushkin podcasts, listen on the iHeartRadio app, 465 00:28:19,240 --> 00:28:28,600 Speaker 2: Apple Podcasts, or wherever you listen to podcasts.