Speaker 1: Welcome to Tech Stuff, a production from iHeartRadio. This season on Smart Talks with IBM, Malcolm Gladwell and team are diving into the transformative world of artificial intelligence with a fresh perspective on the concept of open. What does open really mean in the context of AI? It can mean open source code or open data, but it also encompasses fostering an ecosystem of ideas, ensuring diverse perspectives are heard, and enabling new levels of transparency. Join hosts from your favorite Pushkin podcasts as they explore how openness in AI is reshaping industries, driving innovation, and redefining what's possible. You'll hear from industry experts and leaders about the implications and possibilities of open AI, and of course, Malcolm Gladwell will be there to guide you through the season with his unique insights. Look out for new episodes of Smart Talks every other week on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts, and learn more at IBM dot com slash Smart Talks.

Speaker 2: Pushkin. Hello, hello. Welcome to Smart Talks with IBM, a podcast from Pushkin Industries, iHeartRadio, and IBM. I'm Malcolm Gladwell. This season, we're diving back into the world of artificial intelligence, but with a focus on the powerful concept of open: its possibilities, implications, and misconceptions. We'll look at openness from a variety of angles and explore how the concept is already reshaping industries, ways of doing business, and our very notion of what's possible. In today's episode, Jacob Goldstein sits down with Rebecca Finlay, the CEO of the Partnership on AI, a nonprofit group grappling with important questions around the future of AI. Their conversation focuses on Rebecca's work bringing together a community of diverse stakeholders to help shape the conversation around accountable AI governance.
Rebecca explains why transparency is so crucial for scaling the technology responsibly, and she highlights how working with groups like the AI Alliance can provide valuable insights in order to build the resources, infrastructure, and community around releasing open source models. So, without further ado, let's get to that conversation.

Speaker 3: Can you say your name and your job?

Speaker 4: My name is Rebecca Finlay. I am the CEO of the Partnership on AI to benefit people and society, often referred to as PAI.

Speaker 3: How did you get here? What was your job before you had the job that you have now?

Speaker 4: I came to PAI about three years ago, having had the opportunity to work for the Canadian Institute for Advanced Research, developing and deploying all of their programs related to the intersection of technology and society. And one of the areas that the Canadian Institute had been funding since nineteen eighty two was research into artificial intelligence.

Speaker 3: Wow, early, they were early.

Speaker 4: It was a very early commitment and an ongoing commitment at the Institute to fund long term fundamental questions of scientific importance in interdisciplinary research programs that were often committed to and funded for well over a decade. The AI, Robotics and Society program that kicked off the work at the Institute eventually became a program very much focused on deep learning and reinforcement learning, neural networks, all of the current iteration of AI, or certainly the pre-generative AI iteration of AI that led to this transformation that we've seen in terms of online search and all sorts of ways in which predictive AI has been deployed. So I had the opportunity to see the very early days of that research coming together, and then, in the early twenty tens, when compute capability came together with data capability through some of the Internet companies and otherwise, we really saw this technology start to take off.
I had the opportunity to start up a program specifically focused on the impacts of AI in society. There were, as you know, at that time, some concerns both about the potential for the technology, but also in terms of what we were seeing around data sets and bias and discrimination and potential impact on future jobs. And so bringing a whole group of experts, whether they were ethicists or lawyers or economists or sociologists, into the discussion about AI was core to that new program and continues to be core to my commitment to bringing diverse perspectives together to solve the challenges and opportunities that AI offers today.

Speaker 3: So specifically, what is your job now? What is the work you do? What is the work that PAI does?

Speaker 4: I like to answer that question by asking two questions. First and foremost, do you believe that the world is more divided today than it ever has been in recent history? And do you believe that if we don't create spaces for very different perspectives to come together, we won't be able to solve the challenges that are in front of the world today? My answer to both of those questions is: one, yes, we're more divided, and two, we need to seek out those spaces where those very different perspectives can come together to solve those great challenges. And that's what I get to do as CEO of the Partnership on AI. We began in twenty sixteen with a fundamental commitment to bringing together experts, whether they were in industry, academia, civil society, or philanthropy, to identify what are the most important questions when we think about developing AI centered on people and communities, and then how do we begin to develop the solutions to make sure we benefit appropriately.

Speaker 3: So that's a very big picture set of ideas. I'm curious on a sort of more day to day level. I mean, you talk about collaborating with all these different kinds of people, all these different groups, what does that actually look like?
What are some specific examples of how you do this work?

Speaker 4: So right now we have about one hundred and twenty partners in sixteen countries. They come together through working groups that look at these questions from a variety of different perspectives. It could be AI, labor, and the economy. It could be how do you build a healthy information ecosystem. It could be how do you bring more diverse perspectives into the inclusive and equitable development of AI. It could be what are the emerging opportunities with these very, very large foundation model applications and how do you deploy those safely? And these groups come together most importantly to say, what are the questions we need to answer collectively. So they come together in working groups. I have an amazing staff team who hold the pen on synthesizing research and data and evidence, developing frameworks, best practices, resources, all sorts of things that we can offer up to the community, be they in industry or in policy, to say, this is what good looks like, and this is how we can do it on a day to day basis. So that's what we do, and then we publish our materials. It's all open. We make sure that we get them into the hands of those communities that can use them, and then we drive and work with those communities to put them into practice.

Speaker 3: You used the word open there in describing your publications. I know, in the world of AI, on the sort of technical side, there's a lot of debate, say, or discussion about kind of open versus closed AI, and I'm curious how you kind of encounter that particular discussion. What is your view on open versus closed AI?
Speaker 4: So the current discussion between open and closed release of AI models came once we saw ChatGPT and other very large generative AI systems being deployed out into the hands of consumers around the world, and there emerged some fear about the potential of these models to act in all sorts of catastrophic ways. So there were concerns that the models could be deployed with regard to the development of viruses or biomedical weapons, or even nuclear weapons, or through manipulation or otherwise. So there emerged, over the last eighteen months, this real concern that these models, if deployed openly, could lead to some level of truly catastrophic risk. And what we discovered, through a whole bunch of work that's been done over the last little while, is that releasing them openly has not led, and doesn't appear to be leading in any way, to catastrophic risk. In fact, releasing them openly allows for much greater scrutiny and understanding of the safety measures that have been put into place. And so what happened was the pendulum swung very much toward concern about really catastrophic risk and safety, and over the last year we've seen it swing back as we learn more and more about how these models are being used and how they are being deployed into the world. My feeling is we must approach this work openly, and it's not just open release of models or what we think of as traditional open source forms of model development or otherwise, but we really need to think about how do we build an open innovation ecosystem that fundamentally allows both for the innovation to be shared with many people, but also for safety and security to be rigorously upheld.

Speaker 3: So when you talk about this kind of broader idea of open innovation beyond open source or, you know, transparency in models, like, what do you mean? Sort of specifically, how does that look in the world?
Speaker 4: So I have three particular points of view when it comes to open innovation, because I think we need to think both upstream, around the research that is driving these models, and downstream, in terms of the benefits of these models to others. So, first and foremost, what we have known in terms of how AI has been developed, and yes, I had an opportunity to see it when I was at the Canadian Institute for Advanced Research, is a very open form of scientific publication and rigorous peer review. And what happens when we release openly is you have an opportunity for the research to be interrogated to determine the quality and significance of that, but then also for it to be picked up by many others. And then secondly, openness for me is about transparency. We released a set of very strong recommendations last year around the way in which these very large foundation models could be deployed safely. They're all about disclosure and documentation, right from the early days of pre-R and D development of these systems, in terms of thinking about what's in the training data and how it is being used, all the way through to post deployment monitoring and disclosure. So I really think that this is important: transparency throughout. And then the third piece is openness in terms of who is around the table to benefit from this technology. We know that if we're really going to see these new models being successfully deployed into education or healthcare or climate and sustainability, we need to have those experts in those communities at the table charting this and making sure that the technology is working for them. Those are the three ways I think about openness.

Speaker 3: Is there like a particular project that you've worked on that you feel like, you know, reflects your approach to responsible AI?
Speaker 4: So there's a really interesting project that we have underway at PAI that is looking at responsible practices squarely when it comes to the use of synthetic media. And what we heard from our community was that they were looking for a clear code of conduct about what does it mean to be responsible in this space. And so what happened is we pulled together a number of working groups. They included industry representatives; they also included civil society organizations like WITNESS, a number of academic institutions, and otherwise. And what we heard was that there were clear steps that creators could take, that developers of the technology could take, and then also distributors, when we think about those generative AI systems being deployed across platforms and otherwise. And we came up with a framework for what responsibility looks like. What does it mean to have consent, what does it mean to disclose responsibly, what does it mean to embed technology into it? So, for example, we've heard many people talk about the importance of watermarking systems, right, and making sure that we have a way to watermark them. But what we know from the technology is that that is a very, very complex and complicated problem, and what might work on a technical level certainly hits a whole new set of complications when we start labeling and disclosing out to the public about what that technology actually means. All of these, I believe, are solvable problems, but they all needed to have a clear code underneath them that was saying, this is what we will commit to. And we now have a number of organizations, many, many of the large technology companies, but also many of the small startups who are operating in this space, civil society, and media organizations like the BBC and the CBC, who have signed on. And one of the really exciting pieces of that is that we're now seeing how it's changing practice.
So a year in, we asked each of our partners to come up with a clear case study about how that work has changed the way they are making decisions, deploying technology, and ensuring that they're being responsible in their use. And that is now creating a whole resource online that we're able to share with others about what does it mean to be responsible in this space. There's so much more work to be done, and the exciting thing is once you have a foundation like this in place, we can continue to build on it. There's so much interest now in the policy space, for example, about this work as well.

Speaker 3: Are there any specific examples of those sort of case studies or the real world experiences that, say, media organizations had, that are interesting, that are illuminating?

Speaker 4: Yes. So, for example, what we saw with the BBC is that they're developing a lot of content as a public broadcaster, both in terms of their news coverage but also in terms of some of the resources that they are developing for the British public as well. And what they talked about was the way in which they had used synthetic media in a very, very sensitive environment where they were hearing from individuals talking about personal experiences, but wanted to have some way to change the face entirely in terms of the individuals who were speaking. So that's a very complicated ethical question, right, how do you do that responsibly? And what is the way in which you use that technology, and most importantly, how do you disclose it? So their case study looked at that in some real detail, about the process they went through to make the decision responsibly, to do what they chose, and how they intended to use the technology in that space.

Speaker 3: As you describe your work and some of these case studies, the idea of transparency seems to be a theme. Talk about the importance of transparency in this kind of work.

Speaker 4: Yeah, transparency is fundamental to responsibility.
I always like to say it's not accountability in a complete sense, but it is a first step to driving accountability more fully. So, when we think about how these systems are developed, they're often developed behind closed doors inside companies who are making decisions about what and how these products will work from a business perspective. And what disclosure and transparency can provide is some sense of the decisions that were made leading up to the way in which those models were deployed. So this could be ensuring that individuals' private information was protected through the process and won't be inadvertently disclosed or otherwise. It could be providing some sense of how well the system performs against a whole level of quality measures. So we have all of these different types of evaluations and measures that are emerging about the quality of these systems as they're deployed. Being transparent about how they perform against these measures is really crucial to that as well. We have a whole ecosystem that's starting to emerge around auditing of these systems. So what does that look like? We think about auditors in all sorts of other sectors of the economy. What does it look like to be auditing these systems to ensure that they're meeting all of those legal, but also additional ethical, requirements that we want to make sure are in place?

Speaker 3: What are some of the hardest ethical dilemmas you've come up against in AI policy?

Speaker 4: Well, the interesting thing about AI policy, right, is that what works very simply in one setting can be highly complicated in another setting. And so, for example, I have an app that I adore. It's an app on my phone that allows me to take a photo of a bird, and it will help me to better understand, you know, what that bird is, and give me all sorts of information about that bird.
Now, it's probably right most of the time, and it's certainly right enough of the time to give me great pleasure and delight when I'm out walking. You could think about that exact same technology applied differently. So, for example, now you're a security guard and you're working in a shopping plaza, and you're able to take photos of individuals who you may think are acting suspiciously in some way and match that photo up with some sort of a database of individuals that may have been found, you know, to have some sort of connection to other criminal behavior in the past. Right? So what goes from being a delightful, oh, isn't this an interesting bird, to a very, very creepy, what does this say about surveillance and privacy and access to public spaces? And that is the nature of AI. So much of the concern about the ethical use and deployment of AI is how an organization is making the choices within the social and systemic structure in which they sit. So so much about the ethics of AI is understanding what is the use case, how is it being used, how is it being constrained? How does it start to infringe upon what we think of as the human rights of an individual to privacy? And so you have to constantly be thinking about ethics. What could work very well in one situation absolutely doesn't work in another. We often talk about these as socio-technical questions. Right, just because the technology works doesn't actually mean that it should be used and deployed.

Speaker 3: What's an example of where the Partnership on AI influenced changes, either in policy or in industry practice?

Speaker 4: We talked a little bit about the Framework for Synthetic Media and how that has allowed companies and media organizations and civil society organizations to really think deeply about the way in which they're using this. Another area that we focused on has been around responsible deployment of foundation and large scale models.
So, as I said, we issued a set of recommendations last year that really laid out for these very large developers and deployers of foundation and frontier models what good looks like, right from R and D through to deployment monitoring. And it has been very encouraging to see that that work has been picked up by companies and really articulated as part of the fabric of the deployment of their foundation models and systems moving forward. So much of this work is around creating clear definitions of what we're meaning as the technology evolves and clear sets of responsibilities. So it's great to see that work getting picked up. The NTIA in the United States just released a report on open models and the release of open models. Great to see our work cited there as contributing to that analysis. Great to see some of our definitions around synthetic media getting picked up by legislators in different countries. Really, it's important, I think, for us to build capacity, knowledge, and understanding in our policy makers in this moment as the technology is evolving and accelerating in its development.

Speaker 3: What's the AI Alliance and why did Partnership on AI decide to join?

Speaker 4: So you had asked about the debate between open versus closed models and how that has evolved over the last year, and the AI Alliance was a community of organizations that came together to really think about, okay, if we support open release of models, what does that look like and what does the community need? And so that's about one hundred organizations. IBM, one of our founding partners, is also one of the founding partners of the AI Alliance. It's a community that brings together a number of academic institutions in many countries around the world, and they're really focused on how do you build the resources and infrastructure and community around what open source in these large scale models really means. So that could be open data sets, that could be open technology development.
Really building on that understanding that we need an infrastructure in place and a community engaged in thinking about safety and innovation through the open lens.

Speaker 2: This approach brings together organizations and experts from around the globe with different backgrounds, experiences, and perspectives to transparently and openly address the challenges and opportunities AI poses today. The collaborative nature of the AI Alliance encourages discussion, debate, and innovation. Through these efforts, IBM is helping to build a community around transparent open technology.

Speaker 3: So I want to talk about the future for a minute. I'm curious what you see as the biggest obstacles to widespread adoption of responsible AI practices.

Speaker 4: One of the biggest obstacles today is an inability, and really a lack of understanding, about how to use these models and how they can most effectively drive forward a company's commitment to whatever products and services it might be deploying. So I always recommend a couple of things for companies to really think about this and to get started. One is think about how you are already using AI across all of your business products and services, because AI is already integrated into our workforces and into our workstreams and into the way in which companies are communicating with their clients every day. So understand how you are already using it and understand how you are integrating oversight and monitoring into those uses. One of the best and clearest ways in which a company can really understand how to use this responsibly is through documentation. It's one of the areas where there's a clear consensus in the community. So how do you document the models that you are using, making sure that you've got a registry in place? How do you document the data that you are using and where that data comes from? This is sort of the first line of defense in terms of understanding both what is in place and what you need to do in order to monitor it moving forward.
And then secondly, once you've got an understanding of how you're already using the system, look at ways in which you could begin to pilot or iterate in a low risk way using these systems, to really begin to see how and what structures you need to have in place to use it moving forward. And then thirdly, make sure that you structure a team internally that's able to do some of this cross departmental monitoring, knowledge sharing, and learning. Boards are very, very interested in this technology, so thinking about how you can have a system or a team in place internally that's reporting to your board, giving them a sense of both the opportunities that it identifies for you and the additional risk mitigation and management you might be putting into place. And then, you know, once you have those things in place, you're really going to need to understand how you work with the most valuable asset you have, which is your people. How do you make sure that AI systems are working for the workers as they're put into place? The most important and impressive implementations we see are those where you have the workers who are going to be engaged in this process central to figuring out how to develop and deploy it in order to really enhance their work. That's a core part of a set of Shared Prosperity Guidelines that we issued last year.

Speaker 3: And then from the side of policy makers, how should policy makers think about the balance between innovation and regulation?

Speaker 4: Yeah, it's so interesting, isn't it, that we always think of, you know, innovation and regulation as being two sides of a coin, when in fact, so much innovation comes from having a clear set of guardrails and regulation in place. We think about all of the innovation that's happened in the automotive industry, right? We can drive faster because we have brakes, we can drive faster because we have seat belts in place.
So I think it's often interesting to me that we think about the two as being on either side of the coin, but in actual fact, you can't be innovative without being responsible as well. So I think from a policy maker perspective, what we have been really encouraging them to do is to make sure that you've got foundational regulation in place that works for you nationally. This could be ensuring that you have strong privacy protections in place. It could be ensuring that you are understanding potential online harms, particularly to vulnerable communities. And then look at what you need to be doing internationally to be both competitive and sustainable. There's all sorts of mechanisms in place right now at the international level to think about how do we build an interoperable space for these technologies moving forward.

Speaker 3: We've been talking in various ways about what it means to responsibly develop AI, and if you're going to boil that down, you know, to the essential concerns that people should be thinking about, like, what are the key things to think about in responsible AI?

Speaker 4: So if you are a company, if we're talking specifically through the company lens, when we're thinking about responsible use of AI, the most important difference between this form of AI technologies and other forms of technologies that we have used previously is the integration of data and the training models that go on top of that data. So when we think about responsibility, first and foremost, you need to think about your data. Where did it come from? What consent and disclosure requirements do you have on it? Are you privacy protecting? You can't be thinking about AI within your company without thinking about data, and that's both your training data, but then, once you're using your systems and integrating and interacting with your consumers, how are you protecting the data that's coming out of those systems as well?
And then secondly, when you're thinking about how to deploy that AI system, the most important thing you want to think about is, are we being transparent with our clients and our partners about how it's being used? So, you know, the idea that if I'm a customer, I should know when I'm interacting with an AI system, I should know when I'm interacting with a human. So I think those two pieces are the fundamentals. And then of course you want to be thinking carefully about, you know, making sure that whatever jurisdiction you're operating in, you're meeting all of the legal requirements with regard to the services and products that you're offering.

Speaker 3: Let's finish with the speed round. Complete the sentence: in five years, AI will...

Speaker 4: Drive equity, justice, and shared prosperity, if we choose to set that future trajectory for this technology.

Speaker 3: What is the number one thing that people misunderstand about AI?

Speaker 4: AI is not good, and AI is not bad, but AI is also not neutral. It is a product of the choices we make as humans about how we deploy it in the world.

Speaker 3: What advice would you give yourself ten years ago to better prepare yourself for today?

Speaker 4: Ten years ago, I wish that I had known just how fundamental the enduring questions of ethics and responsibility would be as we developed this technology moving forward. So many of the questions that we ask about AI are questions about ourselves and the way in which we use technology and the way in which technology can advance the work we're doing.

Speaker 3: How do you use AI in your day to day life?

Speaker 4: I use AI all day every day. So whether it's my bird app when I go out for my morning walk, helping me to better identify birds that I see, or whether it is my mapping app that's helping me to get more speedily through traffic to whatever meeting I need to go to, I use AI all the time.
I 506 00:32:22,120 --> 00:32:26,520 Speaker 4: really enjoy using some of the generative AI chatbots more 507 00:32:26,560 --> 00:32:29,680 Speaker 4: for fun than for anything else. As a creative partner 508 00:32:29,720 --> 00:32:33,640 Speaker 4: in thinking through ideas and integrating it into all aspects 509 00:32:33,680 --> 00:32:36,400 Speaker 4: of our lives. Is just so much about the way 510 00:32:36,440 --> 00:32:37,520 Speaker 4: in which we live today. 511 00:32:38,800 --> 00:32:43,080 Speaker 3: So people use the word open to mean different things, 512 00:32:43,640 --> 00:32:46,880 Speaker 3: even just in the context of technology. How do you 513 00:32:46,960 --> 00:32:49,280 Speaker 3: define open in the context of your work. 514 00:32:49,960 --> 00:32:52,120 Speaker 4: So there is the question of open as it is 515 00:32:52,200 --> 00:32:56,040 Speaker 4: deployed to technology, which we've talked a lot about. But 516 00:32:56,120 --> 00:33:00,480 Speaker 4: I do think a big piece of PAI is open minded. 517 00:33:01,360 --> 00:33:04,760 Speaker 4: We need to be open minded truly to listen to, 518 00:33:05,360 --> 00:33:09,480 Speaker 4: for example, what a civil society advocate might say about 519 00:33:09,480 --> 00:33:11,760 Speaker 4: what they're seeing in terms of the way in which 520 00:33:11,840 --> 00:33:15,680 Speaker 4: AI is interacting in a particular community. Or we need 521 00:33:15,760 --> 00:33:18,520 Speaker 4: to be open minded to hear from a technologist about 522 00:33:18,560 --> 00:33:21,200 Speaker 4: their hopes and dreams of where this technology might go 523 00:33:21,280 --> 00:33:25,560 Speaker 4: moving forward. And we need to have those conversations listening 524 00:33:25,560 --> 00:33:28,920 Speaker 4: to each other to really identify how we're going to 525 00:33:28,960 --> 00:33:33,240 Speaker 4: meet the challenge and opportunity of AI today. So open 526 00:33:34,200 --> 00:33:39,520 Speaker 4: is just fundamental to the partnership on AI. I often 527 00:33:39,560 --> 00:33:42,600 Speaker 4: call it an experiment in open innovation. 528 00:33:44,200 --> 00:33:45,960 Speaker 3: Rebecca, thank you so much for your time. 529 00:33:46,800 --> 00:33:48,680 Speaker 4: It is my pleasure. Thank you for having me. 530 00:33:51,240 --> 00:33:53,960 Speaker 2: Thank you to Rebecca and Jacob for that engaging discussion 531 00:33:54,320 --> 00:33:57,200 Speaker 2: about some of the most pressing issues facing the future 532 00:33:57,280 --> 00:34:01,360 Speaker 2: of AI. As Rebecca emphasized, whether you're thinking about data 533 00:34:01,400 --> 00:34:06,080 Speaker 2: privacy or disclosure, transparency and openness are key to solving 534 00:34:06,160 --> 00:34:13,040 Speaker 2: challenges and capitalizing on new opportunities by developing best practices 535 00:34:13,120 --> 00:34:17,520 Speaker 2: and resources Partnership on AI is building out the guardrails 536 00:34:17,800 --> 00:34:20,839 Speaker 2: to support the release of open source models and the 537 00:34:20,880 --> 00:34:25,759 Speaker 2: practice of post deployment monitoring. By sharing their work with 538 00:34:25,800 --> 00:34:31,200 Speaker 2: the broader community, Rebecca and Pai are demonstrating how working responsibly, 539 00:34:31,600 --> 00:34:38,160 Speaker 2: ethically and openly can help drive innovation. 
Smart Talks with IBM is produced by Matt Romano, Joey Fishground, Amy Gaines McQuaid, and Jacob Goldstein. We're edited by Lydia Jean Kott. Our engineers are Sarah Bugaier and Ben Holliday. Theme song by Gramoscope. Special thanks to the eight Bar and IBM teams, as well as the Pushkin marketing team. Smart Talks with IBM is a production of Pushkin Industries and Ruby Studio at iHeartMedia. To find more Pushkin podcasts, listen on the iHeartRadio app, Apple Podcasts, or wherever you listen to podcasts. I'm Malcolm Gladwell. This is a paid advertisement from IBM. The conversations on this podcast don't necessarily represent IBM's positions, strategies, or opinions.