1 00:00:00,120 --> 00:00:02,880 Speaker 1: Hey everyone, it's Robert and Joe here. Today we've got 2 00:00:02,880 --> 00:00:05,120 Speaker 1: something a little different to share with you. It's a 3 00:00:05,160 --> 00:00:08,760 Speaker 1: new season of the Smart Talks with IBM podcast series. 4 00:00:09,240 --> 00:00:12,039 Speaker 2: This season, on Smart Talks, Malcolm Gladwell and team are 5 00:00:12,080 --> 00:00:15,280 Speaker 2: diving into the transformative world of artificial intelligence with a 6 00:00:15,320 --> 00:00:18,640 Speaker 2: fresh perspective on the concept of open. What does open 7 00:00:18,720 --> 00:00:21,919 Speaker 2: really mean in the context of AI? It can mean 8 00:00:22,040 --> 00:00:25,639 Speaker 2: open source code or open data, but it also encompasses 9 00:00:25,720 --> 00:00:30,800 Speaker 2: fostering an ecosystem of ideas, ensuring diverse perspectives are heard, 10 00:00:31,160 --> 00:00:33,559 Speaker 2: and enabling new levels of transparency. 11 00:00:33,880 --> 00:00:37,120 Speaker 1: Join hosts from your favorite Pushkin podcasts as they explore 12 00:00:37,120 --> 00:00:40,960 Speaker 1: how openness in AI is reshaping industries, driving innovation, and 13 00:00:41,000 --> 00:00:44,880 Speaker 1: redefining what's possible. You'll hear from industry experts and leaders 14 00:00:44,880 --> 00:00:48,360 Speaker 1: about the implications and possibilities of open AI, and of course, 15 00:00:48,720 --> 00:00:50,920 Speaker 1: Malcolm Gladwell will be there to guide you through the 16 00:00:50,960 --> 00:00:52,720 Speaker 1: season with his unique insights. 17 00:00:53,000 --> 00:00:55,560 Speaker 2: Look out for new episodes of Smart Talks every other 18 00:00:55,640 --> 00:00:59,200 Speaker 2: week on the iHeartRadio app, Apple Podcasts, or wherever you 19 00:00:59,240 --> 00:01:02,520 Speaker 2: get your podcasts, and learn more at IBM dot com 20 00:01:02,560 --> 00:01:14,160 Speaker 2: slash Smart Talks.
21 00:01:11,200 --> 00:01:19,240 Speaker 3: Pushkin Hello, Hello, Welcome to Smart Talks with IBM, a 22 00:01:19,280 --> 00:01:24,600 Speaker 3: podcast from Pushkin Industries, iHeartRadio and IBM. I'm Malcolm Gladwell. 23 00:01:25,600 --> 00:01:28,840 Speaker 3: This season, we're diving back into the world of artificial intelligence, 24 00:01:29,200 --> 00:01:32,200 Speaker 3: but with a focus on the powerful concept of open: 25 00:01:32,800 --> 00:01:38,880 Speaker 3: its possibilities, implications, and misconceptions. We'll look at openness from 26 00:01:38,880 --> 00:01:41,679 Speaker 3: a variety of angles and explore how the concept is 27 00:01:41,720 --> 00:01:45,759 Speaker 3: already reshaping industries, ways of doing business, and our very 28 00:01:45,840 --> 00:01:51,080 Speaker 3: notion of what's possible. In today's episode, Jacob Goldstein sits 29 00:01:51,120 --> 00:01:55,080 Speaker 3: down with Rebecca Finley, the CEO of the Partnership on AI, 30 00:01:55,800 --> 00:01:59,720 Speaker 3: a nonprofit group grappling with important questions around the future 31 00:01:59,720 --> 00:02:04,040 Speaker 3: of AI. Their conversation focuses on Rebecca's work bringing together 32 00:02:04,360 --> 00:02:08,280 Speaker 3: a community of diverse stakeholders to help shape the conversation 33 00:02:08,800 --> 00:02:14,280 Speaker 3: around accountable AI governance. Rebecca explains why transparency is so 34 00:02:14,400 --> 00:02:19,440 Speaker 3: crucial for scaling the technology responsibly, and she highlights how 35 00:02:19,480 --> 00:02:22,960 Speaker 3: working with groups like the AI Alliance can provide valuable 36 00:02:23,000 --> 00:02:27,559 Speaker 3: insights in order to build the resources, infrastructure, and community 37 00:02:27,919 --> 00:02:32,800 Speaker 3: around releasing open source models. So, without further ado, let's 38 00:02:32,880 --> 00:02:34,120 Speaker 3: get to that conversation.
39 00:02:41,240 --> 00:02:42,919 Speaker 4: Can you just say your name and your job? 40 00:02:43,400 --> 00:02:46,600 Speaker 5: My name is Rebecca Finley. I am the CEO of 41 00:02:46,639 --> 00:02:50,320 Speaker 5: the Partnership on AI to Benefit People and Society, often 42 00:02:50,400 --> 00:02:52,280 Speaker 5: referred to as PAI. 43 00:02:53,200 --> 00:02:55,760 Speaker 4: How did you get here? What was your job before 44 00:02:55,800 --> 00:02:57,639 Speaker 4: you have the job that you have now? 45 00:02:58,600 --> 00:03:03,640 Speaker 5: I came to PAI about three years ago having had the 46 00:03:03,680 --> 00:03:08,760 Speaker 5: opportunity to work for the Canadian Institute for Advanced Research, 47 00:03:09,320 --> 00:03:14,040 Speaker 5: developing and deploying all of their programs related to the 48 00:03:14,080 --> 00:03:19,639 Speaker 5: intersection of technology and society, and one of the areas 49 00:03:19,840 --> 00:03:24,160 Speaker 5: that the Canadian Institute had been funding since nineteen eighty 50 00:03:24,200 --> 00:03:28,040 Speaker 5: two was research into artificial intelligence. 51 00:03:28,280 --> 00:03:30,280 Speaker 4: Wow, early, they were early. 52 00:03:31,360 --> 00:03:35,680 Speaker 5: It was a very early commitment and an ongoing commitment 53 00:03:35,800 --> 00:03:41,160 Speaker 5: at the Institute to fund long term fundamental questions of 54 00:03:41,240 --> 00:03:50,160 Speaker 5: scientific importance in interdisciplinary research programs that were often committed 55 00:03:50,200 --> 00:03:53,720 Speaker 5: and funded for well over a decade. The AI 56 00:03:53,920 --> 00:03:57,000 Speaker 5: Robotics and Society program that kicked off the work at 57 00:03:57,040 --> 00:04:02,880 Speaker 5: the Institute eventually became a program very much focused on 58 00:04:03,360 --> 00:04:08,480 Speaker 5: deep learning and reinforcement learning, neural networks.
All of the 59 00:04:09,160 --> 00:04:14,400 Speaker 5: current iteration of AI, or certainly the pre-generative AI iteration 60 00:04:14,560 --> 00:04:19,080 Speaker 5: of AI that led to this transformation that we've seen 61 00:04:19,160 --> 00:04:22,440 Speaker 5: in terms of online search and all sorts of ways 62 00:04:22,440 --> 00:04:25,360 Speaker 5: in which predictive AI has been deployed. So I had 63 00:04:25,360 --> 00:04:28,760 Speaker 5: the opportunity to see the very early days of that 64 00:04:28,880 --> 00:04:33,159 Speaker 5: research coming together, and when, in the early sort of 65 00:04:33,200 --> 00:04:38,480 Speaker 5: two thousand and tens, compute capability came 66 00:04:38,560 --> 00:04:43,080 Speaker 5: together with data capability through some of the Internet companies 67 00:04:43,080 --> 00:04:47,080 Speaker 5: and otherwise, and we really saw this technology start to 68 00:04:47,120 --> 00:04:50,120 Speaker 5: take off, I had the opportunity to start up a 69 00:04:50,160 --> 00:04:55,280 Speaker 5: program specifically focused on the impacts of AI in society. 70 00:04:55,960 --> 00:04:58,720 Speaker 5: There were, as you know, at that time, some concerns 71 00:04:58,760 --> 00:05:03,240 Speaker 5: both about the potential for the technology, but also in 72 00:05:03,360 --> 00:05:05,920 Speaker 5: terms of what we were seeing around data sets and 73 00:05:06,200 --> 00:05:10,440 Speaker 5: bias and discrimination and potential impact on future jobs.
And 74 00:05:10,560 --> 00:05:14,760 Speaker 5: so bringing a whole group of experts, whether they were 75 00:05:14,760 --> 00:05:20,520 Speaker 5: ethicists or lawyers or economists, sociologists into the discussion about 76 00:05:20,560 --> 00:05:24,000 Speaker 5: AI was core to that new program and continues to 77 00:05:24,040 --> 00:05:27,720 Speaker 5: be core to my commitment to bringing diverse perspectives together 78 00:05:27,839 --> 00:05:31,640 Speaker 5: to solve the challenges and opportunities that AI offers today. 79 00:05:32,720 --> 00:05:35,279 Speaker 4: So specifically, what is your job now? What is the 80 00:05:35,320 --> 00:05:37,360 Speaker 4: work you do? What is the work that PAI does? 81 00:05:38,440 --> 00:05:42,320 Speaker 5: I like to answer that question by asking two questions. 82 00:05:42,760 --> 00:05:46,120 Speaker 5: First and foremost, do you believe that the world is 83 00:05:46,360 --> 00:05:50,279 Speaker 5: more divided today than it ever has been in recent history? 84 00:05:51,160 --> 00:05:55,000 Speaker 5: And do you believe that if we don't create spaces 85 00:05:55,560 --> 00:05:58,880 Speaker 5: for very different perspectives to come together, we won't be 86 00:05:58,960 --> 00:06:01,719 Speaker 5: able to solve the challenges that are in front of 87 00:06:01,720 --> 00:06:05,680 Speaker 5: the world today? My answer to both of those questions is yes: one, 88 00:06:06,200 --> 00:06:09,760 Speaker 5: we're more divided, and two, we need to seek out 89 00:06:09,880 --> 00:06:14,600 Speaker 5: those spaces where those very different perspectives can come together 90 00:06:15,120 --> 00:06:18,320 Speaker 5: to solve those great challenges. And that's what I get 91 00:06:18,360 --> 00:06:21,880 Speaker 5: to do as CEO of the Partnership on AI.
We 92 00:06:21,880 --> 00:06:26,599 Speaker 5: began in twenty sixteen with a fundamental commitment to 93 00:06:26,839 --> 00:06:33,320 Speaker 5: bringing together experts, whether they were in industry, academia, civil society, 94 00:06:33,480 --> 00:06:37,720 Speaker 5: or philanthropy, coming together to identify what are the most 95 00:06:37,760 --> 00:06:41,479 Speaker 5: important questions when we think about developing AI centered on 96 00:06:41,560 --> 00:06:44,920 Speaker 5: people and communities, and then how do we begin to 97 00:06:45,040 --> 00:06:47,880 Speaker 5: develop the solutions to make sure we benefit appropriately. 98 00:06:48,880 --> 00:06:54,159 Speaker 4: So that's a very big picture set of ideas. I'm 99 00:06:54,200 --> 00:06:56,520 Speaker 4: curious on a sort of more day to day level. 100 00:06:56,520 --> 00:06:58,839 Speaker 4: I mean, you talk about collaborating with all these different 101 00:06:58,960 --> 00:07:01,280 Speaker 4: kinds of people, all these different groups, what does that 102 00:07:01,320 --> 00:07:04,040 Speaker 4: actually look like? What are some specific examples of how 103 00:07:04,080 --> 00:07:04,960 Speaker 4: you do this work? 104 00:07:05,520 --> 00:07:09,040 Speaker 5: So right now we have about one hundred and twenty 105 00:07:09,160 --> 00:07:14,760 Speaker 5: partners in sixteen countries. They come together through working groups 106 00:07:14,840 --> 00:07:17,760 Speaker 5: that we look at through a variety of different perspectives. 107 00:07:17,800 --> 00:07:21,640 Speaker 5: It could be AI, labor and the economy. It could 108 00:07:21,720 --> 00:07:26,400 Speaker 5: be how do you build a healthy information ecosystem. It 109 00:07:26,440 --> 00:07:29,640 Speaker 5: could be how do you bring more diverse perspectives into 110 00:07:29,760 --> 00:07:34,200 Speaker 5: the inclusive and equitable development of AI.
It could be 111 00:07:34,320 --> 00:07:38,600 Speaker 5: what are the emerging opportunities with these very very large 112 00:07:38,640 --> 00:07:42,560 Speaker 5: foundation model applications and how do you deploy those safely? 113 00:07:43,040 --> 00:07:46,600 Speaker 5: And these groups come together most importantly to say what 114 00:07:46,720 --> 00:07:50,160 Speaker 5: are the questions we need to answer collectively. So they 115 00:07:50,160 --> 00:07:52,960 Speaker 5: come together in working groups. I have an amazing staff 116 00:07:53,040 --> 00:07:56,560 Speaker 5: team who hold the pen on synthesizing research and data 117 00:07:56,920 --> 00:08:02,480 Speaker 5: and evidence, developing frameworks, best practice resources, all sorts of 118 00:08:02,520 --> 00:08:04,960 Speaker 5: things that we can offer up to the community, be 119 00:08:05,080 --> 00:08:08,720 Speaker 5: they in industry or in policy, to say this is 120 00:08:08,800 --> 00:08:11,480 Speaker 5: how we can, well, this is what good looks like, 121 00:08:11,560 --> 00:08:12,960 Speaker 5: and this is how we can do it on a 122 00:08:13,040 --> 00:08:15,000 Speaker 5: day to day basis. So that's what we do, and 123 00:08:15,040 --> 00:08:18,360 Speaker 5: then we publish our materials. It's all open. We make 124 00:08:18,440 --> 00:08:20,680 Speaker 5: sure that we get them into the hands of those 125 00:08:20,680 --> 00:08:23,360 Speaker 5: communities that can use them, and then we drive and 126 00:08:23,440 --> 00:08:25,960 Speaker 5: work with those communities to put them into practice. 127 00:08:26,600 --> 00:08:29,600 Speaker 4: You use the word open there in describing your publications.
128 00:08:30,520 --> 00:08:33,280 Speaker 4: I know, in the world of AI on the sort 129 00:08:33,280 --> 00:08:37,079 Speaker 4: of technical side, there's a lot of debate, say, or 130 00:08:37,200 --> 00:08:42,160 Speaker 4: discussion about kind of open versus closed AI. And I'm 131 00:08:42,200 --> 00:08:46,320 Speaker 4: curious how you kind of encounter that particular discussion. What 132 00:08:46,480 --> 00:08:48,480 Speaker 4: is your view on open versus closed AI? 133 00:08:49,559 --> 00:08:55,360 Speaker 5: So the current discussion between open and closed release of 134 00:08:55,559 --> 00:09:00,400 Speaker 5: AI models came once we saw ChatGPT and 135 00:09:00,520 --> 00:09:05,319 Speaker 5: other very large generative AI systems being deployed out into 136 00:09:05,320 --> 00:09:10,679 Speaker 5: the hands of consumers around the world, and there emerged 137 00:09:11,000 --> 00:09:15,960 Speaker 5: some fear about the potential of these models to act 138 00:09:16,000 --> 00:09:19,479 Speaker 5: in all sorts of catastrophic ways. So there were concerns 139 00:09:19,520 --> 00:09:23,679 Speaker 5: that the models could be deployed with regard to different 140 00:09:24,240 --> 00:09:29,880 Speaker 5: development of viruses or biomedical weapons or even nuclear weapons, 141 00:09:30,000 --> 00:09:33,800 Speaker 5: or through manipulation or otherwise. So there emerged, 142 00:09:34,040 --> 00:09:40,080 Speaker 5: over the last eighteen months, this real concern that these models, 143 00:09:40,200 --> 00:09:44,520 Speaker 5: if deployed openly, could lead to some level of truly 144 00:09:44,640 --> 00:09:50,280 Speaker 5: catastrophic risk.
And what emerged is actually that we discovered, 145 00:09:50,840 --> 00:09:52,720 Speaker 5: through a whole bunch of work that's been done 146 00:09:52,720 --> 00:09:56,120 Speaker 5: over the last little while, that releasing them openly has 147 00:09:56,120 --> 00:09:58,480 Speaker 5: not led and doesn't appear to be leading in any 148 00:09:58,520 --> 00:10:03,760 Speaker 5: way to catastrophic risk. In fact, releasing them openly allows 149 00:10:03,800 --> 00:10:08,559 Speaker 5: for much greater scrutiny and understanding of the safety 150 00:10:08,600 --> 00:10:11,200 Speaker 5: measures that have been put into place. And so what 151 00:10:11,360 --> 00:10:15,000 Speaker 5: happened was sort of the pendulum swung very much towards 152 00:10:15,040 --> 00:10:18,600 Speaker 5: concern about really catastrophic risk and safety over the last year, 153 00:10:18,640 --> 00:10:21,240 Speaker 5: and over the last year we've seen it swing back 154 00:10:21,320 --> 00:10:24,000 Speaker 5: as we learn more and more about how these models 155 00:10:24,320 --> 00:10:26,880 Speaker 5: are being used and how they are being deployed into 156 00:10:26,920 --> 00:10:32,880 Speaker 5: the world. My feeling is we must approach this work openly, 157 00:10:33,280 --> 00:10:36,520 Speaker 5: and it's not just open release of models or what 158 00:10:36,600 --> 00:10:40,520 Speaker 5: we think of as traditional open source forms of model 159 00:10:40,600 --> 00:10:43,760 Speaker 5: development or otherwise, but we really need to think about 160 00:10:43,800 --> 00:10:48,520 Speaker 5: how do we build an open innovation ecosystem that fundamentally 161 00:10:48,559 --> 00:10:52,640 Speaker 5: allows both for the innovation to be shared with many people, 162 00:10:52,720 --> 00:10:56,440 Speaker 5: but also for safety and security to be rigorously upheld.
163 00:10:56,880 --> 00:11:00,319 Speaker 4: So when you talk about this kind of broader idea 164 00:11:00,400 --> 00:11:04,960 Speaker 4: of open innovation beyond open source or you know, transparency 165 00:11:04,960 --> 00:11:08,239 Speaker 4: in models, like, what do you mean sort of specifically? 166 00:11:08,280 --> 00:11:09,800 Speaker 4: How does that look in the world? 167 00:11:10,160 --> 00:11:13,760 Speaker 5: So I have three particular points of view when it 168 00:11:13,800 --> 00:11:16,400 Speaker 5: comes to open innovation, because I think we need to 169 00:11:16,440 --> 00:11:19,880 Speaker 5: think both upstream around the research that is driving 170 00:11:19,880 --> 00:11:23,000 Speaker 5: these models and downstream in terms of the benefits of 171 00:11:23,040 --> 00:11:26,760 Speaker 5: these models to others. So, first and foremost, what we 172 00:11:26,840 --> 00:11:29,440 Speaker 5: have known in terms of how AI has been developed, 173 00:11:29,440 --> 00:11:31,480 Speaker 5: and yes, I had an opportunity to see it when 174 00:11:31,520 --> 00:11:34,559 Speaker 5: I was at the Canadian Institute for Advanced Research, is 175 00:11:34,640 --> 00:11:40,800 Speaker 5: a very open form of scientific publication and rigorous peer review. 176 00:11:41,120 --> 00:11:44,200 Speaker 5: And what happens when we release openly is you have 177 00:11:44,240 --> 00:11:48,440 Speaker 5: an opportunity for the research to be interrogated to determine 178 00:11:48,480 --> 00:11:51,760 Speaker 5: the quality and significance of that, but then also for 179 00:11:51,880 --> 00:11:54,679 Speaker 5: it to be picked up by many others. And then secondly, 180 00:11:55,160 --> 00:11:59,240 Speaker 5: openness for me is about transparency.
We released a set 181 00:11:59,280 --> 00:12:02,840 Speaker 5: of very strong recommendations last year around the way in 182 00:12:02,880 --> 00:12:07,160 Speaker 5: which these very large foundation models could be deployed safely. 183 00:12:07,800 --> 00:12:11,880 Speaker 5: They're all about disclosure. They're all about disclosure and documentation 184 00:12:12,040 --> 00:12:15,359 Speaker 5: right from the early days pre R and D development 185 00:12:15,360 --> 00:12:17,840 Speaker 5: of these systems, right in terms of thinking about what's 186 00:12:17,880 --> 00:12:20,439 Speaker 5: in the training data and how is it being used? 187 00:12:20,600 --> 00:12:24,640 Speaker 5: All the way through to post deployment monitoring and disclosure. 188 00:12:25,160 --> 00:12:28,400 Speaker 5: So I really think that this is important transparency through it. 189 00:12:28,440 --> 00:12:31,360 Speaker 5: And then the third piece is openness in terms of 190 00:12:31,400 --> 00:12:34,760 Speaker 5: who is around the table to benefit from this technology. 191 00:12:35,200 --> 00:12:37,320 Speaker 5: We know that if we're really going to see these 192 00:12:37,360 --> 00:12:42,679 Speaker 5: new models being successfully deployed into education or healthcare or 193 00:12:42,760 --> 00:12:46,520 Speaker 5: climate and sustainability, we need to have those experts in 194 00:12:46,600 --> 00:12:49,920 Speaker 5: those communities at the table charting this and making sure 195 00:12:50,000 --> 00:12:52,320 Speaker 5: that the technology is working for them. So those are 196 00:12:52,320 --> 00:12:54,000 Speaker 5: the three ways I think about openness. 197 00:12:55,080 --> 00:12:58,480 Speaker 4: Is there like a particular project that you've worked on 198 00:12:58,640 --> 00:13:02,560 Speaker 4: that you feel like you know reflects your approach to 199 00:13:02,720 --> 00:13:03,720 Speaker 4: responsible AI?
200 00:13:04,760 --> 00:13:07,720 Speaker 5: So there's a really interesting project that we have underway 201 00:13:07,720 --> 00:13:11,880 Speaker 5: at PAI that is looking at responsible practices squarely when 202 00:13:11,960 --> 00:13:15,480 Speaker 5: it comes to the use of synthetic media. And what 203 00:13:15,520 --> 00:13:19,120 Speaker 5: we heard from our community was that they were looking 204 00:13:19,160 --> 00:13:22,680 Speaker 5: for a clear code of conduct about what does it 205 00:13:22,800 --> 00:13:25,720 Speaker 5: mean to be responsible in this space? And so what 206 00:13:25,920 --> 00:13:28,920 Speaker 5: happened is we pulled together a number of working groups 207 00:13:28,960 --> 00:13:32,800 Speaker 5: to come together. They included industry representatives. They also included 208 00:13:33,120 --> 00:13:38,880 Speaker 5: civil society organizations like WITNESS, a number of academic institutions 209 00:13:38,920 --> 00:13:41,960 Speaker 5: and otherwise, and what we heard was that there were 210 00:13:42,240 --> 00:13:48,160 Speaker 5: clear requirements that creators could take, that developers of the 211 00:13:48,200 --> 00:13:51,280 Speaker 5: technology could take, and then also distributors. So when we 212 00:13:51,320 --> 00:13:55,839 Speaker 5: think about those generative AI systems being deployed across platforms 213 00:13:55,840 --> 00:13:59,040 Speaker 5: and otherwise, and we came up with a framework for 214 00:13:59,120 --> 00:14:02,480 Speaker 5: what responsibility looks like. What does it mean to have consent? 215 00:14:02,640 --> 00:14:06,240 Speaker 5: What does it mean to disclose responsibly? What does it 216 00:14:06,320 --> 00:14:10,560 Speaker 5: mean to embed technology into it?
So, for example, we've 217 00:14:10,559 --> 00:14:13,600 Speaker 5: heard many people talk about the importance of watermarking 218 00:14:13,679 --> 00:14:16,079 Speaker 5: systems, right, and making sure that we have a way 219 00:14:16,120 --> 00:14:18,360 Speaker 5: to watermark them. But what we know from the 220 00:14:18,440 --> 00:14:22,680 Speaker 5: technology is that is a very very complex and complicated problem, 221 00:14:22,920 --> 00:14:26,160 Speaker 5: and what might work on a technical level certainly hits 222 00:14:26,160 --> 00:14:29,160 Speaker 5: a whole new set of complications when we start labeling 223 00:14:29,240 --> 00:14:32,320 Speaker 5: and disclosing out to the public about what that technology 224 00:14:32,360 --> 00:14:35,960 Speaker 5: actually means. All of these, I believe, are solvable problems, 225 00:14:35,960 --> 00:14:39,560 Speaker 5: but they all needed to have a clear code underneath 226 00:14:39,640 --> 00:14:41,840 Speaker 5: them that was saying this is what we will commit to. 227 00:14:42,160 --> 00:14:45,440 Speaker 5: And we now have a number of organizations, many many 228 00:14:45,480 --> 00:14:48,720 Speaker 5: of the large technology companies but also many of the 229 00:14:49,200 --> 00:14:52,320 Speaker 5: small startups who are operating in this space, civil society 230 00:14:52,360 --> 00:14:56,160 Speaker 5: and media organizations like the BBC and the CBC, who 231 00:14:56,200 --> 00:14:59,640 Speaker 5: have signed on. And one of the really exciting pieces 232 00:14:59,720 --> 00:15:03,640 Speaker 5: of that is that we're now seeing how it's changing practice.
233 00:15:03,920 --> 00:15:06,640 Speaker 5: So a year in we asked each of our partners 234 00:15:06,720 --> 00:15:09,920 Speaker 5: to come up with a clear case study about how 235 00:15:09,960 --> 00:15:13,160 Speaker 5: that work has changed the way they are making decisions, 236 00:15:13,600 --> 00:15:18,200 Speaker 5: deploying technology and ensuring that they're being responsible in their use. 237 00:15:18,240 --> 00:15:21,160 Speaker 5: And that is creating now a whole resource online that 238 00:15:21,200 --> 00:15:23,560 Speaker 5: we're able to share with others about what does it 239 00:15:23,640 --> 00:15:26,960 Speaker 5: mean to be responsible in this space. There's so much 240 00:15:27,000 --> 00:15:29,160 Speaker 5: more work to be done, and the exciting thing is 241 00:15:29,200 --> 00:15:31,360 Speaker 5: once you have a foundation like this in place, we 242 00:15:31,440 --> 00:15:35,000 Speaker 5: can continue to build on it. So much interest now 243 00:15:35,040 --> 00:15:38,000 Speaker 5: in the policy space, for example, about this work as well. 244 00:15:39,080 --> 00:15:42,880 Speaker 4: Are there any specific examples of those sort of case 245 00:15:42,920 --> 00:15:47,640 Speaker 4: studies or the real world experiences that say media organizations 246 00:15:47,680 --> 00:15:50,000 Speaker 4: had that are interesting, that are illuminating? 247 00:15:50,400 --> 00:15:55,400 Speaker 5: Yes. So, for example, what we saw with the BBC 248 00:15:56,000 --> 00:15:59,120 Speaker 5: is that they're developing a lot of content as a 249 00:15:59,120 --> 00:16:02,560 Speaker 5: public broadcaster, both in terms of their news coverage 250 00:16:02,560 --> 00:16:05,200 Speaker 5: but also in terms of some of the resources that 251 00:16:05,200 --> 00:16:08,840 Speaker 5: they are developing for the British public as well.
And 252 00:16:08,880 --> 00:16:11,320 Speaker 5: what they talked about was the way in which they 253 00:16:11,360 --> 00:16:16,960 Speaker 5: had used synthetic media in a very very sensitive environment 254 00:16:17,080 --> 00:16:21,520 Speaker 5: where they were hearing from individuals talk about personal experiences, 255 00:16:21,880 --> 00:16:25,000 Speaker 5: but wanted to have some way to change the face 256 00:16:25,240 --> 00:16:28,440 Speaker 5: entirely in terms of the individuals who were speaking. So 257 00:16:28,560 --> 00:16:31,720 Speaker 5: that's a very complicated ethical question, right, how do you 258 00:16:31,840 --> 00:16:34,760 Speaker 5: do that responsibly? And what is the way in which 259 00:16:34,800 --> 00:16:38,280 Speaker 5: you use that technology, and most importantly, how do you 260 00:16:38,400 --> 00:16:41,240 Speaker 5: disclose it? So their case study looked at that in 261 00:16:41,320 --> 00:16:45,200 Speaker 5: some real detail about the process they went through to 262 00:16:45,280 --> 00:16:49,000 Speaker 5: make the decision responsibly to do what they chose, how 263 00:16:49,000 --> 00:16:51,360 Speaker 5: they intended to use the technology in that space. 264 00:16:52,120 --> 00:16:55,119 Speaker 4: As you describe your work and some of these studies, 265 00:16:55,280 --> 00:17:00,320 Speaker 4: the idea of transparency seems to be a theme. Talk 266 00:17:00,360 --> 00:17:02,680 Speaker 4: about the importance of transparency in this kind of work. 267 00:17:03,840 --> 00:17:08,760 Speaker 5: Yeah, transparency is fundamental to responsibility. 
I always like to 268 00:17:08,800 --> 00:17:12,600 Speaker 5: say it's not accountability in a complete sense, but it 269 00:17:12,680 --> 00:17:16,639 Speaker 5: is a first step to driving accountability more fully. So, 270 00:17:17,160 --> 00:17:20,440 Speaker 5: when we think about how these systems are developed, they're 271 00:17:20,440 --> 00:17:25,679 Speaker 5: often developed behind closed doors inside companies who are making 272 00:17:25,760 --> 00:17:29,800 Speaker 5: decisions about what and how these products will work from 273 00:17:29,800 --> 00:17:35,040 Speaker 5: a business perspective, and what disclosure and transparency can provide 274 00:17:35,119 --> 00:17:38,480 Speaker 5: is some sense of the decisions that were made leading 275 00:17:38,560 --> 00:17:41,359 Speaker 5: up to the way in which those models were deployed. 276 00:17:41,440 --> 00:17:46,840 Speaker 5: So this could be ensuring that individuals' private information was 277 00:17:46,880 --> 00:17:51,199 Speaker 5: protected through the process and won't be inadvertently disclosed or otherwise. 278 00:17:51,640 --> 00:17:54,679 Speaker 5: It could be providing some sense of how well the 279 00:17:54,720 --> 00:17:58,439 Speaker 5: system performs against a whole level of quality measures. So 280 00:17:58,520 --> 00:18:01,320 Speaker 5: we have all of these different types of evaluations and 281 00:18:01,359 --> 00:18:04,640 Speaker 5: measures that are emerging about the quality of these 282 00:18:04,680 --> 00:18:08,520 Speaker 5: systems as they're deployed. Being transparent about how they perform 283 00:18:08,600 --> 00:18:11,720 Speaker 5: against these systems is really crucial to that as well. 284 00:18:11,960 --> 00:18:15,040 Speaker 5: We have a whole ecosystem that's starting to emerge around 285 00:18:15,119 --> 00:18:18,119 Speaker 5: auditing of these systems.
So what does that look like? 286 00:18:18,200 --> 00:18:20,600 Speaker 5: We think about auditors in all sorts of other sectors 287 00:18:20,600 --> 00:18:23,119 Speaker 5: of the economy. What does it look like to be 288 00:18:23,200 --> 00:18:26,359 Speaker 5: auditing these systems to ensure that they're meeting all of 289 00:18:26,400 --> 00:18:30,120 Speaker 5: those legal but also additional ethical requirements that we want 290 00:18:30,160 --> 00:18:31,320 Speaker 5: to make sure are in place. 291 00:18:32,640 --> 00:18:37,160 Speaker 4: What are some of the hardest ethical dilemmas you've come 292 00:18:37,280 --> 00:18:39,359 Speaker 4: up against in AI policy? 293 00:18:40,600 --> 00:18:43,959 Speaker 5: Well, the interesting thing about AI policy, right, is that what 294 00:18:44,000 --> 00:18:48,280 Speaker 5: works very simply in one setting can be highly 295 00:18:48,320 --> 00:18:51,679 Speaker 5: complicated in another setting. And so, for example, I have 296 00:18:51,720 --> 00:18:54,560 Speaker 5: an app that I adore. It's an app on my 297 00:18:54,720 --> 00:18:57,719 Speaker 5: phone that allows me to take a photo of a bird, 298 00:18:58,280 --> 00:19:00,520 Speaker 5: and it will help me to better understand, you know, 299 00:19:00,520 --> 00:19:02,879 Speaker 5: what that bird is, and give me all sorts of 300 00:19:02,920 --> 00:19:07,080 Speaker 5: information about that bird. Now, it's probably right most of 301 00:19:07,119 --> 00:19:09,639 Speaker 5: the time, and it's certainly right enough of the time 302 00:19:09,680 --> 00:19:12,640 Speaker 5: to give me great pleasure and delight when I'm out walking. 303 00:19:13,200 --> 00:19:16,679 Speaker 5: You could think about that exact same technology applied.
So 304 00:19:16,880 --> 00:19:20,120 Speaker 5: for example, now you're a security guard and you're working 305 00:19:20,680 --> 00:19:24,280 Speaker 5: in a shopping plaza, and you're able to take photos 306 00:19:24,320 --> 00:19:27,719 Speaker 5: of individuals who you may think are acting suspiciously in 307 00:19:27,760 --> 00:19:30,320 Speaker 5: some way and match that photo up with some sort 308 00:19:30,359 --> 00:19:34,560 Speaker 5: of a database of individuals that may have been found, 309 00:19:34,600 --> 00:19:37,400 Speaker 5: you know, to have some sort of connection to other 310 00:19:37,440 --> 00:19:39,880 Speaker 5: criminal behavior in the past. Right, So what goes from 311 00:19:39,920 --> 00:19:43,359 Speaker 5: being a delightful Oh, isn't this an interesting bird? To 312 00:19:43,480 --> 00:19:47,879 Speaker 5: a very very creepy What does this say about surveillance 313 00:19:47,920 --> 00:19:51,560 Speaker 5: and privacy and access to public spaces? And that is 314 00:19:51,600 --> 00:19:54,800 Speaker 5: the nature of AI. So much of the concern about 315 00:19:54,840 --> 00:20:00,840 Speaker 5: the ethical use and deployment of AI is how an organization 316 00:20:01,520 --> 00:20:06,080 Speaker 5: is making the choices within the social and systemic structure 317 00:20:06,600 --> 00:20:09,760 Speaker 5: in which it sits. So much about the ethics of AI 318 00:20:10,040 --> 00:20:13,280 Speaker 5: is understanding what is the use case, how is it 319 00:20:13,320 --> 00:20:17,080 Speaker 5: being used, how is it being constrained? How does it 320 00:20:17,160 --> 00:20:20,439 Speaker 5: start to infringe upon what we think of as the 321 00:20:20,520 --> 00:20:24,920 Speaker 5: human rights of an individual to privacy? And so you 322 00:20:25,080 --> 00:20:28,280 Speaker 5: have to constantly be thinking about ethics. What could work 323 00:20:28,400 --> 00:20:31,600 Speaker 5: very well in one situation absolutely doesn't work in another.
324 00:20:31,920 --> 00:20:35,440 Speaker 5: We often talk about these as socio-technical questions. Right, 325 00:20:35,840 --> 00:20:39,199 Speaker 5: just because the technology works doesn't actually mean that it 326 00:20:39,240 --> 00:20:41,200 Speaker 5: should be used and deployed. 327 00:20:41,960 --> 00:20:46,959 Speaker 4: What's an example of where the Partnership on AI influenced 328 00:20:47,200 --> 00:20:50,640 Speaker 4: changes, either in policy or in industry practice? 329 00:20:51,840 --> 00:20:54,520 Speaker 5: We talked a little bit about the framework for Synthetic 330 00:20:54,600 --> 00:20:59,000 Speaker 5: Media and how that has allowed companies and media organizations 331 00:20:59,000 --> 00:21:02,120 Speaker 5: and civil society organizations to really think deeply about 332 00:21:02,119 --> 00:21:04,960 Speaker 5: the way in which they're using this. Another area that 333 00:21:05,000 --> 00:21:10,960 Speaker 5: we focused on has been around responsible deployment of foundation 334 00:21:11,240 --> 00:21:14,040 Speaker 5: and large scale models. So, as I said, we issued 335 00:21:14,080 --> 00:21:18,240 Speaker 5: a set of recommendations last year that really laid out 336 00:21:18,480 --> 00:21:22,399 Speaker 5: for these very large developers and deployers of foundation and 337 00:21:22,480 --> 00:21:27,639 Speaker 5: frontier models what does good look like, right from R 338 00:21:27,680 --> 00:21:30,720 Speaker 5: and D through to deployment and monitoring, and it has been 339 00:21:30,880 --> 00:21:34,480 Speaker 5: very encouraging to see that that work has been picked 340 00:21:34,560 --> 00:21:38,840 Speaker 5: up by companies and really articulated as part of the 341 00:21:38,840 --> 00:21:43,520 Speaker 5: fabric of the deployment of their foundation models and systems 342 00:21:43,520 --> 00:21:46,640 Speaker 5: moving forward.
So much of this work is around creating 343 00:21:46,720 --> 00:21:50,280 Speaker 5: clear definitions of what we're meaning as the technology evolves 344 00:21:50,640 --> 00:21:53,280 Speaker 5: and clear sets of responsibility. So it's great to see 345 00:21:53,280 --> 00:21:56,600 Speaker 5: that work getting picked up. The NTIA in the United 346 00:21:56,600 --> 00:22:01,119 Speaker 5: States just released a report on open models and the 347 00:22:01,160 --> 00:22:03,879 Speaker 5: release of open models. Great to see our work cited 348 00:22:03,920 --> 00:22:07,320 Speaker 5: there as contributing to that analysis. Great to see some 349 00:22:07,400 --> 00:22:10,720 Speaker 5: of our definitions on synthetic media getting picked up by 350 00:22:10,800 --> 00:22:14,960 Speaker 5: legislators in different countries. Really, it's just important, I think, 351 00:22:15,000 --> 00:22:17,679 Speaker 5: for us to build capacity, knowledge, and understanding in our 352 00:22:17,720 --> 00:22:22,320 Speaker 5: policy makers in this moment as the technology is evolving 353 00:22:22,400 --> 00:22:24,080 Speaker 5: and accelerating in its development. 354 00:22:25,200 --> 00:22:28,800 Speaker 4: What's the AI Alliance, and why did Partnership on AI 355 00:22:28,880 --> 00:22:29,640 Speaker 4: decide to join? 356 00:22:30,200 --> 00:22:33,720 Speaker 5: So you had asked about the debate between open versus 357 00:22:33,840 --> 00:22:38,320 Speaker 5: closed models and how that has evolved over the last year, 358 00:22:38,680 --> 00:22:43,159 Speaker 5: and the AI Alliance was a community of organizations that 359 00:22:43,320 --> 00:22:47,600 Speaker 5: came together to really think about, okay, if we support 360 00:22:48,080 --> 00:22:51,600 Speaker 5: open release of models, what does that look like and 361 00:22:51,640 --> 00:22:54,119 Speaker 5: what does the community need? And so that's about one 362 00:22:54,240 --> 00:22:58,800 Speaker 5: hundred organizations.
IBM, one of our founding partners, is also 363 00:22:58,880 --> 00:23:02,080 Speaker 5: one of the founding partners of the AI Alliance. It's 364 00:23:02,119 --> 00:23:06,120 Speaker 5: a community that brings together a number of academic institutions 365 00:23:06,520 --> 00:23:09,959 Speaker 5: from many countries around the world, and they're really focused on 366 00:23:10,520 --> 00:23:15,639 Speaker 5: how do you build the resources and infrastructure and community 367 00:23:15,800 --> 00:23:19,600 Speaker 5: around what open source in these large scale models really means. 368 00:23:19,720 --> 00:23:23,199 Speaker 5: So that could be open data sets, that could be 369 00:23:23,320 --> 00:23:27,919 Speaker 5: open technology development. Really building on that understanding that we 370 00:23:28,000 --> 00:23:31,520 Speaker 5: need an infrastructure in place and a community engaged in 371 00:23:31,600 --> 00:23:36,040 Speaker 5: thinking about safety and innovation through the open lens. 372 00:23:36,880 --> 00:23:41,000 Speaker 3: This approach brings together organizations and experts from around the 373 00:23:41,000 --> 00:23:46,960 Speaker 3: globe with different backgrounds, experiences, and perspectives to transparently and 374 00:23:47,119 --> 00:23:51,919 Speaker 3: openly address the challenges and opportunities AI poses. The 375 00:23:51,960 --> 00:23:56,760 Speaker 3: collaborative nature of the AI Alliance encourages discussion, debate, and innovation. 376 00:23:57,560 --> 00:24:00,639 Speaker 3: Through these efforts, IBM is helping to build the community 377 00:24:00,960 --> 00:24:04,640 Speaker 3: around transparent open technology. 378 00:24:05,280 --> 00:24:08,359 Speaker 4: So I want to talk about the future for a minute. 379 00:24:08,560 --> 00:24:11,840 Speaker 4: I'm curious what you see as the biggest obstacles 380 00:24:11,920 --> 00:24:16,359 Speaker 4: to widespread adoption of responsible AI practices.
381 00:24:17,080 --> 00:24:22,399 Speaker 5: One of the biggest obstacles today is an inability, and 382 00:24:22,600 --> 00:24:26,720 Speaker 5: really a lack of understanding, about how to use these 383 00:24:26,800 --> 00:24:31,280 Speaker 5: models and how they can most effectively drive forward a 384 00:24:31,359 --> 00:24:35,959 Speaker 5: company's commitment to whatever products and services it might be deploying. 385 00:24:36,359 --> 00:24:39,720 Speaker 5: So I always recommend a couple of things for companies 386 00:24:39,880 --> 00:24:43,119 Speaker 5: really to think about this and to get started. One 387 00:24:43,440 --> 00:24:47,760 Speaker 5: is think about how you are already using AI across 388 00:24:47,800 --> 00:24:51,560 Speaker 5: all of your business products and services, because already AI 389 00:24:51,880 --> 00:24:55,840 Speaker 5: is integrated into our workforces and into our workstreams, and 390 00:24:55,840 --> 00:24:58,840 Speaker 5: into the way in which companies are communicating with their 391 00:24:58,880 --> 00:25:02,360 Speaker 5: clients every day. So understand how you are already using 392 00:25:02,440 --> 00:25:06,919 Speaker 5: it and understand how you are integrating oversight and monitoring 393 00:25:06,960 --> 00:25:09,720 Speaker 5: into those uses. One of the best and clearest ways in 394 00:25:09,760 --> 00:25:12,720 Speaker 5: which a company can really understand how to use this 395 00:25:12,840 --> 00:25:16,080 Speaker 5: responsibly is through documentation. It's one of the areas where 396 00:25:16,080 --> 00:25:19,320 Speaker 5: there's a clear consensus in the community. So how do 397 00:25:19,359 --> 00:25:22,240 Speaker 5: you document the models that you are using, making sure 398 00:25:22,280 --> 00:25:24,359 Speaker 5: that you've got a registry in place?
How do you 399 00:25:24,480 --> 00:25:27,040 Speaker 5: document the data that you are using and where that 400 00:25:27,119 --> 00:25:29,560 Speaker 5: data comes from? This is sort of the 401 00:25:29,680 --> 00:25:33,000 Speaker 5: first line of defense in terms of understanding both what 402 00:25:33,200 --> 00:25:35,239 Speaker 5: is in place and what you need to do in 403 00:25:35,320 --> 00:25:38,720 Speaker 5: order to monitor it moving forward. And then secondly, once 404 00:25:38,760 --> 00:25:41,600 Speaker 5: you've got an understanding of how you're already using the system, 405 00:25:42,000 --> 00:25:44,240 Speaker 5: look at ways in which you could begin to pilot 406 00:25:44,400 --> 00:25:47,160 Speaker 5: or iterate in a low-risk way using these systems 407 00:25:47,160 --> 00:25:50,080 Speaker 5: to really begin to see how and what structures you 408 00:25:50,160 --> 00:25:52,680 Speaker 5: need to have in place to use it moving forward. 409 00:25:53,040 --> 00:25:57,080 Speaker 5: And then thirdly, make sure that you have a team 410 00:25:57,119 --> 00:26:00,199 Speaker 5: in place internally that's able to do some of this 411 00:26:00,320 --> 00:26:05,720 Speaker 5: cross-departmental monitoring, knowledge sharing, and learning. Boards are very, 412 00:26:05,800 --> 00:26:08,800 Speaker 5: very interested in this technology. So think about how you 413 00:26:08,840 --> 00:26:11,200 Speaker 5: can have a system or a team in place internally 414 00:26:11,240 --> 00:26:14,119 Speaker 5: that's reporting to your board, giving them a sense of 415 00:26:14,160 --> 00:26:18,040 Speaker 5: both the opportunities that it identifies for you and the 416 00:26:18,080 --> 00:26:21,359 Speaker 5: additional risk mitigation and management you might be putting into place.
417 00:26:21,720 --> 00:26:25,160 Speaker 5: And then once you have those things in place, you're 418 00:26:25,280 --> 00:26:28,679 Speaker 5: really going to need to understand how you work with 419 00:26:28,800 --> 00:26:31,879 Speaker 5: the most valuable asset you have, which is your people. 420 00:26:32,520 --> 00:26:35,520 Speaker 5: How do you make sure that AI systems are working 421 00:26:35,880 --> 00:26:38,359 Speaker 5: for the workers as they're going into place? 422 00:26:38,440 --> 00:26:42,080 Speaker 5: The most important and impressive implementations we see are those 423 00:26:42,119 --> 00:26:44,560 Speaker 5: where you have the workers who are going to be 424 00:26:44,600 --> 00:26:48,600 Speaker 5: engaged in this process central to figuring out how to 425 00:26:48,680 --> 00:26:52,520 Speaker 5: develop and deploy it in order to really enhance their work. 426 00:26:52,560 --> 00:26:55,280 Speaker 5: It's a core part of a set of Shared Prosperity 427 00:26:55,280 --> 00:26:57,480 Speaker 5: Guidelines that we issued last year. 428 00:26:58,320 --> 00:27:04,080 Speaker 4: And then, from the side of policy makers, how should 429 00:27:04,119 --> 00:27:09,680 Speaker 4: policy makers think about the balance between innovation and regulation? 430 00:27:10,400 --> 00:27:13,000 Speaker 5: Yeah, it's so interesting, isn't it, that we always think of, 431 00:27:13,119 --> 00:27:17,640 Speaker 5: you know, innovation and regulation as being two sides of 432 00:27:17,680 --> 00:27:22,000 Speaker 5: a coin, when in fact so much innovation comes from 433 00:27:22,760 --> 00:27:26,800 Speaker 5: having a clear set of guardrails and regulation in place.
434 00:27:27,119 --> 00:27:29,800 Speaker 5: We think about all of the innovation that's happened in 435 00:27:30,040 --> 00:27:35,480 Speaker 5: the automotive industry, right? We can drive faster because we 436 00:27:35,720 --> 00:27:38,960 Speaker 5: have brakes; we can drive faster because we have seat 437 00:27:38,960 --> 00:27:42,000 Speaker 5: belts in place. So I think it's often interesting to 438 00:27:42,040 --> 00:27:43,760 Speaker 5: me that we think about the two as being on 439 00:27:43,880 --> 00:27:46,720 Speaker 5: either side of the coin, but in actual fact, you 440 00:27:46,880 --> 00:27:52,600 Speaker 5: can't be innovative without being responsible as well. And so 441 00:27:53,800 --> 00:27:56,280 Speaker 5: I think from a policy maker perspective, what we have 442 00:27:56,359 --> 00:28:00,040 Speaker 5: been really encouraging them to do is to understand that 443 00:28:00,080 --> 00:28:04,520 Speaker 5: you've got foundational regulation in place that works for you nationally. 444 00:28:04,560 --> 00:28:08,480 Speaker 5: This could be ensuring that you have strong privacy protections 445 00:28:08,520 --> 00:28:12,880 Speaker 5: in place. It could be ensuring that you are understanding 446 00:28:12,920 --> 00:28:17,240 Speaker 5: potential online harms, particularly to vulnerable communities. And then look 447 00:28:17,240 --> 00:28:20,520 Speaker 5: at what you need to be doing internationally to be 448 00:28:20,600 --> 00:28:24,639 Speaker 5: both competitive and sustainable. There's all sorts of mechanisms that 449 00:28:24,680 --> 00:28:27,040 Speaker 5: are in place right now at the international level to 450 00:28:27,040 --> 00:28:30,480 Speaker 5: think about how do we build an interoperable space for 451 00:28:30,560 --> 00:28:32,320 Speaker 5: these technologies moving forward.
452 00:28:32,880 --> 00:28:36,479 Speaker 4: We've been talking in various ways about what it means 453 00:28:36,640 --> 00:28:42,280 Speaker 4: to responsibly develop AI, and if you're going to boil 454 00:28:42,360 --> 00:28:46,160 Speaker 4: that down, you know, the essential concerns that people should 455 00:28:46,160 --> 00:28:48,960 Speaker 4: be thinking about, like what are the key things to 456 00:28:49,040 --> 00:28:51,920 Speaker 4: think about in responsible AI? 457 00:28:52,680 --> 00:28:56,920 Speaker 5: So if you are a company, if we're talking specifically 458 00:28:57,000 --> 00:29:01,479 Speaker 5: through the company lens, when we're thinking about responsible use of AI, 459 00:29:02,160 --> 00:29:07,440 Speaker 5: the most important difference between this form of AI technologies 460 00:29:07,480 --> 00:29:10,760 Speaker 5: and other forms of technologies that we have used previously 461 00:29:11,440 --> 00:29:15,719 Speaker 5: is the integration of data and the training models that 462 00:29:15,800 --> 00:29:18,040 Speaker 5: go on top of that data. So when we think 463 00:29:18,080 --> 00:29:21,920 Speaker 5: about responsibility, first and foremost, you need to think about 464 00:29:21,920 --> 00:29:26,120 Speaker 5: your data. Where did it come from? What consent and 465 00:29:26,160 --> 00:29:30,480 Speaker 5: disclosure requirements do you have on it? Are you privacy protecting? 466 00:29:30,920 --> 00:29:34,120 Speaker 5: You can't be thinking about AI within your company without 467 00:29:34,120 --> 00:29:36,880 Speaker 5: thinking about data, and that's both your training data. But 468 00:29:36,960 --> 00:29:41,520 Speaker 5: then once you're using your systems and integrating and interacting 469 00:29:41,560 --> 00:29:44,040 Speaker 5: with your consumers, how are you protecting the data that's 470 00:29:44,080 --> 00:29:48,120 Speaker 5: coming out of those systems as well?
And then secondly, 471 00:29:48,440 --> 00:29:53,280 Speaker 5: when you're thinking about how to deploy that AI system, 472 00:29:53,720 --> 00:29:56,040 Speaker 5: the most important thing you want to think about is: 473 00:29:56,400 --> 00:30:00,280 Speaker 5: are we being transparent about how it's being used with 474 00:30:00,320 --> 00:30:04,000 Speaker 5: our clients and our partners? So, you know, the idea 475 00:30:04,040 --> 00:30:07,160 Speaker 5: that if I'm a customer, I should know when I'm 476 00:30:07,200 --> 00:30:11,520 Speaker 5: interacting with an AI system, I should know when I'm 477 00:30:11,560 --> 00:30:14,440 Speaker 5: interacting with a human. So I think those two pieces 478 00:30:14,560 --> 00:30:17,600 Speaker 5: are the fundamentals. And then of course you want to 479 00:30:17,640 --> 00:30:21,480 Speaker 5: be thinking carefully about, you know, making sure that whatever 480 00:30:21,760 --> 00:30:25,800 Speaker 5: jurisdiction you're operating in, you're meeting all of the legal 481 00:30:25,840 --> 00:30:29,200 Speaker 5: requirements with regard to the services and products that you're offering. 482 00:30:29,720 --> 00:30:34,840 Speaker 4: Let's finish with the speed round. Complete the sentence: in 483 00:30:34,920 --> 00:30:36,800 Speaker 4: five years, AI will... 484 00:30:37,800 --> 00:30:43,280 Speaker 5: Will drive equity, justice, and shared prosperity, if we choose 485 00:30:43,840 --> 00:30:46,720 Speaker 5: to set that future trajectory for this technology. 486 00:30:47,680 --> 00:30:51,480 Speaker 4: What is the number one thing that people misunderstand about AI? 487 00:30:52,600 --> 00:30:56,440 Speaker 5: AI is not good, and AI is not bad, but 488 00:30:56,600 --> 00:31:01,719 Speaker 5: AI is also not neutral. It is a product of 489 00:31:01,760 --> 00:31:06,000 Speaker 5: the choices we make as humans about how we deploy 490 00:31:06,080 --> 00:31:06,880 Speaker 5: it in the world.
491 00:31:08,280 --> 00:31:11,520 Speaker 4: What advice would you give yourself ten years ago to 492 00:31:11,720 --> 00:31:17,000 Speaker 4: better prepare yourself for today? 493 00:31:17,720 --> 00:31:22,760 Speaker 5: Ten years ago, I wish that I had known just 494 00:31:23,160 --> 00:31:30,920 Speaker 5: how fundamental the enduring questions of ethics and responsibility would 495 00:31:31,000 --> 00:31:36,280 Speaker 5: be as we developed this technology moving forward. So many 496 00:31:36,360 --> 00:31:40,000 Speaker 5: of the questions that we ask about AI are questions 497 00:31:40,000 --> 00:31:44,760 Speaker 5: about ourselves and the way in which we use technology, 498 00:31:45,320 --> 00:31:47,960 Speaker 5: and the way in which technology can advance the work 499 00:31:48,000 --> 00:31:48,560 Speaker 5: we're doing. 500 00:31:49,720 --> 00:31:52,080 Speaker 4: How do you use AI in your day to day 501 00:31:52,120 --> 00:31:52,800 Speaker 4: life today? 502 00:31:53,400 --> 00:31:56,760 Speaker 5: I use AI all day, every day. So whether it's 503 00:31:56,840 --> 00:32:00,360 Speaker 5: my bird app when I go out for my morning 504 00:32:00,400 --> 00:32:03,640 Speaker 5: walk, helping me to better identify birds that I see, 505 00:32:03,920 --> 00:32:07,040 Speaker 5: or whether it is my mapping app that's helping me 506 00:32:07,120 --> 00:32:10,680 Speaker 5: to get more speedily through traffic to whatever meeting I 507 00:32:10,800 --> 00:32:13,840 Speaker 5: need to go to, I use AI all the time. 508 00:32:14,360 --> 00:32:18,160 Speaker 5: I really enjoy using some of the generative AI chatbots, 509 00:32:18,760 --> 00:32:21,680 Speaker 5: more for fun than for anything else, as a creative 510 00:32:21,720 --> 00:32:25,520 Speaker 5: partner in thinking through ideas. And integrating it into all 511 00:32:25,640 --> 00:32:28,680 Speaker 5: aspects of our lives
is just so much about the 512 00:32:28,680 --> 00:32:30,000 Speaker 5: way in which we live today. 513 00:32:31,280 --> 00:32:35,520 Speaker 4: So people use the word open to mean different things, 514 00:32:36,120 --> 00:32:39,360 Speaker 4: even just in the context of technology. How do you 515 00:32:39,400 --> 00:32:41,600 Speaker 4: define open in the context 516 00:32:41,160 --> 00:32:41,760 Speaker 4: of your work? 517 00:32:42,400 --> 00:32:44,560 Speaker 5: So there is the question of open as it is 518 00:32:44,680 --> 00:32:48,480 Speaker 5: deployed to technology, which we've talked a lot about. But 519 00:32:48,600 --> 00:32:52,920 Speaker 5: I do think a big piece of PAI is being open-minded. 520 00:32:53,800 --> 00:32:57,240 Speaker 5: We need to be open-minded truly to listen to, 521 00:32:57,800 --> 00:33:01,960 Speaker 5: for example, what a civil society advocate might say about 522 00:33:01,960 --> 00:33:04,200 Speaker 5: what they're seeing in terms of the way in which 523 00:33:04,280 --> 00:33:08,160 Speaker 5: AI is interacting in a particular community. Or we need 524 00:33:08,200 --> 00:33:11,000 Speaker 5: to be open-minded to hear from a technologist about 525 00:33:11,000 --> 00:33:13,680 Speaker 5: their hopes and dreams of where this technology might go 526 00:33:13,760 --> 00:33:17,960 Speaker 5: moving forward. And we need to have those conversations, listening 527 00:33:18,040 --> 00:33:21,400 Speaker 5: to each other, to really identify how we're going to 528 00:33:21,440 --> 00:33:25,680 Speaker 5: meet the challenge and opportunity of AI today. So open 529 00:33:26,640 --> 00:33:31,960 Speaker 5: is just fundamental to the Partnership on AI. I often 530 00:33:32,000 --> 00:33:35,040 Speaker 5: call it an experiment in open innovation. 531 00:33:36,680 --> 00:33:38,400 Speaker 4: Rebecca, thank you so much for your time. 532 00:33:39,280 --> 00:33:41,160 Speaker 5: It is my pleasure. Thank you for having me.
533 00:33:43,680 --> 00:33:46,440 Speaker 3: Thank you to Rebecca and Jacob for that engaging discussion 534 00:33:46,760 --> 00:33:49,640 Speaker 3: about some of the most pressing issues facing the future 535 00:33:49,720 --> 00:33:53,840 Speaker 3: of AI. As Rebecca emphasized, whether you're thinking about data 536 00:33:53,840 --> 00:33:58,520 Speaker 3: privacy or disclosure, transparency and openness are key to solving 537 00:33:58,640 --> 00:34:05,560 Speaker 3: challenges and capitalizing on new opportunities. By developing best practices 538 00:34:05,600 --> 00:34:10,000 Speaker 3: and resources, Partnership on AI is building out the guardrails 539 00:34:10,280 --> 00:34:13,279 Speaker 3: to support the release of open source models and the 540 00:34:13,320 --> 00:34:18,239 Speaker 3: practice of post-deployment monitoring. By sharing their work with 541 00:34:18,280 --> 00:34:23,640 Speaker 3: the broader community, Rebecca and PAI are demonstrating how working responsibly, 542 00:34:24,040 --> 00:34:30,640 Speaker 3: ethically, and openly can help drive innovation. Smart Talks with 543 00:34:30,680 --> 00:34:35,160 Speaker 3: IBM is produced by Matt Romano, Joey Fishground, Amy Gaines McQuaid, 544 00:34:35,600 --> 00:34:39,759 Speaker 3: and Jacob Goldstein. We're edited by Lydia Jean Kott. Our 545 00:34:39,840 --> 00:34:44,400 Speaker 3: engineers are Sarah Brugaer and Ben Holliday. Theme song by Gramoscope. 546 00:34:44,560 --> 00:34:47,799 Speaker 3: Special thanks to the eight Bar and IBM teams, as 547 00:34:47,840 --> 00:34:51,320 Speaker 3: well as the Pushkin marketing team. Smart Talks with IBM 548 00:34:51,440 --> 00:34:55,280 Speaker 3: is a production of Pushkin Industries and Ruby Studio at iHeartMedia. 549 00:34:55,960 --> 00:34:59,399 Speaker 3: To find more Pushkin podcasts, listen on the iHeartRadio app, 550 00:34:59,600 --> 00:35:04,720 Speaker 3: Apple Podcasts, or wherever you listen to podcasts. I'm Malcolm Gladwell.
551 00:35:11,760 --> 00:35:15,480 Speaker 3: This is a paid advertisement from IBM. The conversations on 552 00:35:15,520 --> 00:35:33,640 Speaker 3: this podcast don't necessarily represent IBM's positions, strategies, or opinions.