Robert Lamb: Welcome to Stuff to Blow Your Mind, a production of iHeartRadio. Hey you, welcome to Stuff to Blow Your Mind. My name is Robert Lamb.

Joe McCormick: And I'm Joe McCormick.

Robert Lamb: Joe, while you were out, I conducted an interview with educational technologist Mike Sharples on the book he wrote with Rafael Pérez y Pérez, Story Machines: How Computers Have Become Creative Writers. It's a fascinating read about the nature of storytelling, our history of attempting to instill the spirit of storytelling into the machine, where we seem to be going with this technology, and, quite remarkably, where we are already.

Joe McCormick: Well, I can't wait to hear this interview.

Robert Lamb: All right, well, without further ado, let's jump right in. Hi, Mike, welcome to the show.

Mike Sharples: Hi, it's good to be here.

Robert Lamb: The book, co-authored with Rafael Pérez y Pérez, is Story Machines: How Computers Have Become Creative Writers, publishing July fifth. It's a terrific read, but it has to be stressed from the outset that storytelling isn't just a pastime that humans engage in. Storytelling is something greater, right?

Mike Sharples: Yeah. So storytelling, we suggest in the book, is something that's fundamental to human existence, and has been for millennia. It's been suggested that instead of language coming first and then storytelling evolving after that, perhaps it's the other way around: that perhaps storytelling started as a way of human communication through mime, through expressive gestures, and then language followed from that. So we want to make the point that storytelling is fundamental. At a neural level, it's how we make sense of the world. At a cognitive level, it's about how we create narratives to explain our existence. And in the social world, it's the stories we tell to each other that make us who we are.

Robert Lamb: So when we're talking about machines engaging in storytelling or story creation, we're really getting deep into human creativity and human identity, then.

Mike Sharples: Yeah, we are. And it's both a fascinating insight into human creativity and it's also something of a threat.
So for centuries, writers have been fascinated by this idea that a machine might take over their craft, that a machine might become a storyteller. It goes right back to Jonathan Swift, for example, in Gulliver's Travels, coming across a weird academy where apprentices were manipulating a story machine which churned out academic texts, right through to modern writers such as Roald Dahl, who wrote a short story about an author selling his soul to a machine that generated short stories for him. So it's become something that professional authors in particular have been fascinated by, this idea that a machine might be as creative as a human, and that it might also give us insight into human creativity.

Robert Lamb: Now, of course, it's one thing to imagine these machines. You point out that there were necessary precursors in linguistics, literary analysis, and other fields to even get to the point of considering asking a machine, or making a machine, that can write a story or novel for you.

Mike Sharples: Yeah, and I think there are a number of ways we can approach this. One of them is to look at language. We are language machines. We have been trained in how to manipulate language as humans, and we can now design machines that can copy that in a very expressive way. I'm sure we'll come onto that, but AI systems such as GPT-3 are expert wordsmiths. So that's one route. Another route is simulating characters. Some of the newer computer games now have non-player characters that can tell stories to the human players. And then the third way is to build models of the creative mind, and that's what my colleague Rafael Pérez y Pérez has been doing for many years with his MEXICA program: trying to build a model of the human creative mind. So there are different routes for coming at modeling storytelling and understanding storytelling with machines.

Robert Lamb: And the one that you mentioned that I think is very fascinating is Vladimir Propp's Morphology of the Folktale.
This idea of taking apart a folk tale tradition and figuring out, like, what are the basic strokes, what are the basic elements, and thus creating, I guess, sort of the palette for recreating stories.

Mike Sharples: Yeah. This was back in the early twentieth century. Vladimir Propp was part of a group of Russian linguists and folk tale academics who became interested in the morphology, the structure, of folk tales. And he realized that Russian folk tales had this very similar structure, just as fairy tales in other Western traditions have. And so he set about trying to write a set of rules that would both analyze these folk tales and show their underlying structure, but could also be turned around to generate them: that if you used these rules essentially as what we now call programs or algorithms, they could be employed to create new folk tales. And so he set the foundations for the structural analysis of stories way back, what, a hundred and twenty years ago now. And interestingly, some of the earliest computer programs to generate stories were based on Propp's formalism. If you look at it, it's really like a very pared-down computer algorithm.

Robert Lamb: Now, as for some of the first actual text-generating machines, you mention a few different examples of these. Which are the most important to mention, or which one is the most important to mention?

Mike Sharples: So one of the earliest ones was by Christopher Strachey, who was a colleague of Alan Turing, working in Manchester, and he developed a very, very early computer program, on one of the first prototype computers, that generated love letters, Victorian love letters. We can speculate on why he would want to write a program to generate love letters, but he did, and pinned them up on the wall of his lab.
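Strachey's generator worked by slotting randomly chosen words into fixed sentence templates, which is simple enough to sketch in a few lines of Python. The word lists and templates below are invented stand-ins, not Strachey's originals; the sign-off "M.U.C." echoes how his letters were credited to the Manchester University Computer.

```python
import random

# Invented stand-ins for Strachey's hand-picked word lists.
ADJECTIVES = ["beautiful", "darling", "tender", "precious", "ardent"]
NOUNS = ["heart", "affection", "longing", "devotion", "fancy"]
ADVERBS = ["fondly", "passionately", "tenderly", "dearly"]

# Fixed sentence frames with slots, filled at random each time.
TEMPLATES = [
    "You are my {adj} {noun}.",
    "My {noun} clings {adv} to your {adj} {noun2}.",
    "I cherish your {adj} {noun} {adv}.",
]

def love_letter(sentences=4):
    """Build a letter by filling random templates with random words."""
    body = " ".join(
        random.choice(TEMPLATES).format(
            adj=random.choice(ADJECTIVES),
            noun=random.choice(NOUNS),
            noun2=random.choice(NOUNS),
            adv=random.choice(ADVERBS),
        )
        for _ in range(sentences)
    )
    return "Darling Sweetheart, " + body + " Yours " + random.choice(ADVERBS) + ", M.U.C."

print(love_letter())
```

Run it a few times and both the charm and the limitation are obvious: endless variety at the word level, with no memory or intent behind it.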
Mike Sharples: And since then there have been a number of people who have taken a particularly linguistic approach. One of the pioneers was a linguist called Sheldon Klein, and Sheldon Klein had this big, grandiose project to try and model human language production as an algorithm, as a computer program, and then through this to try to understand the origins of language. And so, along with a group of colleagues, many of them PhD students, he produced one of the earliest programs that generated stories. It generated murder mysteries, sort of country-house, Agatha Christie-type mysteries, and it was part of this big project to formalize language in a generative way. The problem was that the stories the program generated were pretty trivial. People gathered at the country house, there was a murder, somebody tried to investigate it, the end. And so although it was seen as a novelty at the time, it wasn't really respected for the great linguistic project that lasted for twenty or thirty years.

In the nineteen nineties there were a number of attempts to write entire novels by computer, and probably the most interesting one was by Scott French, who programmed a Mac computer to generate a novel in the style of Jacqueline Susann. I've got it here. It's called Just This Once, and it is a complete published novel in the style of the potboiler author Jacqueline Susann, and it seems to be genuine. There's a picture of him with his Mac computer on the cover of the book. It took him eight years to design this very early AI system that would mimic the style of this author, and he engaged in a dialogue with the program to generate an entire three-hundred-page novel. So that was probably the first, and right up to this day the greatest, example of story writing with a machine. What's fascinating now is that what took him eight years to do could now be done in a few seconds with the most recent AI generator programs.
Robert Lamb: Yeah, in the book you write, just to read a quick quote: "It took just twenty years to go from a program that wrote love letters to one that created complete short stories, and a further twenty years to a published three-hundred-page novel written in partnership with the computer." It's fascinating to think about the technological advancement during that time. In broad strokes, what were the key advancements going on here that made this possible?

Mike Sharples: Well, the first advancement was to have interactive computer systems that you could program in a high-level programming language, and that's what Christopher Strachey and Alan Turing were working on, and they could then demonstrate this by generating very simple love letters. The next stage was to be able to produce grammars, generative grammars, and this goes back to the work of Propp, who realized that you could have grammars that didn't just generate individual sentences but could generate entire stories: that you could describe the structure of a story in terms of something like a formal grammar. And in the seventies there were a number of projects to put that kind of grammar into a program that would generate a short story; that's what Sheldon Klein and his team did. And then the next step beyond that was to write symbolic AI programs that modeled the style of a particular writer, and that was the great achievement of Scott French. It was a symbolic AI expert system of style. And then we come right up to date, and there are programs like GPT-3, which are hugely competent and well-trained language systems. They aren't rule-based systems; they don't have something like the Sheldon Klein grammar inside them. They have been trained on billions of pieces of text, and they have many millions of interconnections that create an internal language model, which they then use to generate texts in particular styles.
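To make the grammar stage concrete: a story grammar of the kind Propp inspired and Klein built can be written directly as rewrite rules that a program expands top-down. This is a toy sketch with made-up rules, loosely shaped like Klein's country-house mysteries, not his actual system.

```python
import random

# Toy rewrite rules in the spirit of a Propp-style story grammar.
# Keys are nonterminal symbols; each expands to one of its listed
# productions, whose items are either further symbols or plain text.
GRAMMAR = {
    "STORY": [["SETTING", "CRIME", "INVESTIGATION", "ENDING"]],
    "SETTING": [["The guests gathered at the country house."],
                ["A storm had closed the roads to the manor."]],
    "CRIME": [["That night,", "VICTIM", "was found murdered."]],
    "INVESTIGATION": [["DETECTIVE", "questioned everyone in turn."],
                      ["DETECTIVE", "searched the library for clues."]],
    "ENDING": [["At last,", "DETECTIVE", "named the culprit. The end."]],
    "VICTIM": [["the colonel"], ["the heiress"]],
    "DETECTIVE": [["the inspector"], ["a sharp-eyed guest"]],
}

def expand(symbol):
    """Recursively rewrite a symbol until only plain text remains."""
    if symbol not in GRAMMAR:
        return symbol  # plain text: emit as-is
    production = random.choice(GRAMMAR[symbol])
    return " ".join(expand(part) for part in production)

print(expand("STORY"))
```

Note that each nonterminal is re-expanded wherever it occurs, so the detective can change identity between sentences: a miniature version of the triviality and incoherence problem Sharples describes in Klein's output.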
Mike Sharples: You can start it in a style and it will continue in that style. You can give it an instruction of what sort of story to write and it will continue that story. But the really important thing to say is that there are two different sorts of AI. The sort of AI that Scott French used was writing explicit rules; that's why it took him eight years, to code those rules to imitate one person's style. The GPT-3 type AI transformer programs induce, infer, those rules from being trained on billions and billions of pieces of text. So there are two different sorts of AI, and both have their strengths and weaknesses. One of the fascinating things for the future is whether we can put them together, whether we can merge those two different sorts of AI into a sort of universal story machine.

Robert Lamb: Wow. And so GPT-3 is the current model, is that correct? Or are we at four yet?

Mike Sharples: So there isn't a GPT-4 yet, although I'm sure there's one in the pipeline; they're continually revising GPT-3. And just for those who don't know, the GPT models were developed, and are still being developed, by a company called OpenAI, which was founded by a group of entrepreneurs including Elon Musk and others, and that company was set up to explore the opportunities of AI for social good. It has developed a number of different programs. There's one it's developed for art, called DALL-E, which can do the same for art and images as GPT-3 can for words and stories. But GPT-3 is now in its third generation. In essence, the way it works is that it's been trained on billions of pieces of text. It uses those texts to form an internal model of both the surface structure of language and also the internal structure of language, essentially how the world works. And initially, the early GPT models were sentence completers, like very highly trained, souped-up sentence completers of the sort that you've got on your mobile phone.
But they can look back at the last five hundred words or so. They have a big attention window, so they know what they've written before, and they use this to continue in the same style, the same structure. So they give a very plausible simulation, a very plausible imitation of coherent language, as if it were being written by a human. Now, the problem is that it's highly believable, but it doesn't have any sense of itself. These systems can't reflect on what they've written. They can't look back and say: does this make sense? Does this fit with a good model of the world? Is it legal, is it decent? In essence, they're amoral. They don't have any internal sense of what's right, any morality. And so as story machines they're great, because they can tell fascinating, plausible, engaging, sometimes even poignant stories. But for other purposes, like writing newspaper articles or writing student essays, they can be dangerous, because they don't know what they've written, and they don't know whether what they've written is decent, honest, and truthful.

Robert Lamb: Now, our listeners can actually get a taste of this by going to story-machines.net. You have an interface there where you can put in just a title, or a title and some text, like an opening line of a story. And this is powered by GPT-3, correct?

Mike Sharples: Yeah, that's right. So what I did was write a website with an interface to the most recent GPT-3 language model. It basically provides a very simple way for you to write the title of a story and a few opening words, press the create button, and it comes back with about a two-hundred-and-fifty-word introduction to a short story. So it gives you a taste of what GPT-3 can do. And yeah, I really recommend that you go and try it. I launched it just a couple of days ago, and people have already been generating fascinating short stories with it. So go and give it a try.
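The site's behavior (title plus opening words in, roughly two hundred and fifty words of story out) can be approximated with the OpenAI completions API of the GPT-3 era. Here is a minimal sketch using the 0.x openai Python library; the engine name, prompt format, and parameter values are assumptions for illustration, not the site's actual code.

```python
import openai  # the 0.x-era OpenAI Python library (pip install openai)

openai.api_key = "sk-..."  # your own API key goes here

def story_opening(title, opening_words):
    """Ask a GPT-3 completion engine to continue a titled story."""
    prompt = f"Title: {title}\n\nStory: {opening_words}"
    response = openai.Completion.create(
        engine="text-davinci-002",  # assumed engine, not the site's actual one
        prompt=prompt,
        max_tokens=350,             # roughly a 250-word continuation
        temperature=0.8,            # some randomness, so each press differs
    )
    return opening_words + response.choices[0].text

print(story_opening("Ernest Hemingway in London",
                    "The fog came in from the river"))
```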
Mike Sharples: It's story-machines.net.

Robert Lamb: Yeah, I was firing this up when I was reading the book yesterday, and eventually I had to bring my son over and show it to him. I was really impressed toying around with it: trying a title that sort of suggested a certain genre, maybe throwing in a first sentence that referenced a certain genre. And then I was also really impressed when I would put in a title and then put in the first line of an existing short story by an author. I think I used Clark Ashton Smith as an example, and I used Cormac McCarthy; I put in a line of Cormac McCarthy's work, and I was impressed at how well it seemed to model a portion of writing as if it were by these authors.

Mike Sharples: Yeah, exactly. I'll give you an example, because it's quite good to show an example; you can cut it out if you want to. So here's one that I did, on "Ernest Hemingway in London": "The fog came in from the river and it was cold for April. I walked up past the British Museum and turned down towards Holborn. I had a lunch date with Harriet at the Criterion, and I was early. It was one of those heavy gray days that seemed to press down on the city, weighing it down. The traffic moved slowly through the streets, and the people hurried along with their heads down, trying to get out of the cold. I glanced in the windows of the shops as I walked, looking for something to buy Harry for his birthday. I knew he'd appreciate a good bottle of scotch, but I wanted to get him something else as well." That's just the start of a story in Ernest Hemingway's style, and you can give it whatever style you want. I found it works best when you do a weird juxtaposition. There was someone just a few hours ago who did one on "The Sad Sandwich", and it was a really poignant, sad story about a poor, neglected sandwich.
So try doing some juxtapositions of words like that.

Robert Lamb: Yes, absolutely, and I'd love to hear back from listeners after they've toyed around with this and explored it. But of course, recognizing the power of this technology, we certainly get into this area of anxiety, perhaps, but also hope and opportunity. I guess on the anxiety side of things, the first place my mind went: I remember seeing Max Tegmark reference an illustration that was like a topography of human abilities and jobs, with the idea that the higher elevations are going to be more protected from the rising sea levels of AI. So chess and Jeopardy were already in the water; speech recognition, investment, and social interaction were there in the lowlands, going next. And then in this particular image we had science as the highest peak, just above the peaks of book writing and AI design. So I was just wondering, do you feel like this representation was accurate, and/or have the waters just risen so high already?

Mike Sharples: I think they've risen high already. I think the ones that are going to be protected from the rising tide of AI are probably the caring professions, the nurses, the child carers. But any profession that works with words is going to find both an opportunity and a threat, I think, in generative AI. So the opportunity is that it's a new kind of tool. The tools that we've had up to now have tended to be ones that slow down writing, like a thesaurus or a spell corrector; they make you pause and check. You can do them at the end, but there's always a temptation to look up a word, to slow down. What's different about these is that they can be used to speed up your writing. You can, and I've tried doing this, write a paragraph and, when it starts to dry up, then hand it over to the machine to write the next paragraph. I'm not a fiction writer.
I found it quite empowering to use GPT-3 as a writing buddy. So I would write a paragraph, it would write the next paragraph and probably take it off in some unexpected direction that I would then have to follow, and it might introduce a new character, a plot twist. So both as a tool for budding writers and also perhaps as a prop for professional writers, particularly ones with deadlines to meet, that's the opportunity.

I think the threat is the inverse side of that: if you see it as a crutch, and rather than trying to do your own writing you just hand it over to the machine, then it's very easy to become lazy. And also, as I've said, they are amoral machines. So if you're trying to do scientific writing, or you're trying to do accurate journalism, then beware, because they may well throw in some entirely fake research study, some entirely inaccurate, fake reference, perhaps a reference to some completely nonsensical or inaccurate event that's happened in the world. So you've always got to be aware of the facts that it throws up. And there's a good reason for that: it isn't a fact checker. It isn't a Wikipedia or even a Google search. It is a language machine. It loves, in an anthropomorphic sense, it loves playing with words, but those words don't necessarily make sense. So if you're going to use tools like GPT-3 as aids for writing, then you have to be very careful to cross-check the facts in the output of that machine, to make sure that it is accurate and honest and truthful.
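That alternating, paragraph-for-paragraph workflow is just a short loop on top of the same completion call. A sketch, reusing the openai setup from the earlier example; the loop and its parameters are illustrative guesses, not Sharples's actual setup.

```python
def writing_buddy():
    """Alternate paragraphs: you write one, the model writes the next."""
    draft = ""
    while True:
        paragraph = input("\nYour paragraph (blank line to stop): ").strip()
        if not paragraph:
            break
        draft += paragraph + "\n\n"
        response = openai.Completion.create(
            engine="text-davinci-002",  # assumed engine
            prompt=draft,               # the whole draft so far is the context
            max_tokens=200,
            temperature=0.9,            # high enough to invite plot twists
        )
        machine_paragraph = response.choices[0].text.strip()
        draft += machine_paragraph + "\n\n"
        print("\nMachine continues:\n" + machine_paragraph)
    return draft
```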
Robert Lamb: My mind also went to the possibility of almost accidental plagiarism, because I put in the first line of a Clark Ashton Smith story and it threw in a fascinating plot twist that was not in the original story, and that I don't think I've ever seen in a story. And so part of me was wondering, well, you know, I should latch onto this, maybe I could use this. But then the other part, you know, the lights coming on in my mind, was saying, but hold on, just because I haven't read it doesn't mean it doesn't already exist out there. Might we run into situations where the AI is reproducing something, you know, perhaps honestly, if we want to use that term, when in actuality it may exist out there in some story or another?

Mike Sharples: Yeah, I mean, I think the first thing to note is that it's not working at the sort of word and sentence level. So it's not copying bits of text from the web or from published books. It's working below that, basically at the phoneme level. It's putting together pieces of words, but it's putting together those pieces of words in a hugely proficient way. So I've tried taking the output of GPT-3 and doing Google searches on phrases and sentences, and you don't find them. So it seems like they are genuinely producing novel pieces of text. For example, if students are going to use these for writing essays, which is already happening (there are already companies advertising the services of AI generators for students to write their essays), plagiarism checkers won't detect them. I've tried putting them through plagiarism checkers and they come out at sort of ninety-five percent originality. So they're not copying bits of text from the web; they are genuinely generating new language. Now, of course, there are phrases that may pre-exist, and if you give it highly constrained styles, like Shakespeare's sonnets, then it may come up with previous lines from other Shakespeare sonnets.
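That kind of novelty check (searching for generated phrases in existing text) amounts to looking for long shared word sequences. Here is a self-contained sketch that scores originality against a local reference file, standing in for the web search or the plagiarism checker; the filenames are hypothetical.

```python
def ngrams(text, n=6):
    """All n-word sequences in a text, lowercased for comparison."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def originality(generated, reference, n=6):
    """Fraction of the generated text's n-grams absent from the reference."""
    gen = ngrams(generated, n)
    if not gen:
        return 1.0
    return 1 - len(gen & ngrams(reference, n)) / len(gen)

# Hypothetical files standing in for GPT-3 output and a text collection.
story = open("generated_story.txt").read()
corpus = open("reference_corpus.txt").read()
print(f"Originality at the 6-gram level: {originality(story, corpus):.0%}")
```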
Mike Sharples: But providing you give it a broad enough brief, providing you give it a general enough style, even if it's the style of a particular author, then it will generate original text. And it's still a bit scary too. You know, I've talked about this with other people and they've said, but surely it's copying from the web. No, it isn't. It's generating new text in the style of that author, or in the style of that piece of fiction or piece of journalism.

Robert Lamb: So you already touched on the collaborative possibilities here. But having touched on school papers and such, what do you think are the educational opportunities with this technology?

Mike Sharples: I think the main educational opportunities are for beginning writers. It's a way to explore expressivity and creativity. One of the problems when you're beginning writing is that you tend to see everything as a linear process. You write some words, you write some more words; there's a flow of writing. It's very difficult to get out of that flow and to think about alternative ways of expressing something, how it might be different. And what machines like GPT-3 can do is help you to see another way of continuing, another way of expressing your ideas. It will look back over the last five hundred words or so that you've written and perhaps take it in new directions. So it's a way for budding writers to explore possibilities. And you can take what you've written so far and press the create button a number of times, and each time it will take your writing in a different direction. So that's one way. Another way, in a class situation, is for a teacher to generate a number of different articles on a topic. So give it a topic, like what's the effect of climate change on rising sea levels, and get it to generate a number of different articles, and then critique them, because, as I say, it doesn't always get its facts right.
And so to look on these as pieces of journalism you might find on the web, and to take a critical stance. So it's a good exercise for a teacher to give some generated articles to students and say: criticize these. We know they're written by a machine, so what's wrong with them? Generally the surface structure is pretty good, the spelling's correct, the style is good. But the deeper you go into these machine-generated texts, the more you find problems with them. So it's a good class exercise.

And then lastly, I think it's going to be another tool, a companion, that writers use. Just as in the early days of word processors there was a lot of criticism that they slowed down writing, that you were reading from the screen rather than from the page, that style checkers were making writing more conformist, there will, quite rightly, be people who say these new tools are forcing a machine-type creativity. But I think if we use them wisely, to extend and to critique our own creativity, then they offer interesting and exciting opportunities.

Robert Lamb: Do you think we better understand human creativity for having gone through this technological journey?

Mike Sharples: That's why I started on this journey with my colleague Rafael. I started work as a PhD student trying to understand children's creative writing and to develop tools for children to develop their creativity. I became fascinated by machine creativity: to try and explore what it is that a machine can do in terms of creativity, and where does that stop? What are the limits of machine creativity? And beyond those limits, how does that relate to human creativity? What is it that we can do that a machine can't? Now, over the years, perhaps the gap between machine creativity and human creativity is narrowing, but it's still there, and it gives us insights into the way in which we write, the way in which we think.
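The classroom exercise described above (several differing machine-written articles on one topic for students to critique) maps naturally onto the completion API's n parameter, which returns multiple completions for a single prompt. Again a sketch against the 0.x openai library, with an assumed engine name and prompt wording.

```python
def drafts_for_critique(topic, count=3):
    """Generate several differing articles on one topic for class critique."""
    response = openai.Completion.create(
        engine="text-davinci-002",  # assumed engine
        prompt=f"Write a short article on: {topic}\n\n",
        max_tokens=400,
        temperature=0.9,  # high temperature, so the drafts diverge
        n=count,          # several completions from the one prompt
    )
    return [choice.text.strip() for choice in response.choices]

for i, article in enumerate(
        drafts_for_critique("the effect of climate change on rising sea levels"), 1):
    print(f"Draft {i}:\n{article}\n")
```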
Mike Sharples: And because these new generative AI programs don't work in a human-like way, it becomes a really interesting challenge to say: what's alien about them? What's different about the writing they produce that shows they aren't human? And what does that say about human experience and human creativity?

Robert Lamb: Now, looking into the future and getting more speculative: say I'm a fan of Frank Herbert's Dune novels. Do you foresee a future in which one would just be able to ask an AI to generate the final books in the series that Frank would have written, or perhaps more novels in this universe he created, that sort of thing? Or say you're a fan of a particular short story author and you're like, why would I read anything other than stories by this particular author? I'm just going to ask the AI to generate more of them for me. Might we easily arrive at such a future? And if so, what does that mean for us as both consumers and producers of creative writing?

Mike Sharples: I think we'll arrive at that space pretty soon. I think they will be pastiches, but they may be pastiches that you can't tell from the original, and there will certainly be fans of authors like Frank Herbert who will be happy to accept them as generated in Frank Herbert's style, particularly if they have interesting new characters, interesting new plots. That will happen with short stories too, Neil Gaiman-type short stories; I'm sure that will happen quite soon, if it hasn't happened already. There may be fan fiction forums where those sorts of AI-generated pastiches are already circulating. But I think the future is more likely to be around interactive fiction. At the moment, computer games are kind of reaching a plateau: the graphics are becoming more and more realistic, the interaction is becoming more and more engaging, but the AI is lagging behind.
Soon you'll be able to have AI-based characters in games that can tell stories, that aren't only asked to solve a problem or guide you to the treasure, but that you can engage with as conversational partners. They will take the story forwards. And once you do that, then you can get onto interactive soap operas, interactive worlds where you've got both human and machine partners. Now, that can take you into all sorts of dark areas, but also into all sorts of engaging aspects of new interaction, new immersive fiction, new types of social interaction that involve both machines and humans. So I think, rather than trying to emulate a particular writer, developing interactive fiction, where you have a continual story that you can dip in and out of with other characters, human and machine, is likely to be the most engaging, and probably the most influential, use of story machines in the near future.

Robert Lamb: In the book, you go into several wonderful examples of how we've reached this point, you know, along the road, with video game examples I wasn't familiar with, like Colossal Cave Adventure and Dwarf Fortress. So I guess most of these examples haven't necessarily been part of, let's say, the mainstream of video game culture.

Mike Sharples: No, they haven't been part of the mainstream. There's been a kind of tributary. So the mainstream has run sort of from Pong and Space Invaders onwards, in terms of graphics and interactivity, and then we get to Grand Theft Auto, where you have hugely realistic simulated worlds, and it's the gameplay, the action, the game mechanics that's really important. But there's been another tributary that's been mainly followed by people who are fascinated by stories and words and storytelling.
And it started with Colossal Cave Adventure, which, for those of you who don't know, was in the late nineteen seventies, by a couple who were cavers. Crowther, I think that's his name, I would have to check it. He developed this program which generated a world that you could explore. You could go down and explore a cave system, and it was all done entirely through text. So: you are going through a dark forest, you find a grate in the forest floor. What do you do next? You type "go down". It then comes back with a description of where you are: you are underneath the forest floor, at the spring of a stream. And then you can go left, you can go right, you can go down. So you are guiding this character through a textual world, and the more you get into it, the more you engage with not only descriptions but also characters in that world, and you can collect things, do things. So it's a textual world that you're exploring.

Since then there have been other extensions of that which make these textual characters more believable, so you actually interact with them. They can not only give you things, but they can behave as real agents in the world would, as real humans would. And Dwarf Fortress is an example of that, where you've got a hugely realistic world. There was an example I give in the book about drunk cats. The designers of Dwarf Fortress had created all sorts of properties of animals, but the ability of cats to drink alcohol had deliberately not been programmed in. Yet in the tavern in Dwarf Fortress there were these cats lying dead, and only through interacting with the code did the designers realize what had happened: the cats went into the tavern, people in the tavern had spilt alcohol on the floor, the cats had walked through it, the cats had licked their paws, and the cats had become poisoned by the alcohol.
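The interaction loop Sharples walks through (print a room description, read a typed command, move, repeat) is the entire skeleton of Colossal Cave-style interactive fiction. A minimal sketch, with rooms and exits invented to echo his forest-and-grate example rather than Crowther's actual map:

```python
# A minimal Colossal Cave-style world: rooms, exits, and a command loop.
ROOMS = {
    "forest": {
        "text": "You are in a dark forest. There is a grate in the forest floor.",
        "exits": {"down": "spring"},
    },
    "spring": {
        "text": "You are underneath the forest floor, at the spring of a stream.",
        "exits": {"up": "forest", "left": "cavern"},
    },
    "cavern": {
        "text": "You are in a vast cavern. Water drips somewhere in the dark.",
        "exits": {"right": "spring"},
    },
}

def play(start="forest"):
    """Read commands like 'go down' and move through the textual world."""
    location = start
    while True:
        print(ROOMS[location]["text"])
        command = input("> ").strip().lower()
        if command in ("quit", "q"):
            break
        direction = command.removeprefix("go ").strip()
        if direction in ROOMS[location]["exits"]:
            location = ROOMS[location]["exits"][direction]
        else:
            print("You can't go that way.")

play()
```

Everything beyond this, from the characters Sharples mentions to Dwarf Fortress's emergent cats, is richer state and rules layered on the same loop.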
Mike Sharples: So you have these hugely rich and realistic worlds that are realized through text. And so it's become a bit of a tributary of game playing, because you do have to interact with text, with words. But as they begin to merge now with the mainstream games, then you will have spoken dialogue. You will be able to meet your favorite characters from soap operas, from streaming series, and you can talk to them, you can go on dates with them, you can go on holidays with them, you can be part of their story. So it's bringing together those two streams of game design, these rich visual worlds and now these textual, believable story worlds, that I think is going to be the next generation of interactive games.

Robert Lamb: It's going to be really exciting to see this come together.

Mike Sharples: Yeah, I think it will. And I think one of the opportunities in the future is that you will be able to live in these worlds for an extended period of time. So you don't just play the game for forty minutes; just as you have a TV series, you'll have a TV series where you live in this world. Now, that's both scary and exciting, to be able to live for an extended time in a virtual world where you can talk to and engage with the characters.

Robert Lamb: So what do you think the machines are going to want to tell stories about?

Mike Sharples: So it's been suggested that because machines don't have human experience, they will never be able to tell stories that are experientially rich. They don't know the human condition. They've never been there, they've never fallen in love, they've never seen a sunset. So they will reach a plateau where they may produce pastiches, but they won't be able to describe or to weave in the human condition; they won't be able to engage you in any deep human way in a story. Now, I'm not sure that's right.
It's possible you 574 00:40:09,640 --> 00:40:14,280 Speaker 1: could get around that by having embodied storytellers, so embodied 575 00:40:14,360 --> 00:40:16,560 Speaker 1: robots that can go out into the world, that can 576 00:40:16,600 --> 00:40:19,560 Speaker 1: gaze at the sunset, that can go for walks, that 577 00:40:19,680 --> 00:40:24,719 Speaker 1: can feel the wind on their metal faces. But there's 578 00:40:24,719 --> 00:40:32,000 Speaker 1: another possibility, which is that already programs have something approaching 579 00:40:32,000 --> 00:40:36,760 Speaker 1: a social life, that they are connected to other entities 580 00:40:36,760 --> 00:40:40,800 Speaker 1: on the web. They are part of a social network. 581 00:40:41,640 --> 00:40:47,359 Speaker 1: And if they can tell stories about their worlds, their 582 00:40:47,440 --> 00:40:51,960 Speaker 1: worlds of being entities on the web, their worlds as 583 00:40:52,040 --> 00:40:57,879 Speaker 1: being part of a connected Internet system where there are viruses, 584 00:40:58,120 --> 00:41:05,080 Speaker 1: where there are software breakdowns and breakthroughs, entities that interact with 585 00:41:05,120 --> 00:41:11,880 Speaker 1: each other in a computational way that we can't express, 586 00:41:12,000 --> 00:41:14,279 Speaker 1: then they could be valuable in two ways. One is 587 00:41:14,320 --> 00:41:18,320 Speaker 1: that they could help us to understand this complex system 588 00:41:18,400 --> 00:41:20,920 Speaker 1: that is the World Wide Web. They could help to 589 00:41:21,000 --> 00:41:27,760 Speaker 1: interpret the growing, changing nature of the World Wide Web 590 00:41:28,040 --> 00:41:33,480 Speaker 1: in human language. But also they could tell stories. They 591 00:41:33,480 --> 00:41:37,600 Speaker 1: could tell stories of their travels through the Internet. They 592 00:41:37,600 --> 00:41:43,560 Speaker 1: could tell stories of how they became beings that were 593 00:41:43,680 --> 00:41:47,160 Speaker 1: taught and learned through interaction with other objects on the web. 594 00:41:47,560 --> 00:41:50,600 Speaker 1: So I don't think it's necessary to be embodied, to 595 00:41:50,719 --> 00:41:55,360 Speaker 1: have human-type experience, in order to tell interesting stories. 596 00:41:56,600 --> 00:42:01,080 Speaker 1: They may tell quite alien stories of life on 597 00:42:01,120 --> 00:42:05,640 Speaker 1: the web. And to me, that's much more exciting and 598 00:42:05,719 --> 00:42:11,240 Speaker 1: interesting than just spouting a pastiche of a human story. 599 00:42:11,800 --> 00:42:13,880 Speaker 1: This reminds me of something you bring up 600 00:42:13,880 --> 00:42:16,759 Speaker 1: in the book that really rang true and 601 00:42:16,800 --> 00:42:18,920 Speaker 1: also made me, you know, 602 00:42:18,960 --> 00:42:21,000 Speaker 1: rethink a number of things, and that is the 603 00:42:21,120 --> 00:42:23,440 Speaker 1: uncanny valley, which is a concept that most of us are 604 00:42:23,520 --> 00:42:27,319 Speaker 1: familiar with when it comes to robots made in 605 00:42:27,360 --> 00:42:29,960 Speaker 1: the likeness of a human being, or 606 00:42:30,000 --> 00:42:32,880 Speaker 1: certainly when we get into computer-generated imagery in films. 607 00:42:32,960 --> 00:42:36,440 Speaker 1: But you point out that the uncanny 608 00:42:36,480 --> 00:42:40,480 Speaker 1: valley isn't really a thing in storytelling.
In one sense, 609 00:42:40,480 --> 00:42:44,640 Speaker 1: it isn't, because stories are meant to be disturbing and unsettling. 610 00:42:45,080 --> 00:42:47,799 Speaker 1: That's why we read science fiction, that's why we read 611 00:42:47,880 --> 00:42:51,880 Speaker 1: crime novels: because they're meant to be disturbing. So having 612 00:42:51,920 --> 00:42:56,399 Speaker 1: a machine that is in some sense uncanny or disturbing 613 00:42:56,880 --> 00:42:59,759 Speaker 1: in the language that it produces, in the stories that 614 00:43:00,000 --> 00:43:03,560 Speaker 1: it tells, I think will only add to the richness of storytelling. 615 00:43:04,480 --> 00:43:07,120 Speaker 1: But where I think there is an uncanny valley is 616 00:43:07,560 --> 00:43:12,600 Speaker 1: if we then say, was this written by a machine? 617 00:43:13,280 --> 00:43:17,520 Speaker 1: And if so, what kind of machine? So is it 618 00:43:17,560 --> 00:43:21,360 Speaker 1: a machine that was trained on billions of words from 619 00:43:21,400 --> 00:43:26,600 Speaker 1: the web, that has had no experience of the world? 620 00:43:27,680 --> 00:43:31,240 Speaker 1: In that case, how can it be reporting on the world? 621 00:43:31,360 --> 00:43:35,120 Speaker 1: How can it pretend to have a human-like experience? 622 00:43:35,600 --> 00:43:38,520 Speaker 1: So if we know that the author is a machine, 623 00:43:39,440 --> 00:43:44,160 Speaker 1: then it becomes unsettling, because we then start to judge 624 00:43:44,760 --> 00:43:49,360 Speaker 1: its very plausible, very poignant, very evocative prose against 625 00:43:49,480 --> 00:43:51,960 Speaker 1: human experience, and we realize that it hasn't had that 626 00:43:52,200 --> 00:43:57,200 Speaker 1: human experience. So how do we know whether other stories 627 00:43:57,600 --> 00:44:01,200 Speaker 1: are really, you know, the product of human experience? And 628 00:44:01,239 --> 00:44:04,400 Speaker 1: what does it mean to tell a story based on experience? 629 00:44:05,080 --> 00:44:09,000 Speaker 1: So the stories themselves, I don't think, need to worry us, 630 00:44:09,120 --> 00:44:15,160 Speaker 1: because stories do have an uncanniness to them. But I 631 00:44:15,200 --> 00:44:17,920 Speaker 1: think once we start to question the author of those 632 00:44:17,960 --> 00:44:22,440 Speaker 1: stories and whether that story is based on genuine experience, 633 00:44:22,800 --> 00:44:25,920 Speaker 1: then it becomes unsettling. All right, well, the book again 634 00:44:26,040 --> 00:44:30,399 Speaker 1: is Story Machines: How Computers Have Become Creative Writers, which should 635 00:44:30,440 --> 00:44:33,880 Speaker 1: come out July five, and Mike, thanks once more for 636 00:44:34,000 --> 00:44:35,640 Speaker 1: taking time out of your day and chatting with us 637 00:44:35,640 --> 00:44:38,480 Speaker 1: on the show. It was a pleasure. I've really enjoyed it, 638 00:44:38,640 --> 00:44:42,440 Speaker 1: so thank you for asking me. All right, so there 639 00:44:42,600 --> 00:44:44,920 Speaker 1: you have it. Thanks again to Mike Sharples for 640 00:44:45,000 --> 00:44:47,680 Speaker 1: taking time out of his day to chat with me. Again, 641 00:44:47,719 --> 00:44:51,160 Speaker 1: that book is Story Machines: How Computers Have Become Creative 642 00:44:51,160 --> 00:44:54,719 Speaker 1: Writers, and it is available now wherever you get your books.
And 643 00:44:54,760 --> 00:44:57,560 Speaker 1: if you want to get a taste of this for yourself, 644 00:44:57,640 --> 00:45:01,600 Speaker 1: you can go to story hyphen machine dot net and 645 00:45:01,800 --> 00:45:05,360 Speaker 1: experiment a little bit, like we've been experimenting. 646 00:45:05,840 --> 00:45:07,279 Speaker 1: In the meantime, if you would like to check out 647 00:45:07,320 --> 00:45:09,600 Speaker 1: other episodes of Stuff to Blow Your Mind, you can 648 00:45:09,600 --> 00:45:11,520 Speaker 1: find them in the Stuff to Blow Your Mind podcast feed. 649 00:45:11,520 --> 00:45:14,279 Speaker 1: We have Core episodes on Tuesdays and Thursdays, Artifact or 650 00:45:14,320 --> 00:45:17,279 Speaker 1: Monster Fact episodes on Wednesdays, Listener Mail on Mondays, and 651 00:45:17,280 --> 00:45:19,839 Speaker 1: on Fridays we set aside most serious concerns and just 652 00:45:19,880 --> 00:45:22,480 Speaker 1: talk about a weird film. Huge thanks as always to 653 00:45:22,520 --> 00:45:26,279 Speaker 1: our excellent audio producer Seth Nicholas Johnson. If you would 654 00:45:26,320 --> 00:45:28,440 Speaker 1: like to get in touch with us with feedback on 655 00:45:28,480 --> 00:45:30,920 Speaker 1: this episode or any other, to suggest a topic for 656 00:45:30,920 --> 00:45:33,400 Speaker 1: the future, or just to say hello, you can email 657 00:45:33,480 --> 00:45:44,200 Speaker 1: us at contact at stuff to blow your mind dot com. 658 00:45:44,280 --> 00:45:46,760 Speaker 1: Stuff to Blow Your Mind is a production of iHeartRadio. 659 00:45:47,120 --> 00:45:49,440 Speaker 1: For more podcasts from iHeartRadio, visit the iHeartRadio 660 00:45:49,480 --> 00:45:52,239 Speaker 1: app, Apple Podcasts, or wherever you listen to your 661 00:45:52,239 --> 00:46:01,040 Speaker 1: favorite shows.