Speaker 1: Bloomberg Audio Studios. Podcasts, radio, news.

Speaker 2: Earlier this month, an AI model called Manus went viral for its apparent ability to act more independently than AI chatbots. The development of these so-called artificial intelligence agents has raised concerns that they will erode humans' ability to think. But Reid Hoffman, the LinkedIn co-founder, Microsoft board member, and Greylock partner, thinks the opposite. He's just published a book in which he basically argues that as AI systems gain greater abilities, they will enhance human agency, hence the book's title, Superagency: What Could Possibly Go Right with Our AI Future. And Reid is here with me. Thank you so much, Reid Hoffman, for joining us. I mean, this is a breath of fresh air, because there is a lot of concern and a lot of worry that AI takes over, that the computers will be in charge, and we'll basically stop using our critical thinking.

Speaker 1: Yes, but actually, in fact, if anyone plays with AI today, it is the most amazing education technology we have created in human history. If you want to learn anything. I use it to learn everything from quantum mechanics to, huh, I wonder what cooking sous vide in this way looks like. It's everything.

Speaker 2: But I guess the concern is that, you know, especially people that go into a first-time job, or students, or, like, you know, college kids don't use their critical thinking anymore. Because if you just go into a chatbot and say, write me a song with this, this, and this, it does it for you.

Speaker 1: Well, it definitely can do a bunch of things for you, but that can help you elevate your game, right?
So it's a little bit like if you were just copying Wikipedia and handing it in as your essay. Sure, you could do that, but actually, in fact, you should use it to inspire you, to make you think better. To say, hey, like, for example, when I was writing Superagency, I would put in sections and say, how would a history-of-technology specialist critique what I've said? And then I understand it, and I can decide whether or not I change my edits, and therefore the book gets better.

Speaker 2: So, Reid, on reduced cognitive capabilities, right? This is a big concern, that we stop thinking, that we think less, that we think differently.

Speaker 1: Well, I think, just like all technology, you can approach it being lazy. And so if you just say, okay, I'm going to outsource it, just like, for example, I'm going to say whatever the first search result on Google is, that's the answer. And if you do it that way, then of course that doesn't help you extend. But if you do anything in terms of having it be a dialogue with you, having it extend your capabilities, asking a question, getting an answer, asking another question, then it greatly amplifies your capabilities.

Speaker 2: How do you get rid of the biases? Because if you have too many biases, then that, of course, skews democracy.

Speaker 1: Well, so all of the major AI labs are trying to get it as, kind of, call it unbiased as possible. Now, within human perspective and human knowledge, there's always something of a bias. We're always learning. Like, if we look at human beings fifty years ago, now that we are fifty years past, and we say, oh, they were biased about this, I'm certain humans fifty years from now will be looking at us the same way. So it's an ongoing process with us as well as the technology.

Speaker 2: Is there anything that worries you about AI?

Speaker 1: The primary thing that worries me is, I call AI the cognitive Industrial Revolution.
It's both for the upside, which is this whole society we live in: middle class, education, medicine all comes from the Industrial Revolution. That same amplification is coming, but the transitions are difficult. So the thing that primarily worries me is to say, look, we're going to have to navigate this challenging transition, just like the Industrial Revolution was a challenging transition. But that's how we have our children and our future generations be prosperous and have amazing societies. And so that's the challenge we need to rise to.

Speaker 2: So what's the right way of either designing AI or designing safeguards for AI?

Speaker 1: Yeah, well, part of it, there's kind of a two-part audience for Superagency. One is the people who are AI-fearful or concerned, to help them become AI-curious. But it's also for technologists, which is: design for human agency, design for increasing human agency. That should be your design principle and fundamental, and the book is also, I hope, helpful for them.

Speaker 2: So, Reid, this is basically putting the human, you know, at the center still of everything. So how do they fit into the next decade?

Speaker 1: Yeah, well, AI, I think, can be amplification intelligence, not just artificial intelligence. And that amplification is the superpowers that we get. And part of superagency is, if you get a superpower, that helps me too. That's how we have superagency together, as long as...

Speaker 2: It's democratic and everybody has it? Or is that a question for a second phase?

Speaker 1: Well, I think one of the good things about it, and that's part of the reason why the first chapter is about humanity entering the chat, with ChatGPT: when we build technologies for hundreds of millions and billions of people, that's broadly inclusive. So that your Uber driver has the same iPhone that Tim Cook has. That's the kind of inclusion that we're targeting.
Speaker 2: Reid, I mean, I guess evolution is not necessarily progress, full stop. So how do you make sure that this means progress for the majority of humans?

Speaker 1: Well, so I think, look, I think as we iterate and we participate, we make progress. And, for example, even though you say, well, we have a whole bunch of cars and that creates climate change, the cars also create our industrial society. And by the way, the way that we tackle climate change is we add emissions standards, and we add, you know, new kinds of clean energy, and we do EVs. And so, you know, I tend to be very, you know, as we do iterative deployment and as we bring humanity into the loop, I tend to think we do make progress. Now, I think, again, you make better progress by having the right kind of design principles, by accepting criticism, by talking about it. So, you know, I describe myself as a bloomer, which is not that technology is just great; it's technology engaging with people that's great.

Speaker 2: But it also depends on the people in charge. What do you think of Sam Altman's performance so far in leading OpenAI?

Speaker 1: Well, so I think, look, I think Sam's great contribution to humanity will be OpenAI. And that's while having done a number of amazing things before and made amazing investments, like in fusion and all the rest. And I think that his ability to think very big, and to have bet very hard on this, you know, a little bit of a technological thesis, that scale compute and scale learning systems are what matters, is why OpenAI has brought this current revolution to us. And it's: these machines learn, and they learn things that we help them learn and help teach them.

Speaker 2: Is there someone... I know you've also had your differences with Elon Musk. Who's the person in the space that you, if not admire, listen to the most?

Speaker 1: Well, Sam Altman is definitely one of them.
Kevin Scott at Microsoft is another, Dario Amodei at Anthropic is another, James Manyika at Google is another. I mean, I think part of the thing that's very important about making AI for humanity is people who listen to others and talk to others and accept criticism. And I think that's one of the things that all of these people are very good at.

Speaker 2: Do you need to regulate it, or is it something that you need to see how it runs and then think about regulating afterwards?

Speaker 1: So I think what you do is you start with the absolute minimum regulation you could do for the things that could be really bad. Not for, oh, look, it might have a biased picture or might have a biased statement; like, we can iterate, we can fix those as we're going.

Speaker 2: Really bad is, what, people taking over planes to crash them, things like that?

Speaker 1: Like that, you know, cybercrime, et cetera. Regulate for that, right, and then do iterative deployment. And by the way, even with iterative deployment, you eventually get to fairer regulations. So, for example, if you tried to make everything perfect with cars before you put them on the road, we'd never have cars. So you put them on the road and you go, oh, this needs bumpers, this needs windshield wipers, this needs... And occasionally, like, the market doesn't want seat belts, the car manufacturers don't want seat belts, and then of course the regulators come in and say, oh no, no, seat belts are good, we're going to add those.

Speaker 2: I mean, if you regulate for things, I mean, terrorism is bad actors, bad state actors. So, I mean, how do you regulate it? Unless, to protect yourself, it's basically finding the technology that blocks them.
Speaker 1: Well, I think the regulation is: if you're releasing the technology to the general public, which could be to the bad actors as well, you're doing red teaming and safety, you're putting in the right security measures to make sure that you're not bleeding the technology to rogue states, terrorists, et cetera, and that you have a safety plan. That you go, okay, the technology I'm building, if it does leak or anything else, why will it still be safe? And how do we continue to have the technology that makes anything that's in the wild as safe as possible?

Speaker 2: On Elon Musk, do you think he has too much power, being so close to the president?

Speaker 1: Well, so, look, I think he's a celebrated entrepreneur. But I think that governments are not companies. Like, for example, risking a company, to say, oh, our ten rockets blow up, who cares, it doesn't matter. If the financial system of a country blows up, that's the Cultural Revolution, that's terrible. So you actually have to say, we take less risk here, even at the price of some inefficiency, because it's more important for us to not have things blow up.

Speaker 2: Do you worry that things are going too quickly with the Trump administration?

Speaker 1: Well, I worry that very bad risks are being taken. Speed is not a problem; risks are a problem. And, you know, for example, it's like, well, we're just going to fire a whole bunch of people. Oh, oops, we fired a whole bunch of nuclear safety inspectors. Like, that's the kind of thing, that's taking risks that are unwarranted.

Speaker 2: Reid, I also want to talk to you about China, because DeepSeek kind of got everyone at the edge of their seat. Do you have a good understanding about where China is on AI?

Speaker 1: I have a reasonable understanding. I do a fair amount of talking to various people in China in order to make sure.
Part of the thing is that when, last year, I was going around saying there was actually an economic race in AI between the West and China, people were like, oh no, you're overblowing that because you just simply don't want to be regulated. And I think with DeepSeek and everything else, we see that that race is there. And the Chinese government has said that they want to be, you know, AI leaders, like, leading the world by twenty thirty. I think the race is on. I think it's very important for us and our industries to actually, in fact, be winning.

Speaker 2: Is this the new arms race?

Speaker 1: Well, I don't call it an arms race, because it's primarily an economic race. There are arms components to it, but yes, it is the economic race.

Speaker 2: Reid, thank you so much for joining us. That was Reid Hoffman, LinkedIn co-founder, Greylock partner, and of course author of Superagency. It's a good book. It's well written, and it's to the point.