Speaker 1: What do you do when something as ordinary as dinner turns into chaos? For Morgan Brown, Vice President of Product Management for AI at Dropbox, it was trying to work out the carbs in his six-year-old son's meals. Endless Google searches, stress at the dinner table. So he built an app, with no coding background, using AI to solve it. That same problem-solving mindset runs through his work at Dropbox, where Morgan leads the company's AI products. He's using tools like ChatGPT and Claude as sparring partners, not just search engines: automating the grunt work, refining strategy documents, even filtering AI research papers while he has his morning coffee. In this conversation, you'll learn how to spot your hidden time sinks, how to design prompts that actually work, and how to use AI as a genuine thought partner. And the result: more bandwidth for the work and the life that really matters. Welcome to How I Work, a show about habits, rituals, and strategies for optimizing your day. I'm your host, Dr Amantha Imber.
I want to start by talking about carb scan, which I read about in your post on LinkedIn as something that you're doing on the side. Can you tell me how this came about? Because you're busy, you've got an important job at Dropbox to do.
Speaker 2: I'm actually a type one diabetic, so it's really important to manage my blood sugar. I have, you know, an insulin pump and blood glucose monitor and have been dealing with that most of my adult life. But my son, who is six, was also recently diagnosed with type one diabetes. And really, for little kids, it adds a ton of complexity to your day, because their blood sugar has to be monitored very closely. And what I found was pretty much every meal turned into frantically searching on Google for how many carbs are in chicken nuggets or mac and cheese and trying to add it up, and it really interferes with, hey, everyone's trying to sit down and eat. And so I built a little app that allows me and my family, or anyone that wants to try it, to take a picture of the food.
I take that picture, I send it to OpenAI with a really detailed prompt, and then reference CalorieKing, which is a kind of canonical source of carbohydrate and other nutritional data on the web, to give me carbohydrate estimates back for his meals. Rather than three or four different Google searches across a bunch of sites, I get a very quick answer back. That's really just taken a ton of friction out of, three times a day, snacks included, just kind of snapping photos. And I built it using vibe coding AI tools. You know, I don't know a lot of code myself, and so basically used AI to build new AI to solve the problem.
Speaker 3: I love that.
Speaker 1: I want to talk about the tools, and I love that you've mentioned that you're not a coding guy, because people might be listening going, well, that's easy for a tech guy to say. So vibe coding, and it might be worth just defining what vibe coding is for those that don't know. And then I want to ask which are the tools you used.
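[Editor's note: the photo-to-estimate flow Morgan describes can be sketched roughly as below. The prompt wording, function name, and the commented-out API call are illustrative assumptions, not the actual app's code.]

```python
# Rough sketch of the carb-scan flow: a detailed prompt plus a food photo is
# sent to a vision-capable model, which returns a carbohydrate estimate.
# Prompt wording and names here are assumptions, not the real CarbScan code.

def build_carb_prompt(meal_context: str = "a child's dinner plate") -> str:
    """Assemble the detailed instruction prompt sent along with the photo."""
    return (
        "You are a diabetes nutrition assistant. Look at the attached photo of "
        f"{meal_context} and identify each food item and its portion size. "
        "Cross-check carbohydrate values against a canonical nutrition source "
        "such as CalorieKing. Return, per item: name, estimated grams of "
        "carbohydrate, and a low/high range. Finish with a total carbohydrate "
        "estimate for the whole meal. Be concise and state your uncertainty "
        "clearly, since this informs insulin dosing."
    )

# A real call might look like this (requires the openai package and an API key):
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(
#     model="gpt-4o",
#     messages=[{"role": "user", "content": [
#         {"type": "text", "text": build_carb_prompt()},
#         {"type": "image_url", "image_url": {"url": photo_url}},
#     ]}],
# )
```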
Speaker 2: The idea behind vibe coding is to use an LLM or an AI agent to describe the type of software you want to build, and then have it build it for you. And that can be how it looks, how it works, and really you just use natural language to describe it. And I think the reason they call it vibe coding is that you don't have to be so strict with kind of precise requirements and precise coding and, you know, making sure there's no typos, and so it's really a very fluid creative process. And I used a tool called Replit, which is a great tool for building web applications, and built everything from the front-end design, where I just explained what I wanted it to look like, to the prompts that get sent to OpenAI, to the way that it does the carb estimates and all of that. So yeah, I've been working on it for about a month and a half here and there, and it's been fun to see it kind of come together.
Speaker 3: That's amazing.
Speaker 1: Did you use any other tools? Like, were you using ChatGPT to help with how you prompted Replit? Because I must...
I've played around with Replit a little bit. I built an app that had so many bugs in it, and then I just got sick of it and abandoned that little project. But was it just Replit, or were you using other tools to help build that?
Speaker 2: You know, for carb scan, I definitely used ChatGPT as kind of my software thought partner. So I would go to ChatGPT and say, hey, I want to write a prompt for an LLM to give me a really accurate carbohydrate estimate from a picture of food. Could you give me some suggestions on how to write a really great prompt? And it gave me a bunch of detailed instructions. I kind of took that, worked on it, then I gave that to Replit. And actually, now my current workflow is: I have an idea for a feature for carb scan. I go to ChatGPT, and, I now have a project within ChatGPT that is just carb scan, so it has a whole bunch of historical knowledge there. And I say, I have this new feature, I'd like it to do X, Y, and Z.
Can you help me write a really clear product requirement and specification doc to give to Replit to help me get this feature built? And it will give me that prompt, and I take that and then I paste that into Replit, and I ask Replit to review the requirements and then come back with a plan to implement it, and then I approve the plan and it starts to build.
Speaker 3: So good.
Speaker 1: Now, of course, building carbohydrate scanning apps is not your day job; working at Dropbox is. Can you tell me, in a nutshell, what do you do at Dropbox?
Speaker 2: My day job is I'm the Vice President of Product Management for AI at Dropbox, where I'm responsible for the AI products that we build at Dropbox to help people be more effective at their job and with their work, and so that means I'm primarily responsible for Dropbox Dash.
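[Editor's note: the spec-then-plan loop Morgan describes — ask ChatGPT for a requirements doc, then ask Replit to review it and propose a plan before building — can be sketched as two message templates. The wording is an illustrative assumption.]

```python
# Sketch of the two-step vibe-coding workflow: ChatGPT drafts a PRD/spec from a
# feature idea, and Replit's agent is asked to plan before building.
# Message wording below is an assumption, not Morgan's exact prompts.

def spec_request(feature_idea: str) -> str:
    """Message for the ChatGPT 'carb scan' project, which holds prior context."""
    return (
        f"I have a new feature idea: {feature_idea}. "
        "Can you help me write a really clear product requirement and "
        "specification doc that I can give to Replit to get this built?"
    )

def plan_request(spec: str) -> str:
    """Message for Replit's agent: review the spec, plan, wait for approval."""
    return (
        "Review the following requirements and come back with an "
        "implementation plan before writing any code. Wait for my approval.\n\n"
        + spec
    )
```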
Dropbox Dash is an AI-powered search, knowledge management, and work assistant that helps teams be more effective, get busy work out of the way, and hopefully allows them to do more of the empowering, meaningful work that we all set out to do every day, which can often get buried under a bunch of project management.
Speaker 1: I would love to know about some of your AI workflows, I guess both in terms of how you use AI to help with some of the grunt work, but also how you use it to help augment your thinking as well. So maybe let's start by talking about the grunt work, the shallow work. Take me through some of the ways that you're using AI. I'd love to get into real specifics as to what you've designed for you.
Speaker 2: One of the biggest challenges with my work is that I'm constantly switching between all sorts of different contexts and problems to solve.
So it could be a product feature over here, thinking about how to launch a new feature over here, or communicating how a feature works over here, looking at the roadmap, reviewing something to ship out to customers, working with our marketing team. So I'm constantly switching contexts, and over the course of a week I'll be in dozens of meetings. And so one of the things I do is I use AI to recap all of the things that I've promised to follow up on for everyone, all of the things that other people have promised to get back to me on, and big decisions or milestones or risks to watch out for. So Dropbox uses Zoom, which is kind of the video meeting software. It records the meetings pretty much by default for the most part, so I am able to take those transcripts and turn those into summaries, you know, at the start of every week. So it helps me reset context and stay kind of on top of that. So that's one big use case, and that's really helped with, you know, I used to write it all down in chicken scratch.
My handwriting is terrible, and I'd try to go back through and remember who I promised what to. And so now that's very seamless and just happens, you know, pretty much automatically for me.
Speaker 4: So that's one big one.
Speaker 3: Yeah, cool. What are a couple of others?
Speaker 2: So another one of the things that I do all the time is I'm reading documents constantly. So these product requirement documents or specifications or strategy briefs or memos about the go-to-market approach. And I found that over time, I have the same questions over and over. Who are we building this for? What's the job to be done? How do we know that this is the right thing to build? Why does what we're proposing make sense relative to everything else we know? And so I basically built a prompt for Dash, which is our internal AI tool, to help me pre-read those documents and evaluate those documents along the same sets of questions that I ask all the time. So I've got, you know, twelve questions that I ask all the time.
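[Editor's note: a document pre-read prompt like the one Morgan describes could be sketched as below. He mentions twelve recurring questions; only the four quoted in the conversation are included here, and the surrounding wording is an assumption.]

```python
# Sketch of a pre-read prompt built from a fixed list of recurring review
# questions. Only the four questions quoted in the interview are listed; the
# real Dash prompt has twelve, and the framing text here is an assumption.

REVIEW_QUESTIONS = [
    "Who are we building this for?",
    "What's the job to be done?",
    "How do we know that this is the right thing to build?",
    "Why does what we're proposing make sense relative to everything else we know?",
]

def build_preread_prompt(document_text: str) -> str:
    """Combine the checklist and the document into one evaluation prompt."""
    numbered = "\n".join(f"{i}. {q}" for i, q in enumerate(REVIEW_QUESTIONS, 1))
    return (
        "Evaluate the document below against each question. For each one, "
        "quote the relevant passage if it is answered, or flag it as a gap "
        "worth probing deeper.\n\n"
        f"Questions:\n{numbered}\n\nDocument:\n{document_text}"
    )
```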
I run it through the prompt, and it helps me surface areas to look out for in the document, areas that I might want to probe a bit deeper. And I found it so effective that I actually just emailed, or messaged, the prompt to my entire team and said, hey, I'm using this myself. If you want to use it before you send something to me, you know, it might save us a few cycles.
Speaker 4: So I just sent it to them. Now they use it before they send it to me.
Speaker 2: And so now I'm kind of taking that the next step and really trying to think about, hey, based on this strategy or this approach, you know, what are some second-order effects that I may not be aware of? What are some ways to potentially double down on this, or differentiate and make this an even stronger proposal? What are some risks that may be hidden here that I'm not considering? How might my CTO react to this? How might our legal team react to this? How might our go-to-market team react to it?
So really trying to basically roleplay and game theory out all of these different inputs that would typically take weeks to get.
Speaker 1: And when you're constructing a prompt like that, are you typically using what is called meta prompting, where, you know, what you described earlier, where you're working with the AI to help improve the prompt or even write the prompt? Like, do you have a process when you're, I guess, creating one of those fundamental prompts in your prompt library, and you want to make it really good?
Speaker 3: How do you, I guess, build a great prompt?
Speaker 2: Yeah, absolutely. And I think that is really a skill that knowledge workers can turn into a superpower now. And so I think it's thinking about what is the optimal amount of context you can give the LLM to give you the most relevant and best answer possible. And so in my prompt, that's basically what I'm trying to do. Now, you can't give it too much information or it kind of gets overwhelmed.
But the more that you can give it that is highly relevant, the better the answers you get. And one of the main principles that I use is that LLMs read things in context of everything they've already read. So the initial pieces of information you put into it are going to influence how it evaluates the rest of it. And so, for example, one of the tips that you often hear, and I actually have found to work really effectively, is that when you're talking about a domain specifically, it's important to state up front that this is the domain you're talking about. So, for example, if I'm working on positioning, I'm interested in how we might position a Dash feature, I'll say: you are April Dunford, author of this positioning book, or you are Al Ries and Jack Trout, authors of Positioning: The Battle for Your Mind. Help me position this product feature. These are its capabilities. This is how I think it could work. I think these are the challenges, this is what's differentiated about it. These are who it might compete with, and these are the ways we would like it to connect to the rest of our product suite.
And then, you know, kind of structuring it that way, where it's kind of: here's some context, then there's a very specific task or set of tasks that you want to give it, and then you can give it detailed directions around how you want it to accomplish those tasks. So, for example, sometimes the LLMs can be very verbose, but I like very terse, very pragmatic answers. So I always tell it: no fluff, no hyperbole; you're not going to hurt my feelings if you tell me it's a bad idea. You know, just really trying to kind of get it honed in on the language, or the thought process, that I kind of want it to adopt. And so context, task, output, and then detailed instructions are usually the format that I use to write these.
Speaker 1: Still staying on the topic of, I guess, reducing or eliminating some of the grunt work through AI, I'd love to know, more on the automation or agentic AI side, what some of the, I guess, other tasks that you've been able to automate in interesting ways would be.
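[Editor's note: the context, task, output, detailed instructions format Morgan describes above can be expressed as a small template helper. The section headings and the sample values are illustrative assumptions.]

```python
# The four-part prompt format from the interview (context, task, output,
# detailed instructions) as a tiny template function. Headings and the example
# values are assumptions chosen to echo Morgan's positioning example.

def build_prompt(context: str, task: str, output: str, instructions: str) -> str:
    """Assemble a prompt in the context / task / output / instructions order."""
    return (
        f"## Context\n{context}\n\n"
        f"## Task\n{task}\n\n"
        f"## Output\n{output}\n\n"
        f"## Detailed instructions\n{instructions}"
    )

positioning_prompt = build_prompt(
    context="You are Al Ries and Jack Trout. We are positioning a new Dash feature.",
    task="Help me position this product feature against the listed competitors.",
    output="A terse, pragmatic positioning statement plus three risks.",
    instructions="No fluff, no hyperbole. You won't hurt my feelings by calling it a bad idea.",
)
```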
Speaker 2: I've really done a lot of automation around trying to understand what's changing around me. So I have a daily workflow that reviews all new kind of Substack newsletters, important tweets across a large domain of AI and ML X accounts. It reviews arXiv, which is the scientific paper repository, for new ML and artificial intelligence papers. It scans the whole landscape, YouTube, Spotify podcasts, and basically takes all of the day's information about AI and ML, reviews it all, categorizes it into world-changing, interesting, incremental, and noise, and then summarizes the top three quote-unquote world-changing potentials, and then summarizes it for me in the context of my role at Dash, where it kind of talks about, hey, things you might consider as the head of product for Dash. So, for example, Anthropic the other day just announced that Claude can now handle a million-token context window, which effectively means that for many reasonable-size software codebases, it can reason about the entire codebase, which is a pretty big and important milestone for software development. And the other one is I automate my personal email.
I can't imagine your inbox, Amantha, but mine, it gets hammered all the time. And so I built a little agent using Claude Code to read my personal Gmail every day, categorize it into newsletters, spam, priority emails, summarize the newsletters for me, flag the messages that I want to respond to, and draft responses for those, not for the newsletters, for those emails. So really trying to kind of manage my inbound email automatically as well.
Speaker 3: Wow, okay, I love both those examples.
Speaker 1: I want to dig a bit deeper into how you went about creating this automation that summarizes all the AI news of the day.
Speaker 3: I mean, that sounds so powerful. Can I buy that from you?
Speaker 1: Morgan?
Speaker 3: Are you selling it?
Speaker 4: You can link it in the show notes.
Speaker 3: Wow, amazing, handed over. Yeah, wow. Talk me through the process of how you put that together.
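[Editor's note: the personal-email agent Morgan describes (categorize into newsletters, spam, and priority, then decide which messages get drafted replies) could be caricatured with a rule-based triage like the one below. The rules and field names are assumptions; the real agent uses Claude Code against Gmail, with an LLM doing the classification.]

```python
# Highly simplified, rule-based stand-in for the Claude Code email agent:
# bucket each message into newsletter / spam / priority. In the real workflow
# an LLM classifies, newsletters get summarized, and priority mail gets a
# drafted reply. All rules and dict keys here are illustrative assumptions.

def categorize(message: dict) -> str:
    """Crude keyword rules standing in for an LLM classification step."""
    subject = message.get("subject", "").lower()
    if "unsubscribe" in message.get("body", "").lower():
        return "newsletter"
    if "winner" in subject or "prize" in subject:
        return "spam"
    return "priority"

def triage(inbox: list[dict]) -> dict:
    """Group an inbox into the three buckets the agent works from."""
    buckets = {"newsletter": [], "spam": [], "priority": []}
    for msg in inbox:
        buckets[categorize(msg)].append(msg)
    return buckets
```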
Speaker 2: First of all, I realized that there were a few key sources of information that I was constantly referring to personally. So, you know, whenever I had a free moment, I would go to arXiv and kind of browse the actual published papers, because that's the real source of truth on the cutting edge. I would see a lot of popular posts on X about kind of new research breakthroughs. I follow some of the key people, like the Sam Altmans of the world and the frontier labs, and then there are also several great Substack newsletters around AI that I read as much as I can. But I just realized there was no way for me to stay on top of that volume consistently, and I just started with a prompt. I told ChatGPT, I'd like to create a prompt to scan the entire AI/ML space for important signals around research and development and industry news that pertains to my role at Dropbox. And I said, for X, I want to look across these accounts and accounts like them. So I kind of gave it a seed list of accounts, but asked it to expand where it could.
I gave it specific newsletters that I read on Substack, but also again asked it to consider adjacent ones. And then I gave it specific categories on arXiv where those papers are housed, and asked it to consider relevant ones there. And then I said, I'd like to really filter for high signal; I don't want this to be a laundry list of everything that happened, that wouldn't actually help me consume it any better. So I said, help me come up with a scale to rank the innovations and the news that's coming out in the day, and kind of categorize things as can't-miss, these are like foundational, important shifts, incremental but important, and then give me some synthesis around, hey, when you put these signals together, what does it mean for my role?
Speaker 2: So I fed that all in, basically just talking to ChatGPT. I like to use the audio input to kind of just stream-of-consciousness my thoughts into it. Once I did that, I asked it to kind of structure it into a prompt. I reviewed the prompt; some of the things I didn't like.
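[Editor's note: the ranking scale Morgan asks for — bucket each day's items and surface only the top "world-changing" ones — can be sketched as below. The numeric thresholds are assumptions; in Morgan's setup the LLM itself assigns the categories.]

```python
# Sketch of the signal-ranking step in the daily scan: each item carries a
# score, is bucketed into world-changing / interesting / incremental / noise,
# and only the top world-changing items make the digest. Thresholds are
# illustrative assumptions; the real workflow has an LLM do the categorizing.

BUCKETS = [(9, "world-changing"), (6, "interesting"), (3, "incremental")]

def bucket(score: int) -> str:
    """Map a 0-10 relevance score onto the four categories."""
    for threshold, name in BUCKETS:
        if score >= threshold:
            return name
    return "noise"

def daily_digest(items: list[tuple[str, int]], top_n: int = 3) -> list[str]:
    """Return the titles of the top-scoring world-changing items."""
    world_changing = [(t, s) for t, s in items if bucket(s) == "world-changing"]
    world_changing.sort(key=lambda pair: pair[1], reverse=True)
    return [title for title, _ in world_changing[:top_n]]
```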
I always try to go back and forth with it. I think you should kind of refine it, much like you would with the copy for a website or the script for something. And then once I got the prompt to a pretty good spot, I said, okay, let's run that for today and give me the output. And I got the output and I was like, oh, this is a little too long, it could be a little shorter. It looks like we're underrepresenting, we're missing YouTube, let's try to add YouTube in, so on and so forth. And then I finally got it dialed in, and then I said, great, set this as a daily task, every day, five a.m. Pacific. It runs, and then while I'm having coffee I just kind of read through it.
Speaker 1: Wow. Yes, I would love it if you could share that, and I will pop that in the show notes. Let's talk about how you use AI to augment your thinking. I recently, I don't know if you know Bob Johansen. He's a
He's a 347 00:18:10,880 --> 00:18:13,920 Speaker 1: futurist over in Silicon Valley and I interviewed him a 348 00:18:13,960 --> 00:18:17,560 Speaker 1: few weeks ago and he was talking about how, you know, 349 00:18:17,560 --> 00:18:20,000 Speaker 1: as many people have said, artificial intelligence is such a 350 00:18:20,000 --> 00:18:22,000 Speaker 1: bad brand name, and he said, if he was going 351 00:18:22,040 --> 00:18:25,080 Speaker 1: to name it, he would call it augmented intelligence. And 352 00:18:25,920 --> 00:18:27,840 Speaker 1: it's stuck with me, and I feel like I use 353 00:18:27,880 --> 00:18:30,520 Speaker 1: it a lot to augment my own thinking. But I 354 00:18:30,520 --> 00:18:32,960 Speaker 1: feel like most people talk about, you know, the efficiency 355 00:18:32,960 --> 00:18:35,119 Speaker 1: gains and the productivity gains. But I'd love to know, 356 00:18:35,200 --> 00:18:38,160 Speaker 1: for you, Morgan, what are the different ways or use 357 00:18:38,200 --> 00:18:42,280 Speaker 1: cases that you think about and use AI for to 358 00:18:42,320 --> 00:18:43,240 Speaker 1: improve your thinking? 359 00:18:43,560 --> 00:18:46,720 Speaker 2: I think a lot of people look to AI for answers, 360 00:18:46,920 --> 00:18:49,280 Speaker 2: but it's not really the best search engine. You know, 361 00:18:49,280 --> 00:18:52,280 Speaker 2: it can make things up, it can have outdated information. 362 00:18:52,800 --> 00:18:54,760 Speaker 2: I think it's much better to use it as kind 363 00:18:54,800 --> 00:18:57,280 Speaker 2: of a thought partner or sparring partner. And so I 364 00:18:57,280 --> 00:18:59,760 Speaker 2: think a lot of the techniques that you kind of 365 00:18:59,840 --> 00:19:03,560 Speaker 2: learn in business and brainstorming and communication and strategy 366 00:19:04,200 --> 00:19:07,760 Speaker 2: work really well with LLMs.
So the first thing I 367 00:19:07,800 --> 00:19:10,280 Speaker 2: try to do is again go back to, like, what 368 00:19:10,359 --> 00:19:13,719 Speaker 2: kind of context does the LLM need to know in 369 00:19:13,840 --> 00:19:16,520 Speaker 2: order to be a good thought partner for me? You know, 370 00:19:16,560 --> 00:19:20,880 Speaker 2: like you wouldn't go ask someone who works in sporting 371 00:19:20,880 --> 00:19:23,080 Speaker 2: goods about a medical condition, you know, and you're not 372 00:19:23,080 --> 00:19:25,359 Speaker 2: going to ask your doctor about how to score a 373 00:19:25,400 --> 00:19:27,720 Speaker 2: goal in soccer, unless they used to play at a 374 00:19:27,800 --> 00:19:30,280 Speaker 2: university or something. So, really, like, what does the 375 00:19:30,400 --> 00:19:34,400 Speaker 2: LLM need to know to be successful? And so with Dash, 376 00:19:34,480 --> 00:19:37,280 Speaker 2: we have this concept called stacks, which are collections of 377 00:19:37,320 --> 00:19:40,199 Speaker 2: documents that you can create around a given topic, and 378 00:19:40,240 --> 00:19:45,120 Speaker 2: then Dash chat uses that collection of documents to augment 379 00:19:45,240 --> 00:19:48,800 Speaker 2: its context and understanding of the problem space. But I 380 00:19:48,920 --> 00:19:52,720 Speaker 2: generalize this to ChatGPT or Anthropic or any of 381 00:19:53,119 --> 00:19:56,040 Speaker 2: the other models, where I try to say, okay, what 382 00:19:56,080 --> 00:19:57,919 Speaker 2: does it need to know about me and the problem 383 00:19:57,960 --> 00:20:00,320 Speaker 2: I'm trying to solve first to be highly relevant, and 384 00:20:00,359 --> 00:20:03,679 Speaker 2: then go create that information. So, for example, one of 385 00:20:03,720 --> 00:20:06,000 Speaker 2: the things that I started early in my 386 00:20:06,080 --> 00:20:10,439 Speaker 2: career was Morgan's Operating Manual.
So anyone that was 387 00:20:10,520 --> 00:20:12,880 Speaker 2: kind of going to work with me or join my team. 388 00:20:13,119 --> 00:20:15,639 Speaker 2: It became really important when I became a manager to 389 00:20:15,760 --> 00:20:20,000 Speaker 2: help people understand, you know, here's what Morgan prioritizes, here's 390 00:20:20,080 --> 00:20:23,480 Speaker 2: how he likes to communicate, here's the things that add friction. 391 00:20:23,840 --> 00:20:25,840 Speaker 2: It's like my operating manual that you would kind of 392 00:20:25,960 --> 00:20:28,879 Speaker 2: take as a quick-start guide for working with Morgan. Working 393 00:20:28,920 --> 00:20:31,280 Speaker 2: with an LLM, that's one of the first documents I 394 00:20:31,320 --> 00:20:34,240 Speaker 2: give it, my operating principles, and it has things 395 00:20:34,280 --> 00:20:37,359 Speaker 2: like no fluff; what you don't know, say you don't 396 00:20:37,400 --> 00:20:39,800 Speaker 2: know; no politics, you know, that type of thing. 397 00:20:40,119 --> 00:20:40,679 Speaker 4: And then I've 398 00:20:40,520 --> 00:20:43,880 Speaker 2: started to build out those contextual documents. So, for example, 399 00:20:44,119 --> 00:20:47,240 Speaker 2: I've really been working lately on what are my core 400 00:20:47,400 --> 00:20:50,600 Speaker 2: principles as a person, as a father, as a husband, 401 00:20:51,520 --> 00:20:53,880 Speaker 2: what are the things that I care about in life generally, 402 00:20:54,560 --> 00:20:57,639 Speaker 2: and using an LLM to codify those, and then I 403 00:20:57,640 --> 00:21:00,200 Speaker 2: can give it even domain-specific things.
So for example, 404 00:21:00,560 --> 00:21:03,520 Speaker 2: with carb scan, my priorities with carb scan are like, 405 00:21:03,760 --> 00:21:08,159 Speaker 2: accuracy is the most important thing, no false confidence, delight 406 00:21:08,240 --> 00:21:11,879 Speaker 2: and speed are, you know, essential. And so now it 407 00:21:12,000 --> 00:21:14,479 Speaker 2: has a working set of information to be a really 408 00:21:14,520 --> 00:21:16,920 Speaker 2: good thought partner. Once I do that, then I try 409 00:21:16,960 --> 00:21:19,320 Speaker 2: to use the techniques that we use to kind of 410 00:21:19,320 --> 00:21:21,480 Speaker 2: come up with good ideas all the time. So I'll say, 411 00:21:21,720 --> 00:21:24,280 Speaker 2: instead of asking AI for an answer about something, I'll say, hey, 412 00:21:24,359 --> 00:21:26,680 Speaker 2: I have this problem I'm trying to solve. Let's do 413 00:21:26,840 --> 00:21:30,000 Speaker 2: some divergent thinking together. I want to be generative. Help 414 00:21:30,040 --> 00:21:32,480 Speaker 2: me be generative about the ideas here, so lead me 415 00:21:32,520 --> 00:21:36,000 Speaker 2: through a series of questions to help diverge and push 416 00:21:36,040 --> 00:21:39,080 Speaker 2: my thinking. So you know, in a brainstorm, I use divergent 417 00:21:39,119 --> 00:21:41,480 Speaker 2: thinking to get as many options on the table. And 418 00:21:41,520 --> 00:21:44,000 Speaker 2: then once I feel pretty good about that, I'll start 419 00:21:44,040 --> 00:21:47,399 Speaker 2: to say, okay, let's now go through some convergent 420 00:21:47,440 --> 00:21:48,440 Speaker 2: thinking cycles. 421 00:21:48,520 --> 00:21:49,600 Speaker 4: Okay, now let's
422 00:21:49,440 --> 00:21:52,880 Speaker 2: dive down here. And then kind of through those questions, 423 00:21:52,920 --> 00:21:56,359 Speaker 2: I'll say, like, hey, is there a framework that could 424 00:21:56,359 --> 00:21:59,760 Speaker 2: help articulate how we're making these decisions? Like what are 425 00:21:59,760 --> 00:22:03,080 Speaker 2: some of the implied principles behind some of the decisions we're 426 00:22:03,119 --> 00:22:06,000 Speaker 2: making in this session? You know, is there a two 427 00:22:06,000 --> 00:22:09,320 Speaker 2: by two grid of, you know, feature importance versus priority, 428 00:22:09,440 --> 00:22:13,359 Speaker 2: or user engagement versus monetization, or some other type of 429 00:22:13,480 --> 00:22:16,480 Speaker 2: way to kind of like help frame up the decisions. 430 00:22:16,920 --> 00:22:19,480 Speaker 2: And then look for, like I mentioned earlier, okay, 431 00:22:19,480 --> 00:22:21,080 Speaker 2: now that we have this set of ideas, how might 432 00:22:21,119 --> 00:22:24,480 Speaker 2: we push them further? Are there power laws inherent 433 00:22:24,520 --> 00:22:24,720 Speaker 1: here? 434 00:22:25,040 --> 00:22:28,119 Speaker 2: Are there limiting steps, you know, things that must be 435 00:22:28,280 --> 00:22:31,040 Speaker 2: true for this to work? Like what are those? And 436 00:22:31,119 --> 00:22:33,359 Speaker 2: so that's really how I go back and forth with 437 00:22:33,400 --> 00:22:35,240 Speaker 2: them as a thinking partner. And then I try to 438 00:22:35,320 --> 00:22:38,119 Speaker 2: take the output of one. So if I do this 439 00:22:38,160 --> 00:22:40,600 Speaker 2: with ChatGPT, I'll take its output and I will 440 00:22:40,600 --> 00:22:42,479 Speaker 2: give it to Claude and I will say, I went 441 00:22:42,520 --> 00:22:45,400 Speaker 2: through this exercise with ChatGPT, here's its recommendations. 442 00:22:45,440 --> 00:22:46,080 Speaker 4: What do you think?
443 00:22:46,240 --> 00:22:50,280 Speaker 1: So it's tempting to think that AI's biggest value is efficiency, 444 00:22:50,880 --> 00:22:51,960 Speaker 1: but Morgan 445 00:22:51,680 --> 00:22:54,480 Speaker 3: shows us that that is just the surface. 446 00:22:55,040 --> 00:22:59,159 Speaker 1: Because coming up we dig into experimentation, how tools like 447 00:22:59,240 --> 00:23:03,919 Speaker 1: ChatGPT can simulate real-world audiences, test hundreds of 448 00:23:03,920 --> 00:23:07,800 Speaker 1: ideas in minutes, and even reshape how you think about 449 00:23:07,840 --> 00:23:11,199 Speaker 1: what makes a strong hypothesis. We also get into the 450 00:23:11,280 --> 00:23:16,280 Speaker 1: risks of skipping humans altogether, why speed can sometimes backfire, 451 00:23:16,680 --> 00:23:20,840 Speaker 1: and the surprising ways that Morgan stress tests his own ideas. 452 00:23:21,480 --> 00:23:25,440 Speaker 1: Stay tuned, because this half gets very practical and very 453 00:23:25,600 --> 00:23:31,880 Speaker 1: eye-opening. If you're looking for more tips to improve 454 00:23:31,920 --> 00:23:34,480 Speaker 1: the way you work and live, I write a short 455 00:23:34,520 --> 00:23:37,960 Speaker 1: weekly newsletter that contains tactics I've discovered that have helped 456 00:23:38,000 --> 00:23:41,159 Speaker 1: me personally. You can sign up for that at Amantha 457 00:23:41,359 --> 00:23:49,480 Speaker 1: dot com. That's Amantha dot com. I want to shift 458 00:23:49,480 --> 00:23:53,800 Speaker 1: into talking about experimentation, but before we started recording, 459 00:23:53,840 --> 00:23:57,960 Speaker 1: I was showing you my very scribbled-on, dog-eared copy 460 00:23:58,040 --> 00:24:01,560 Speaker 1: of Hacking Growth, which I love. It came out 461 00:24:01,640 --> 00:24:03,760 Speaker 1: quite a few years ago now, but it's such a 462 00:24:03,760 --> 00:24:06,919 Speaker 1: great book, and I think about experimentation a lot.
My 463 00:24:07,040 --> 00:24:10,000 Speaker 1: consultancy is Inventium, and what we do is we build 464 00:24:10,000 --> 00:24:14,359 Speaker 1: innovation capability for our clients. And you know, something I 465 00:24:14,400 --> 00:24:16,119 Speaker 1: have been thinking about in the last couple of years 466 00:24:16,200 --> 00:24:20,440 Speaker 1: is just how some of those fundamental principles of experimentation, 467 00:24:20,560 --> 00:24:23,239 Speaker 1: and even some of the principles that, like, you know, 468 00:24:23,560 --> 00:24:26,320 Speaker 1: Eric Ries and Steve Blank gave us with the lean 469 00:24:26,359 --> 00:24:31,080 Speaker 1: startup methodology, and the great work and principles that you 470 00:24:31,119 --> 00:24:32,560 Speaker 1: wrote about in Hacking Growth. 471 00:24:32,880 --> 00:24:35,760 Speaker 3: How has your thinking evolved now with all these 472 00:24:35,640 --> 00:24:38,399 Speaker 1: different AI tools that we've got access to when it 473 00:24:38,440 --> 00:24:40,439 Speaker 1: comes to testing an idea? 474 00:24:40,760 --> 00:24:42,040 Speaker 4: Yeah, that's a great question, and you're right. 475 00:24:42,080 --> 00:24:43,960 Speaker 2: Hacking Growth, I think it's 476 00:24:44,000 --> 00:24:46,359 Speaker 2: eight years old at this point, and so I was 477 00:24:46,400 --> 00:24:47,320 Speaker 2: so excited to see 478 00:24:47,119 --> 00:24:48,359 Speaker 4: that you had a copy. 479 00:24:48,680 --> 00:24:51,000 Speaker 2: When I reflect on it, I think the principles of 480 00:24:51,080 --> 00:24:53,720 Speaker 2: it still stand up very well. Like the first half 481 00:24:53,720 --> 00:24:55,880 Speaker 2: of the book kind of talking about, hey, you really 482 00:24:55,920 --> 00:24:59,840 Speaker 2: need to understand the problem deeply, using that understanding to 483 00:25:00,040 --> 00:25:03,200 Speaker 2: generate new potential ideas to test.
And then the main 484 00:25:03,240 --> 00:25:06,439 Speaker 2: things that have changed are, one, you can now go 485 00:25:06,520 --> 00:25:09,399 Speaker 2: through that loop much faster. You can kind of 486 00:25:09,440 --> 00:25:13,840 Speaker 2: work offline before you kind of move online. So, for example, 487 00:25:13,880 --> 00:25:17,280 Speaker 2: if you're going to test a bunch of headlines for 488 00:25:17,480 --> 00:25:21,040 Speaker 2: an advertisement or an email subject line, in the old days 489 00:25:21,040 --> 00:25:23,280 Speaker 2: you would sit down, you would brainstorm with a team. 490 00:25:23,320 --> 00:25:25,600 Speaker 2: You'd come up with four or five. You kind of 491 00:25:25,600 --> 00:25:28,280 Speaker 2: debate which two you wanted to test. You would take 492 00:25:28,280 --> 00:25:30,400 Speaker 2: those two, you'd put them into your email software. You'd 493 00:25:30,440 --> 00:25:32,879 Speaker 2: send it to like ten percent of the list. You 494 00:25:32,880 --> 00:25:34,840 Speaker 2: would see which one was winning, and then you would 495 00:25:34,840 --> 00:25:36,800 Speaker 2: send that to the rest of the list, and then 496 00:25:36,800 --> 00:25:39,520 Speaker 2: you'd say, okay, for next time, this one won, and 497 00:25:39,560 --> 00:25:41,840 Speaker 2: so we'll use that as a kind of a starting 498 00:25:41,840 --> 00:25:43,760 Speaker 2: point to kind of move onto the next one. And 499 00:25:43,800 --> 00:25:46,120 Speaker 2: that was kind of the speed of the loop, which 500 00:25:46,240 --> 00:25:48,879 Speaker 2: was really gated by how much time it took to 501 00:25:48,920 --> 00:25:51,840 Speaker 2: do all that.
Now today I can say, I need 502 00:25:51,840 --> 00:25:55,879 Speaker 2: an email subject line, ChatGPT, give me five, and immediately 503 00:25:56,000 --> 00:25:58,320 Speaker 2: you get it instantly, and then you can start to, 504 00:25:58,400 --> 00:26:02,280 Speaker 2: I think what's really interesting is you can start to simulate, hey, 505 00:26:02,640 --> 00:26:06,280 Speaker 2: how might this perform generally? You know, and the LLMs 506 00:26:06,320 --> 00:26:10,080 Speaker 2: have so much context around the baseline industry open rates 507 00:26:10,359 --> 00:26:13,240 Speaker 2: by email type, you know, just to continue the example, 508 00:26:13,600 --> 00:26:16,600 Speaker 2: and so you can go through that learning loop much faster, 509 00:26:16,800 --> 00:26:19,440 Speaker 2: so that hopefully by the time you actually get down 510 00:26:19,480 --> 00:26:23,960 Speaker 2: to testing with that email, you've gone from maybe five 511 00:26:24,000 --> 00:26:27,040 Speaker 2: initial ideas that were kind of like generally concocted to 512 00:26:27,119 --> 00:26:30,080 Speaker 2: maybe you started now with twenty or one hundred ideas 513 00:26:30,160 --> 00:26:33,440 Speaker 2: which have gone through a series of, you know, feedback 514 00:26:33,480 --> 00:26:36,199 Speaker 2: loops down to now two or three or five that 515 00:26:36,240 --> 00:26:38,520 Speaker 2: you feel really good about. And that takes, you know, 516 00:26:38,600 --> 00:26:41,200 Speaker 2: thirty minutes versus, you know, days and a couple of 517 00:26:41,240 --> 00:26:43,359 Speaker 2: meetings with your team and all of that. And so 518 00:26:43,480 --> 00:26:48,480 Speaker 2: I think that applies everywhere now. It applies to, you know, 519 00:26:48,560 --> 00:26:51,000 Speaker 2: the subject lines of your email, your ad copy, the 520 00:26:51,080 --> 00:26:54,040 Speaker 2: landing page copy, what shows up in your app.
521 00:26:54,400 --> 00:26:57,000 Speaker 2: To give you an example, with carb scan, one of 522 00:26:57,040 --> 00:26:59,040 Speaker 2: the things that I found coming back to it multiple 523 00:26:59,119 --> 00:27:01,640 Speaker 2: times a day was it was pretty sterile. 524 00:27:01,720 --> 00:27:01,879 Speaker 1: You know. 525 00:27:01,920 --> 00:27:03,000 Speaker 4: Every time you came back, 526 00:27:02,840 --> 00:27:04,760 Speaker 2: it said the same exact thing, just, you know, 527 00:27:05,720 --> 00:27:07,880 Speaker 2: know your carbs in a snap. And I was like, well, 528 00:27:08,240 --> 00:27:10,280 Speaker 2: this is my fifth time here today, you should know 529 00:27:10,520 --> 00:27:12,520 Speaker 2: a little bit something about me. So I just said, 530 00:27:13,040 --> 00:27:14,479 Speaker 2: you know, hey, I want to create it so that 531 00:27:14,520 --> 00:27:17,080 Speaker 2: when I come back, based on the time of day, 532 00:27:17,760 --> 00:27:19,520 Speaker 2: it responds to that. So now when you come back at 533 00:27:19,560 --> 00:27:22,240 Speaker 2: dinner time, it's like, dinner time, we've got you covered, 534 00:27:22,359 --> 00:27:24,520 Speaker 2: you know, get going with a snap, or like if 535 00:27:24,520 --> 00:27:26,720 Speaker 2: you go late at night, it's like, late night snacking, 536 00:27:27,080 --> 00:27:29,679 Speaker 4: we've got your carbs counted. And that just took minutes.
537 00:27:29,920 --> 00:27:32,439 Speaker 1: I'd love to get into a bit more detail around 538 00:27:32,560 --> 00:27:37,760 Speaker 1: just your process for testing within an LLM without even 539 00:27:37,840 --> 00:27:41,200 Speaker 1: having to find a customer to open an email, and 540 00:27:41,720 --> 00:27:43,440 Speaker 1: maybe if you could walk me through an example, maybe 541 00:27:43,440 --> 00:27:47,359 Speaker 1: even with carb scan. Like, let's just say you were testing 542 00:27:47,640 --> 00:27:51,080 Speaker 1: two subject lines for an email where the purpose of 543 00:27:51,160 --> 00:27:52,160 Speaker 1: the email 544 00:27:51,880 --> 00:27:54,440 Speaker 3: was to get people to trial the app. 545 00:27:54,480 --> 00:27:56,320 Speaker 1: Can you talk me through, like, what that flow would 546 00:27:56,320 --> 00:27:59,080 Speaker 1: look like to test it, but just within an LLM? 547 00:27:59,000 --> 00:28:02,240 Speaker 2: Well, I think you can get really wonky here, 548 00:28:02,280 --> 00:28:04,240 Speaker 2: so I'll try to kind of like break it down 549 00:28:04,240 --> 00:28:06,440 Speaker 2: a little bit. So one of the things that you 550 00:28:06,480 --> 00:28:10,000 Speaker 2: can do is you can ask the LLM 551 00:28:09,760 --> 00:28:11,280 Speaker 4: to simulate a persona. 552 00:28:11,680 --> 00:28:14,480 Speaker 2: So, for example, one of the things you can say is, hey, 553 00:28:14,960 --> 00:28:19,080 Speaker 2: you are a busy working mom with three kids. So 554 00:28:19,040 --> 00:28:21,600 Speaker 2: to use the carb scan example: working mom, three kids, one 555 00:28:21,600 --> 00:28:24,040 Speaker 2: of them is a type one diabetic. You typically eat 556 00:28:24,040 --> 00:28:26,119 Speaker 2: out a couple of times a week, but you know, 557 00:28:26,240 --> 00:28:29,800 Speaker 2: you're very on top of their diabetes regimen.
You have 558 00:28:29,840 --> 00:28:32,800 Speaker 2: a bunch of mobile apps around kind of like managing 559 00:28:33,040 --> 00:28:36,520 Speaker 2: their blood glucose, and you know you're trying to make 560 00:28:36,560 --> 00:28:38,640 Speaker 2: the best informed decisions you can, but like you have 561 00:28:38,720 --> 00:28:40,760 Speaker 2: a very modern and hectic life, like we all do, 562 00:28:40,920 --> 00:28:43,320 Speaker 2: kind of scheduled to the nines. And then you can 563 00:28:43,440 --> 00:28:46,880 Speaker 2: give it some specific prompts around, like, from that persona, 564 00:28:46,920 --> 00:28:50,120 Speaker 2: which of these headlines might stand out in their inbox? 565 00:28:50,160 --> 00:28:52,440 Speaker 2: Which of these are they kind of more likely, potentially, 566 00:28:52,440 --> 00:28:55,520 Speaker 2: to click on? What might it be competing with around 567 00:28:55,640 --> 00:28:58,080 Speaker 2: that time of day? And then one of the things 568 00:28:58,120 --> 00:29:01,320 Speaker 2: I always like to do is ask the LLM to 569 00:29:01,520 --> 00:29:05,640 Speaker 2: cite why it's giving me this information. So point me 570 00:29:05,680 --> 00:29:08,000 Speaker 2: to the blog posts that you're kind of referencing here, 571 00:29:08,120 --> 00:29:11,360 Speaker 2: point me to the study that you're using, or kind 572 00:29:11,360 --> 00:29:14,280 Speaker 2: of the UX principle. So trying, as much as possible, to 573 00:29:14,520 --> 00:29:18,560 Speaker 2: separate what it's hallucinating versus what I can actually go 574 00:29:18,640 --> 00:29:21,880 Speaker 2: kind of validate. So that's kind of the lightest layer 575 00:29:21,960 --> 00:29:24,400 Speaker 2: at which you could do it. You can get deeper by 576 00:29:24,400 --> 00:29:29,000 Speaker 2: having it create full audiences of people like that.
You 577 00:29:29,040 --> 00:29:34,280 Speaker 2: can say, imagine a representative pool of parents of type 578 00:29:34,320 --> 00:29:41,880 Speaker 2: one diabetics in this country, make up demographic, psychographic, and behavioral information, 579 00:29:41,960 --> 00:29:44,320 Speaker 2: come up with a bunch of different personas, come up 580 00:29:44,360 --> 00:29:47,960 Speaker 2: with a distribution of those people, and now run this 581 00:29:48,040 --> 00:29:51,240 Speaker 2: subject line against that imagined population of people. 582 00:29:51,520 --> 00:29:54,120 Speaker 3: How accurate have you found it to be? 583 00:29:54,280 --> 00:29:57,080 Speaker 1: Like, have you ever done a split test where you've 584 00:29:57,600 --> 00:30:01,840 Speaker 1: recruited real humans and compared that to the simulated audience? 585 00:30:02,360 --> 00:30:06,160 Speaker 2: I think it's all in the quality of the setup, right? 586 00:30:06,240 --> 00:30:10,600 Speaker 2: So the most important thing is your assumptions about 587 00:30:10,640 --> 00:30:13,240 Speaker 2: the online audience. So, for example, if you're going to 588 00:30:13,320 --> 00:30:15,719 Speaker 2: test an X post and you're like, oh, I'm going 589 00:30:15,760 --> 00:30:19,320 Speaker 2: to build a synthetic audience with an LLM, if it 590 00:30:19,320 --> 00:30:22,840 Speaker 2: doesn't really map to your actual audience, then your results 591 00:30:22,840 --> 00:30:25,040 Speaker 2: are going to be, you know, just noise. But I 592 00:30:25,040 --> 00:30:28,520 Speaker 2: think there's some pretty interesting, you know, ways to kind 593 00:30:28,520 --> 00:30:31,120 Speaker 2: of like get that signal and get it pretty close.
594 00:30:31,160 --> 00:30:33,520 Speaker 2: So I would say I've had some where it's like 595 00:30:33,760 --> 00:30:36,840 Speaker 2: fully wrong and just doesn't resonate at all, and then 596 00:30:36,880 --> 00:30:39,600 Speaker 2: there are some where you're like, oh, that's actually 597 00:30:39,640 --> 00:30:42,280 Speaker 2: pretty good. So it's still more art than science. 598 00:30:42,320 --> 00:30:45,320 Speaker 2: I wouldn't put my job up against it quite yet, 599 00:30:45,320 --> 00:30:47,240 Speaker 2: but I do think it points to, like, an interesting 600 00:30:47,280 --> 00:30:51,200 Speaker 2: way to kind of accelerate ideation and kind of like 601 00:30:51,520 --> 00:30:54,400 Speaker 2: hone your thinking a bit further than, kind of, you know, 602 00:30:54,720 --> 00:30:57,120 Speaker 2: just kind of scribbling, you know, down a couple of 603 00:30:57,120 --> 00:30:58,080 Speaker 2: ideas on a whiteboard. 604 00:30:58,200 --> 00:31:01,719 Speaker 1: So yeah, with that in mind, when in an 605 00:31:01,800 --> 00:31:07,480 Speaker 1: experimentation or idea testing process would you bring in real humans? 606 00:31:07,480 --> 00:31:10,360 Speaker 2: As soon as possible. Starting with, hey, does the team 607 00:31:10,440 --> 00:31:12,960 Speaker 2: think this is a good idea? With Dropbox and Dash, 608 00:31:13,000 --> 00:31:15,800 Speaker 2: we have some of our customer advisory boards and design 609 00:31:15,880 --> 00:31:19,040 Speaker 2: partners, who we give early access to our products 610 00:31:19,280 --> 00:31:22,000 Speaker 2: and ask for their feedback, or we show them mockups 611 00:31:22,000 --> 00:31:24,880 Speaker 2: and prototypes and ask them, is this clear? How can 612 00:31:24,960 --> 00:31:27,120 Speaker 2: we make it better?
And then from there you can 613 00:31:27,160 --> 00:31:31,600 Speaker 2: move up into bigger, scaled tests with larger groups of people, 614 00:31:32,120 --> 00:31:36,760 Speaker 2: quantitative surveys, all the way up to A/B tests and 615 00:31:36,800 --> 00:31:39,280 Speaker 2: beyond with personalization and so on. And so I don't 616 00:31:39,280 --> 00:31:43,560 Speaker 2: think it's LLMs instead of humans. It's what is the 617 00:31:43,600 --> 00:31:45,920 Speaker 2: pre-work that you can do to make the very 618 00:31:45,920 --> 00:31:50,040 Speaker 2: limited and very expensive time with people, 619 00:31:50,240 --> 00:31:52,400 Speaker 2: you know, as impactful as possible. 620 00:31:52,520 --> 00:31:55,680 Speaker 1: What else has changed when it comes to experimentation and 621 00:31:55,720 --> 00:31:58,280 Speaker 1: some of the concepts you were writing about years ago, 622 00:31:58,640 --> 00:32:01,760 Speaker 1: when it comes to testing ideas in the age of AI? 623 00:32:02,040 --> 00:32:04,120 Speaker 2: The obvious answer is, hey, AI has changed a bunch of 624 00:32:04,120 --> 00:32:06,080 Speaker 2: the tooling and how we do it. But I actually 625 00:32:06,120 --> 00:32:08,360 Speaker 2: think it maybe goes back to what you and I were 626 00:32:08,400 --> 00:32:11,200 Speaker 2: talking about earlier with how to write a 627 00:32:11,240 --> 00:32:14,480 Speaker 2: good prompt, which I think can be generalized into how 628 00:32:14,520 --> 00:32:18,240 Speaker 2: do I think about what's the right question to ask, 629 00:32:18,400 --> 00:32:22,400 Speaker 2: what's the right problem to solve?
How good is our 630 00:32:22,600 --> 00:32:26,960 Speaker 2: thinking upfront in terms of the question we're trying to answer, 631 00:32:27,360 --> 00:32:30,000 Speaker 2: the problem we're trying to solve, and then the context 632 00:32:30,160 --> 00:32:33,200 Speaker 2: around that to do that really effectively, because the output 633 00:32:33,280 --> 00:32:35,560 Speaker 2: now is limitless. And so I think if I was 634 00:32:35,600 --> 00:32:37,880 Speaker 2: going to write Hacking Growth again today, I would have 635 00:32:37,880 --> 00:32:41,120 Speaker 2: spent more time on how you think about framing up 636 00:32:41,360 --> 00:32:45,240 Speaker 2: and creating hypotheses. Without that, the output is maybe just 637 00:32:45,280 --> 00:32:45,840 Speaker 2: more noise. 638 00:32:46,080 --> 00:32:49,800 Speaker 1: I'd love to finish on a piece of advice for listeners. 639 00:32:50,040 --> 00:32:52,320 Speaker 1: And you know, I feel like How I Work listeners are 640 00:32:53,200 --> 00:32:56,080 Speaker 1: fairly AI savvy. Like, if you were to set them 641 00:32:56,680 --> 00:32:59,840 Speaker 1: one piece of homework, something to do that would really 642 00:33:01,040 --> 00:33:06,080 Speaker 1: improve how they're working with AI right now, assuming they're, 643 00:33:06,160 --> 00:33:10,600 Speaker 1: kind of, you know, more than dabbling, what would 644 00:33:10,600 --> 00:33:12,800 Speaker 1: be one of the most powerful things that they could 645 00:33:12,840 --> 00:33:13,880 Speaker 1: just do this week? 646 00:33:14,160 --> 00:33:17,600 Speaker 2: Take thirty minutes and think about what's something that you 647 00:33:17,720 --> 00:33:20,600 Speaker 2: do all the time, that you do kind of constantly 648 00:33:20,640 --> 00:33:23,160 Speaker 2: without even thinking about it.
It could be checking email, 649 00:33:23,320 --> 00:33:28,920 Speaker 2: it could be reading documents, it could be approving expense reports, 650 00:33:29,040 --> 00:33:30,880 Speaker 2: it could be, you know, there are numerous things that 651 00:33:30,920 --> 00:33:33,920 Speaker 2: we all do constantly. And you know, kind of the 652 00:33:34,000 --> 00:33:36,320 Speaker 2: idea with carb scan is I didn't even realize I 653 00:33:36,360 --> 00:33:39,440 Speaker 2: was doing that many Google searches around dinner until I 654 00:33:39,440 --> 00:33:41,080 Speaker 2: sat down and kind of just thought about it, like, 655 00:33:41,120 --> 00:33:43,160 Speaker 2: oh wow, it just kind of like hit me. We see 656 00:33:43,160 --> 00:33:45,000 Speaker 2: that at work all the time, you know. When 657 00:33:45,040 --> 00:33:48,480 Speaker 2: we're working on Dash, we'll talk to someone, like, walk 658 00:33:48,560 --> 00:33:50,440 Speaker 2: us through your day, or walk us through your week, 659 00:33:50,800 --> 00:33:54,960 Speaker 2: and someone will say, oh, every Wednesday I get seven 660 00:33:55,040 --> 00:33:58,400 Speaker 2: emails from all of these different field agents about the 661 00:33:58,400 --> 00:34:01,600 Speaker 2: status of projects that are happening around the country. And 662 00:34:01,600 --> 00:34:03,320 Speaker 2: then I take all those, I read all of them, 663 00:34:03,720 --> 00:34:05,320 Speaker 2: and then I have to put together a report to 664 00:34:05,360 --> 00:34:08,920 Speaker 2: send to our management team about what's on track, what's behind, 665 00:34:08,960 --> 00:34:11,640 Speaker 2: what's happening out in the field. I was like, great, 666 00:34:11,640 --> 00:34:13,319 Speaker 2: how do you do that? They're like, well, I get 667 00:34:13,320 --> 00:34:14,640 Speaker 2: the emails, I wait for them to come in.
I 668 00:34:14,680 --> 00:34:16,640 Speaker 2: read them, copy and paste stuff into a new 669 00:34:16,680 --> 00:34:18,600 Speaker 2: document, and then I write it. And I was like, okay, 670 00:34:18,920 --> 00:34:22,839 Speaker 2: this is a great use case for, like, summarization and 671 00:34:22,880 --> 00:34:24,640 Speaker 2: that type of thing. And it's just, kind of, the 672 00:34:24,719 --> 00:34:26,560 Speaker 2: light bulb goes off when you can get down to 673 00:34:26,640 --> 00:34:29,840 Speaker 2: that very specific thing. So yeah, start with, I do 674 00:34:29,960 --> 00:34:32,560 Speaker 2: this a lot, it's an important part of my job. And 675 00:34:32,600 --> 00:34:35,440 Speaker 2: then step back and say, how can I, even if 676 00:34:35,480 --> 00:34:38,319 Speaker 2: you can't go end to end with the workflow, like 677 00:34:38,400 --> 00:34:40,120 Speaker 2: you might not be able to, you know, connect to 678 00:34:40,120 --> 00:34:42,880 Speaker 2: your email inbox to read it all, but you know, 679 00:34:43,000 --> 00:34:45,319 Speaker 2: what is a piece in there that you can let 680 00:34:45,360 --> 00:34:47,479 Speaker 2: AI kind of take a little bit off your plate 681 00:34:47,520 --> 00:34:49,840 Speaker 2: for you, and then kind of use that as a 682 00:34:49,880 --> 00:34:52,919 Speaker 2: springboard to kind of work through the rest of your week, basically. 683 00:34:53,000 --> 00:34:56,880 Speaker 1: Morgan, it has been such a joy chatting to 684 00:34:56,920 --> 00:34:58,520 Speaker 1: you and getting to pick your brain. 685 00:34:58,719 --> 00:35:01,759 Speaker 3: Thank you so much for spending some time with me. 686 00:35:02,080 --> 00:35:04,200 Speaker 4: Yeah, Amantha, thank you so much. I really enjoyed 687 00:35:04,440 --> 00:35:06,920 Speaker 4: the conversation. Thanks for having me. 688 00:35:07,360 --> 00:35:09,360 Speaker 3: Here's what's stuck with me from Morgan.
689 00:35:10,000 --> 00:35:14,960 Speaker 1: The real leverage of AI isn't necessarily the answers, it's 690 00:35:15,000 --> 00:35:15,960 Speaker 1: better questions. 691 00:35:16,840 --> 00:35:18,920 Speaker 3: So this week, try some homework. 692 00:35:19,360 --> 00:35:22,840 Speaker 1: Take thirty minutes to notice something you do constantly, almost 693 00:35:22,920 --> 00:35:27,040 Speaker 1: without thinking, and then ask, what part of this could 694 00:35:27,120 --> 00:35:28,480 Speaker 1: AI take off my plate? 695 00:35:29,560 --> 00:35:31,200 Speaker 3: You might be surprised at how 696 00:35:31,120 --> 00:35:35,799 Speaker 1: small experiments can lead to very big shifts. And if 697 00:35:35,840 --> 00:35:38,080 Speaker 1: you enjoyed this episode, you might want to go back 698 00:35:38,080 --> 00:35:42,480 Speaker 1: to my episode with Bob Johansen, the amazing Silicon Valley futurist. 699 00:35:42,920 --> 00:35:48,200 Speaker 1: We talked about reframing AI as augmented intelligence, 700 00:35:47,960 --> 00:35:50,600 Speaker 3: and it's a beautiful pairing 701 00:35:50,480 --> 00:35:54,960 Speaker 1: to this interview with Morgan. Thank you so much for 702 00:35:55,080 --> 00:35:58,879 Speaker 1: listening to this episode. Hit follow on How I Work 703 00:35:59,000 --> 00:36:00,160 Speaker 1: so that you don't 704 00:36:00,120 --> 00:36:01,400 Speaker 3: miss what's next. 705 00:36:01,719 --> 00:36:04,440 Speaker 1: If you like today's show, make sure you hit follow 706 00:36:04,560 --> 00:36:08,080 Speaker 1: on your podcast app to be alerted when new episodes drop. 707 00:36:08,640 --> 00:36:11,120 Speaker 1: How I Work was recorded on the traditional land of 708 00:36:11,160 --> 00:36:13,359 Speaker 1: the Wurundjeri people, part of the Kulin Nation.