1 00:00:10,160 --> 00:00:14,400 Speaker 1: Hello, and welcome to another episode of the Odd Lots Podcast. 2 00:00:14,480 --> 00:00:16,680 Speaker 2: I'm Joe Weisenthal and I'm Tracy Alloway. 3 00:00:16,800 --> 00:00:19,360 Speaker 1: Tracy, have you looked at Nvidia's stock chart lately? 4 00:00:19,440 --> 00:00:21,080 Speaker 1: And by lately, I don't mean like over the last 5 00:00:21,120 --> 00:00:22,680 Speaker 1: two years. I mean like just like over the last 6 00:00:22,720 --> 00:00:24,520 Speaker 1: like two weeks or two months. 7 00:00:24,560 --> 00:00:26,400 Speaker 2: I don't need to look at it because everyone keeps 8 00:00:26,440 --> 00:00:28,760 Speaker 2: talking about it. So I know, I know what's happening. 9 00:00:28,960 --> 00:00:30,880 Speaker 1: You know what I'm pretty happy about? Could I just say, 10 00:00:31,160 --> 00:00:33,960 Speaker 1: you know, we did that episode like two months ago, yes, 11 00:00:34,200 --> 00:00:37,519 Speaker 1: with Stacy Rasgon, and we were like, what's what's up 12 00:00:37,520 --> 00:00:39,239 Speaker 1: with Nvidia, like well, you know, I know it's 13 00:00:39,240 --> 00:00:42,920 Speaker 1: at the center of the AI chips boom and whatever. 14 00:00:43,479 --> 00:00:45,199 Speaker 1: And then like we did that episode and it came 15 00:00:45,200 --> 00:00:46,680 Speaker 1: out and then a week later like they just like 16 00:00:46,800 --> 00:00:47,600 Speaker 1: knocked it out of the park. 17 00:00:48,479 --> 00:00:49,639 Speaker 3: Yeah, so you 18 00:00:49,640 --> 00:00:51,000 Speaker 2: know, we were early. 19 00:00:51,880 --> 00:00:53,640 Speaker 3: We were at least like, you know, a good like 20 00:00:53,680 --> 00:00:54,400 Speaker 3: two weeks early. 21 00:00:55,160 --> 00:00:56,840 Speaker 2: Hey, hey, two weeks I'll take it. 22 00:00:57,440 --> 00:00:58,360 Speaker 3: I'll take it. 
23 00:00:58,440 --> 00:01:01,800 Speaker 1: So clearly something that you know, we and we talked 24 00:01:01,800 --> 00:01:03,760 Speaker 1: about this with Stacy, like you know, something that 25 00:01:03,840 --> 00:01:06,840 Speaker 1: Nvidia has is like everyone's trying to buy it. Everyone's 26 00:01:06,840 --> 00:01:08,839 Speaker 1: trying to get it. But then it raises the next 27 00:01:08,880 --> 00:01:11,240 Speaker 1: question of like, okay, but what is that market like? 28 00:01:11,520 --> 00:01:12,560 Speaker 1: How do you buy a chip? 29 00:01:13,000 --> 00:01:14,959 Speaker 2: Yeah? How do you buy a chip? And then I 30 00:01:14,959 --> 00:01:17,080 Speaker 2: guess what do you actually do with it once you 31 00:01:17,200 --> 00:01:19,280 Speaker 2: have it? Because my impression is that for a lot 32 00:01:19,319 --> 00:01:23,279 Speaker 2: of these AI applications, the way you use the chips, 33 00:01:23,319 --> 00:01:26,119 Speaker 2: the way you set up the data centers, is very, 34 00:01:26,240 --> 00:01:29,399 Speaker 2: very different to what we've seen in the past. And 35 00:01:29,440 --> 00:01:32,280 Speaker 2: I think also what Nvidia is doing now is 36 00:01:32,360 --> 00:01:34,959 Speaker 2: kind of different. But maybe we can get into this 37 00:01:35,040 --> 00:01:36,920 Speaker 2: with our guest. My impression is they're trying to create 38 00:01:37,280 --> 00:01:40,960 Speaker 2: a sort of like holistic approach for customers where they 39 00:01:41,000 --> 00:01:44,840 Speaker 2: provide not just the hardware, but also some services to 40 00:01:44,880 --> 00:01:45,680 Speaker 2: go along with it. 41 00:01:46,000 --> 00:01:48,720 Speaker 1: Yes, right, and like all the software, and Stacy talked 42 00:01:48,760 --> 00:01:50,880 Speaker 1: about that with the CUDA ecosystem, wasn't 43 00:01:50,840 --> 00:01:53,160 Speaker 3: it, how dominant that is? But right, like what 44 00:01:53,200 --> 00:01:55,360 Speaker 3: do you do with it? 
Like how do you get one? If, 45 00:01:55,400 --> 00:01:56,800 Speaker 3: like, what, you know, what would we do, 46 00:01:56,760 --> 00:02:00,760 Speaker 1: Tracy, if a big pallet of Nvidia chips wound 47 00:02:00,840 --> 00:02:01,840 Speaker 1: up here? 48 00:02:01,480 --> 00:02:03,920 Speaker 2: Do you want to know a secret? Yeah, my basement 49 00:02:03,960 --> 00:02:06,600 Speaker 2: is filled with H100 chips. Just got a 50 00:02:06,640 --> 00:02:08,080 Speaker 2: pile of them. It came with the house. 51 00:02:08,440 --> 00:02:10,880 Speaker 1: It was on that ship that was stuck off the Chesapeake, 52 00:02:10,960 --> 00:02:12,960 Speaker 1: and instead of getting your cars, you got it. 53 00:02:13,080 --> 00:02:15,720 Speaker 2: I just got a pallet of H100s. 54 00:02:15,440 --> 00:02:19,680 Speaker 1: That, well, we're manifesting that into reality. So anyway, 55 00:02:19,760 --> 00:02:22,600 Speaker 1: I'd like to get into how this world works. So essentially, like the 56 00:02:22,800 --> 00:02:27,160 Speaker 1: trading and dealing of these, like the hottest commodity in 57 00:02:27,200 --> 00:02:30,920 Speaker 1: the world, right, which is these advanced chips for AI, 58 00:02:31,000 --> 00:02:32,920 Speaker 1: and how that works and who can get one, I 59 00:02:32,960 --> 00:02:35,520 Speaker 1: still think is like a sort of mystery that we 60 00:02:35,639 --> 00:02:38,160 Speaker 1: need to delve further into. 
61 00:02:38,400 --> 00:02:40,359 Speaker 2: I agree. And there is also, there's a lot of 62 00:02:40,440 --> 00:02:43,600 Speaker 2: excitement around it right now for the obvious reasons of 63 00:02:43,680 --> 00:02:48,239 Speaker 2: everyone's really into generative AI and Nvidia's stock is exploding, 64 00:02:48,240 --> 00:02:50,640 Speaker 2: as we already talked about. But we're also seeing a 65 00:02:50,680 --> 00:02:55,560 Speaker 2: lot of previous, I guess, consumers of chips, like the 66 00:02:55,600 --> 00:02:59,079 Speaker 2: crypto miners, start to pivot into the space, and I'd 67 00:02:59,080 --> 00:03:01,720 Speaker 2: be curious to see what they're doing in it as well, 68 00:03:01,800 --> 00:03:04,960 Speaker 2: and how much of that is just, you know, desperation 69 00:03:05,520 --> 00:03:08,959 Speaker 2: versus a real business opportunity. 70 00:03:08,440 --> 00:03:09,440 Speaker 3: And the video game market. 71 00:03:09,520 --> 00:03:11,079 Speaker 2: Yeah, oh totally, I forgot about that. 72 00:03:11,160 --> 00:03:13,320 Speaker 1: Which was like the other thing. It's like, for years 73 00:03:13,360 --> 00:03:15,519 Speaker 1: I thought of Nvidia as the video game company. Yeah, 74 00:03:15,560 --> 00:03:17,639 Speaker 1: because they had their logo on Xboxes. 75 00:03:17,720 --> 00:03:21,680 Speaker 2: And how realistic is that pivot? What proportion of those 76 00:03:21,720 --> 00:03:23,840 Speaker 2: types of chips can be used for AI? 77 00:03:24,000 --> 00:03:27,080 Speaker 1: Now, well, I'm very excited. We do have, I believe, 78 00:03:27,240 --> 00:03:29,400 Speaker 1: the perfect guest. We are going to be speaking with 79 00:03:29,440 --> 00:03:32,919 Speaker 1: Brannin McBee. 
He is the chief strategy officer and co 80 00:03:33,000 --> 00:03:36,720 Speaker 1: founder of CoreWeave, which is a specialized cloud services 81 00:03:36,800 --> 00:03:40,680 Speaker 1: provider that's basically providing this sort of like high-volume 82 00:03:40,800 --> 00:03:45,160 Speaker 1: compute to AI-type companies. They recently raised over four 83 00:03:45,240 --> 00:03:47,560 Speaker 1: hundred million dollars, have been in this space for a 84 00:03:47,600 --> 00:03:50,280 Speaker 1: little while. So Brannin, thank you so much for coming 85 00:03:50,320 --> 00:03:51,160 Speaker 1: on Odd Lots. 86 00:03:51,520 --> 00:03:53,640 Speaker 4: Thanks for the opportunity, guys. Really excited to chat with 87 00:03:53,640 --> 00:03:54,160 Speaker 4: you all today. 88 00:03:54,720 --> 00:03:57,240 Speaker 1: So let's just, let me, sorry. If Tracy and I, like, 89 00:03:57,360 --> 00:03:58,880 Speaker 1: I don't know why they would do this, but if 90 00:03:58,920 --> 00:04:00,880 Speaker 1: like some VC was like, you know, we want you 91 00:04:00,960 --> 00:04:03,680 Speaker 1: to go launch a GPT. We want you to like 92 00:04:03,800 --> 00:04:07,880 Speaker 1: do a podcast-based large language model off of all 93 00:04:07,920 --> 00:04:09,640 Speaker 1: the work you've done. We want you to compete with 94 00:04:09,720 --> 00:04:12,240 Speaker 1: OpenAI. And they gave us, like, I don't know, 95 00:04:12,440 --> 00:04:15,400 Speaker 1: some, like, you know, one hundred million dollar raise, they said, 96 00:04:15,440 --> 00:04:18,640 Speaker 1: go start, do your startup. Could I call Nvidia 97 00:04:19,320 --> 00:04:21,560 Speaker 1: and buy chips? Would I be able to get in 98 00:04:21,600 --> 00:04:22,120 Speaker 1: the door there? 99 00:04:22,600 --> 00:04:25,440 Speaker 4: Gosh. 
I mean, you're, I think you and everyone else 100 00:04:25,800 --> 00:04:27,960 Speaker 4: is asking that question, and you're going to have a 101 00:04:28,240 --> 00:04:31,039 Speaker 4: huge problem doing that right now. It's mostly just around 102 00:04:31,720 --> 00:04:35,520 Speaker 4: how much in demand this infrastructure became, right? I mean, 103 00:04:35,600 --> 00:04:38,599 Speaker 4: you could argue it's one of the most critical pieces 104 00:04:38,600 --> 00:04:41,840 Speaker 4: of information technology resources on the planet right now, and 105 00:04:42,400 --> 00:04:45,279 Speaker 4: suddenly everyone needs it. And you know, I like to 106 00:04:45,279 --> 00:04:49,800 Speaker 4: contextualize it in that, you know, the pace of software 107 00:04:49,800 --> 00:04:53,080 Speaker 4: adoption for AI is like one of the fastest adoption curves 108 00:04:53,080 --> 00:04:57,440 Speaker 4: we've ever seen, right? Like, you're you're hitting these milestones 109 00:04:57,480 --> 00:05:01,280 Speaker 4: faster than any other software platform previously, and now all 110 00:05:01,320 --> 00:05:04,560 Speaker 4: of a sudden, you're asking the infrastructure build to keep up 111 00:05:04,600 --> 00:05:07,360 Speaker 4: with that, right, a space that traditionally takes more time. 112 00:05:07,480 --> 00:05:12,120 Speaker 4: And it's created this massive supply-demand imbalance just 113 00:05:12,200 --> 00:05:16,280 Speaker 4: on in-place infrastructure today and on what infrastructure is 114 00:05:16,279 --> 00:05:20,560 Speaker 4: available to purchase, and it's an issue that is going 115 00:05:20,600 --> 00:05:22,480 Speaker 4: to be ongoing for a bit as well, we think. 116 00:05:23,360 --> 00:05:26,479 Speaker 2: So can I ask the basic question, which is, CoreWeave, 117 00:05:27,400 --> 00:05:30,880 Speaker 2: what do you do exactly? 
Joe mentioned the capital raise, 118 00:05:30,920 --> 00:05:33,320 Speaker 2: which I think has you valued at something like two 119 00:05:33,320 --> 00:05:37,719 Speaker 2: billion dollars, so congrats. But what exactly are you doing here? 120 00:05:38,160 --> 00:05:41,400 Speaker 4: Yeah, thank you. So CoreWeave is a specialized cloud service 121 00:05:41,400 --> 00:05:45,920 Speaker 4: provider that is focused on highly parallelizable workloads. So we 122 00:05:46,320 --> 00:05:50,440 Speaker 4: build and operate the world's most performant GPU infrastructure at 123 00:05:50,480 --> 00:05:54,760 Speaker 4: scale and predominantly serve three sectors. That's the artificial intelligence sector, 124 00:05:54,920 --> 00:05:58,520 Speaker 4: the media and entertainment sector, and the computational chemistry sector. 125 00:05:58,640 --> 00:06:04,320 Speaker 4: So we specialize in building this infrastructure at supercompute scale. 126 00:06:04,480 --> 00:06:07,560 Speaker 4: It's like quite literally, you know, a sixteen thousand GPU 127 00:06:07,760 --> 00:06:09,640 Speaker 4: fabric, and we can get into all the details and 128 00:06:09,680 --> 00:06:12,200 Speaker 4: how complex that is. But we build that so that 129 00:06:12,320 --> 00:06:16,280 Speaker 4: entities can come in and train these next generation foundation 130 00:06:16,440 --> 00:06:19,040 Speaker 4: machine learning models on. And you know, we found ourselves 131 00:06:19,080 --> 00:06:20,760 Speaker 4: in a spot where we can do that better than 132 00:06:20,880 --> 00:06:23,279 Speaker 4: literally anyone else in the market and do it on 133 00:06:23,320 --> 00:06:27,159 Speaker 4: a timeline that's faster. We're, I think, the only entity 134 00:06:27,200 --> 00:06:32,080 Speaker 4: with H100s available to clients at scale globally today. 135 00:06:32,800 --> 00:06:35,120 Speaker 2: So you have an actual basement full of 136 00:06:35,200 --> 00:06:38,960 Speaker 2: H100 chips. 
Well, can you talk to us, you know, 137 00:06:39,040 --> 00:06:42,560 Speaker 2: when you say infrastructure, we help clients build out the infrastructure, 138 00:06:42,640 --> 00:06:47,599 Speaker 2: help us conceptualize this. What does the infrastructure 139 00:06:48,080 --> 00:06:51,880 Speaker 2: for this type of AI actually look like? And how 140 00:06:51,880 --> 00:06:55,720 Speaker 2: does it differ to infrastructure for other types of large 141 00:06:55,720 --> 00:06:57,440 Speaker 2: scale technology projects? 142 00:06:57,920 --> 00:07:01,000 Speaker 4: Yeah, totally. So, you know, I think during the 143 00:07:01,080 --> 00:07:04,200 Speaker 4: last Nvidia quarterly earnings call, Jensen put this a 144 00:07:04,200 --> 00:07:06,839 Speaker 4: really great way in the Q and A section. He 145 00:07:06,960 --> 00:07:10,000 Speaker 4: said that we are at the first year of a 146 00:07:10,120 --> 00:07:13,760 Speaker 4: decade-long modernization of the data center, or like making 147 00:07:13,760 --> 00:07:16,640 Speaker 4: the data center intelligent. Right, you can kind of 148 00:07:16,640 --> 00:07:19,840 Speaker 4: suggest that the last generation, or the twenty tens 149 00:07:20,040 --> 00:07:23,280 Speaker 4: data center, was comprised of CPU compute, storage, and these 150 00:07:23,320 --> 00:07:26,840 Speaker 4: things that didn't really work together that intelligently. And the 151 00:07:26,880 --> 00:07:29,880 Speaker 4: way that Nvidia has positioned itself is to make 152 00:07:29,880 --> 00:07:32,320 Speaker 4: it a smart data center, that's like smart routing of 153 00:07:32,480 --> 00:07:36,160 Speaker 4: data packets of different pieces of infrastructure in there. That's 154 00:07:36,200 --> 00:07:39,680 Speaker 4: all focused on how do you expand the throughput and 155 00:07:39,720 --> 00:07:46,440 Speaker 4: communicability of and between pieces of infrastructure. 
Right, It's just 156 00:07:46,840 --> 00:07:50,840 Speaker 4: an amazingly different approach to data center deployments. And so 157 00:07:51,680 --> 00:07:54,160 Speaker 4: the way that we're building it, and we're working with 158 00:07:54,480 --> 00:07:58,360 Speaker 4: Nvidia infrastructure, we design everything to a DGX reference spec, 159 00:07:58,400 --> 00:08:01,160 Speaker 4: and DGX is Nvidia's, like, how do you draw the 160 00:08:01,200 --> 00:08:04,720 Speaker 4: most performance out of Nvidia infrastructure as possible, with all 161 00:08:04,720 --> 00:08:08,000 Speaker 4: the ancillary components associated with it. So all this stuff 162 00:08:08,040 --> 00:08:10,960 Speaker 4: is going into what's qualified as a Tier three or 163 00:08:11,000 --> 00:08:14,440 Speaker 4: a Tier four data center. We colocate within these things, 164 00:08:14,440 --> 00:08:17,520 Speaker 4: so we're not quite building in a basement, even though 165 00:08:17,800 --> 00:08:21,400 Speaker 4: like in our past history we certainly, you know, had 166 00:08:21,840 --> 00:08:24,640 Speaker 4: time doing that. But this is within, you know, just 167 00:08:25,120 --> 00:08:28,640 Speaker 4: amazing colocation sites that are operated by our partners such 168 00:08:28,640 --> 00:08:30,800 Speaker 4: as Switch, right. So a Tier three or a Tier four 169 00:08:30,920 --> 00:08:35,680 Speaker 4: site is something that's qualified based on its ability to 170 00:08:35,720 --> 00:08:39,480 Speaker 4: serve workloads with an extremely high uptime. So we're talking 171 00:08:39,520 --> 00:08:43,840 Speaker 4: like a ninety nine point nine nine percent uptime rate, and 172 00:08:43,880 --> 00:08:49,760 Speaker 4: that's guaranteed by its power redundancy, its Internet redundancy, and 173 00:08:49,800 --> 00:08:53,480 Speaker 4: its security, and then ultimately, like, its connectivity to the 174 00:08:53,559 --> 00:08:56,959 Speaker 4: Internet backbone. 
Right. So, like, as a first step, 175 00:08:57,360 --> 00:09:02,520 Speaker 4: you're housed within these data centers that are just critical 176 00:09:02,600 --> 00:09:06,880 Speaker 4: parts of the Internet infrastructure, and then from there you 177 00:09:06,920 --> 00:09:09,079 Speaker 4: start building out the servers within there. And I can 178 00:09:09,240 --> 00:09:10,160 Speaker 4: go into that detail. 179 00:09:10,600 --> 00:09:13,320 Speaker 1: So you mentioned, actually, I want to just sort 180 00:09:13,320 --> 00:09:15,480 Speaker 1: of define some terms. Can you just, real quickly before 181 00:09:15,520 --> 00:09:17,240 Speaker 1: we move on, Tier three, Tier four, 182 00:09:17,240 --> 00:09:18,080 Speaker 3: what do you mean by this? 183 00:09:18,880 --> 00:09:21,880 Speaker 4: Yeah. So Tier three, Tier four, this all goes back 184 00:09:21,920 --> 00:09:24,360 Speaker 4: to like the quality of the data center that you're 185 00:09:24,360 --> 00:09:26,760 Speaker 4: in. It's all about the reliability and uptime 186 00:09:26,880 --> 00:09:28,640 Speaker 4: that you should be able to achieve out of that 187 00:09:28,760 --> 00:09:32,600 Speaker 4: data center. It's another way to qualify the services around it. 188 00:09:32,600 --> 00:09:35,960 Speaker 4: It's like power. You get redundant power, right, like multiple 189 00:09:35,960 --> 00:09:39,880 Speaker 4: power services in case one goes offline, there's another one. 190 00:09:40,080 --> 00:09:44,320 Speaker 4: You get, you know, redundant cooling, you get redundant Internet connectivity. 191 00:09:44,360 --> 00:09:47,480 Speaker 4: It's all these services that like have extra fail-safes 192 00:09:47,840 --> 00:09:50,760 Speaker 4: that allow for you to operate at the highest 193 00:09:50,800 --> 00:09:52,400 Speaker 4: uptime and security level possible. 194 00:09:52,640 --> 00:09:54,760 Speaker 1: Is higher tier better? Like, tier three, four? 
Is that 195 00:09:54,800 --> 00:09:56,120 Speaker 1: better than Tier one and Tier two? 196 00:09:57,040 --> 00:09:57,760 Speaker 4: That's correct. 197 00:09:57,960 --> 00:10:01,400 Speaker 1: Okay, so quick follow-up question then. You know, we're 198 00:10:01,400 --> 00:10:03,280 Speaker 1: interested in, like, okay, where the rubber hits the road. 199 00:10:03,280 --> 00:10:08,040 Speaker 1: The scarcity is here. Let's say Tracy miraculously opens her 200 00:10:08,120 --> 00:10:10,400 Speaker 1: basement and there really is, like, you know, all these 201 00:10:10,440 --> 00:10:14,480 Speaker 1: pallets of these Nvidia chips. Is there capacity at 202 00:10:14,520 --> 00:10:16,440 Speaker 1: the data centers right now? She's like, you know what, 203 00:10:16,440 --> 00:10:19,120 Speaker 1: we want to colocate with you. You guys have great power, 204 00:10:19,679 --> 00:10:22,080 Speaker 1: pretty well connected to the internet. You have like good 205 00:10:22,120 --> 00:10:24,920 Speaker 1: security guards. It's operated twenty four seven. We 206 00:10:24,920 --> 00:10:27,000 Speaker 1: want to set something up. Like, is there space there? 207 00:10:28,240 --> 00:10:30,839 Speaker 4: Yeah, it's a fantastic question. It's a, it's an issue 208 00:10:30,840 --> 00:10:33,480 Speaker 4: that didn't really pop up until really in the last 209 00:10:33,559 --> 00:10:34,560 Speaker 4: eight weeks or so. 210 00:10:34,800 --> 00:10:37,760 Speaker 3: Oh, it's really happening that fast. 211 00:10:38,400 --> 00:10:39,760 Speaker 4: It's happening that fast, Joe. 212 00:10:39,880 --> 00:10:40,880 Speaker 3: And it's okay, so. 213 00:10:40,960 --> 00:10:43,440 Speaker 2: That's why we said the two-week lead time on 214 00:10:43,520 --> 00:10:45,080 Speaker 2: Nvidia was very important, Joe. 215 00:10:45,360 --> 00:10:48,400 Speaker 3: Yeah, you're right, you're right. Wow. Wait, what happened? 216 00:10:48,600 --> 00:10:48,800 Speaker 4: Wait? 
217 00:10:49,240 --> 00:10:52,600 Speaker 1: What happened? Describe sixteen weeks ago 218 00:10:52,720 --> 00:10:53,760 Speaker 3: versus eight weeks ago. 219 00:10:54,520 --> 00:10:57,720 Speaker 4: Sure, or even last year, right. So this is a space, 220 00:10:57,760 --> 00:11:02,240 Speaker 4: the data center space, the colocation space, that's been fairly chronically 221 00:11:02,360 --> 00:11:05,440 Speaker 4: underinvested in, because the hyperscalers just built out their 222 00:11:05,480 --> 00:11:09,680 Speaker 4: own data centers instead, right. But what's happened is the 223 00:11:09,760 --> 00:11:13,600 Speaker 4: infrastructure changed. The type of compute that we're putting in 224 00:11:13,640 --> 00:11:17,240 Speaker 4: these data centers, it's different than the last generation, right? 225 00:11:17,280 --> 00:11:20,800 Speaker 4: So we're predominantly focused on GPU compute instead of CPU 226 00:11:20,840 --> 00:11:25,439 Speaker 4: compute, and GPU compute is about four times more power 227 00:11:25,559 --> 00:11:29,839 Speaker 4: dense than CPU compute, and that throws the data center 228 00:11:29,960 --> 00:11:33,440 Speaker 4: planning into chaos, right? Because ultimately, let's say you have 229 00:11:33,480 --> 00:11:36,319 Speaker 4: a ten thousand square foot room in the data center, right, 230 00:11:36,360 --> 00:11:38,160 Speaker 4: and you have a certain amount of power, let's call it 231 00:11:38,160 --> 00:11:39,960 Speaker 4: one hundred units of power, that go into that ten 232 00:11:40,000 --> 00:11:44,079 Speaker 4: thousand square feet. 
Well, because I'm four times more power dense, 233 00:11:44,760 --> 00:11:47,320 Speaker 4: it means that now I take those hundred units of power, 234 00:11:47,400 --> 00:11:50,560 Speaker 4: but I only require about twenty five percent of that 235 00:11:50,640 --> 00:11:53,439 Speaker 4: data center footprint, or in other words, twenty five hundred 236 00:11:53,440 --> 00:11:56,400 Speaker 4: square feet within that ten thousand square foot footprint. So 237 00:11:56,800 --> 00:12:00,480 Speaker 4: that then leads to, like, not only is the space 238 00:12:00,760 --> 00:12:03,960 Speaker 4: in the data center being used inefficiently now, because you 239 00:12:04,240 --> 00:12:06,720 Speaker 4: theoretically have to run more power into the data center 240 00:12:06,800 --> 00:12:08,800 Speaker 4: to use that full ten thousand square feet due to 241 00:12:08,800 --> 00:12:12,360 Speaker 4: the power density delta, but now you have cooling issues, 242 00:12:12,960 --> 00:12:15,720 Speaker 4: right? Because you designed that footprint to be able to 243 00:12:15,720 --> 00:12:19,920 Speaker 4: cool ten thousand square feet spread out across that entire area. 244 00:12:20,000 --> 00:12:21,040 Speaker 4: But now you're dropping. 245 00:12:21,559 --> 00:12:23,640 Speaker 1: Sorry, I just want to back up because this is 246 00:12:23,880 --> 00:12:26,040 Speaker 1: extremely interesting, so I don't want to, I just want 247 00:12:26,080 --> 00:12:27,079 Speaker 1: to get this detail right. 248 00:12:27,640 --> 00:12:30,840 Speaker 3: Just sorry, just this, and then move on. 249 00:12:30,920 --> 00:12:34,439 Speaker 1: But let's say, given an X amount of power, at 250 00:12:34,480 --> 00:12:37,720 Speaker 1: one hundred units of power, what you're saying is that 251 00:12:37,800 --> 00:12:41,880 Speaker 1: with this next generation of compute, it now only gets, 252 00:12:42,120 --> 00:12:42,720 Speaker 1: that's now 
253 00:12:42,600 --> 00:12:44,720 Speaker 3: only sufficient for a quarter of the data center. 254 00:12:44,760 --> 00:12:48,040 Speaker 1: In other words, to power that whole space, 255 00:12:48,520 --> 00:12:50,800 Speaker 1: to then power the whole space, you really 256 00:12:50,840 --> 00:12:52,360 Speaker 1: would need like four X the power. 257 00:12:53,320 --> 00:12:57,160 Speaker 4: That's accurate. Okay. The complication really arises out of the 258 00:12:57,160 --> 00:13:00,400 Speaker 4: cooling that's required from that, right? So if you 259 00:13:00,400 --> 00:13:03,160 Speaker 4: imagine you can cool a ten thousand square foot space 260 00:13:03,160 --> 00:13:05,400 Speaker 4: and you designed for that, that's one thing. But now 261 00:13:05,440 --> 00:13:08,080 Speaker 4: if you have to cool in a much more dense area, 262 00:13:08,559 --> 00:13:12,400 Speaker 4: that's a different type of cooling requirement. And so that's 263 00:13:12,480 --> 00:13:15,719 Speaker 4: led to this issue where there's only a certain subset 264 00:13:15,960 --> 00:13:18,640 Speaker 4: of Tier three and four data centers across the US 265 00:13:19,120 --> 00:13:23,440 Speaker 4: that are currently designed for, or can quickly be 266 00:13:23,559 --> 00:13:27,880 Speaker 4: redesigned and changed, to be able to accommodate this new 267 00:13:27,960 --> 00:13:31,760 Speaker 4: power density issue. So now, not only, like, if you 268 00:13:31,840 --> 00:13:34,440 Speaker 4: had all those H100s in your basement, you 269 00:13:34,520 --> 00:13:36,480 Speaker 4: might not have a place to plug them in. And 270 00:13:37,320 --> 00:13:40,160 Speaker 4: that's become a pretty big problem for the industry very quickly, 271 00:13:40,160 --> 00:13:42,520 Speaker 4: and truly has only arisen in the last eight weeks 272 00:13:42,640 --> 00:13:45,120 Speaker 4: or so, and it's going to persist for a few quarters. 
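The power math in this exchange can be sketched in a few lines. This is a minimal illustration using the round numbers from the conversation (a ten thousand square foot room, one hundred units of power, GPU compute roughly four times more power dense than CPU compute); the variable names are ours, not anything from the transcript:

```python
# Power-density arithmetic from the conversation: a room provisioned for
# CPU-era compute gets refilled with GPU compute that is ~4x more power dense.

ROOM_SQFT = 10_000        # data center room designed for CPU compute
POWER_UNITS = 100         # power budget provisioned for that room
DENSITY_MULTIPLIER = 4    # GPU compute is roughly 4x more power dense

# The same power budget now supports only a quarter of the floor space...
usable_sqft = ROOM_SQFT / DENSITY_MULTIPLIER

# ...or, equivalently, filling the whole room would take 4x the power.
power_needed_full_room = POWER_UNITS * DENSITY_MULTIPLIER

print(f"floor space the 100 units can serve: {usable_sqft:.0f} sq ft")
print(f"power needed to fill the room: {power_needed_full_room} units")
```

The same quarter-of-the-footprint figure is why the cooling design breaks: heat that was engineered to dissipate over ten thousand square feet now concentrates in twenty-five hundred.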
273 00:13:46,040 --> 00:13:50,040 Speaker 2: So you were describing the difference between CPU and GPU. 274 00:13:50,600 --> 00:13:54,600 Speaker 2: How do you actually connect these newer types, or these 275 00:13:54,640 --> 00:13:58,600 Speaker 2: different types of chips, together? Because I imagine, you know, 276 00:13:58,760 --> 00:14:01,160 Speaker 2: old data centers, you just have a bunch of like 277 00:14:01,240 --> 00:14:04,520 Speaker 2: Ethernet cables or something like that. But for this type 278 00:14:04,559 --> 00:14:06,640 Speaker 2: of processing power, do you need something different? 279 00:14:08,000 --> 00:14:12,160 Speaker 4: That's exactly correct, Tracy. So the legacy, 280 00:14:12,280 --> 00:14:15,240 Speaker 4: the generalized compute data centers, are really what the hyperscalers 281 00:14:15,280 --> 00:14:19,560 Speaker 4: look like. You know, Amazon, Google, Microsoft, Oracle. They predominantly 282 00:14:19,640 --> 00:14:23,240 Speaker 4: use something that's called Ethernet to connect all the servers together. 283 00:14:23,280 --> 00:14:25,600 Speaker 4: And the reason you used that was, you know, you 284 00:14:25,600 --> 00:14:29,120 Speaker 4: don't really need to have high data throughput to connect 285 00:14:29,240 --> 00:14:31,040 Speaker 4: all these servers together, right? They just need to be 286 00:14:31,080 --> 00:14:33,040 Speaker 4: able to send some messages back and forth. They talk 287 00:14:33,080 --> 00:14:35,960 Speaker 4: to each other about what they're working on, but they're not, 288 00:14:36,480 --> 00:14:41,000 Speaker 4: you know, necessarily doing highly collaborative tasks that require moving 289 00:14:41,080 --> 00:14:44,720 Speaker 4: lots of data in between each other. That's changed. 290 00:14:45,080 --> 00:14:48,160 Speaker 4: So today what people are focused on and need to 291 00:14:48,200 --> 00:14:52,400 Speaker 4: build are these, effectively, supercomputers. 
Right, and so we refer 292 00:14:52,520 --> 00:14:56,000 Speaker 4: to the connectivity between them, the network between them, as 293 00:14:56,120 --> 00:14:59,560 Speaker 4: a fabric, right, it's called a network fabric. So if 294 00:14:59,600 --> 00:15:02,480 Speaker 4: we're building something to help train like the next 295 00:15:02,520 --> 00:15:07,600 Speaker 4: generation GPT model, typically clients are coming to us saying, hey, 296 00:15:07,640 --> 00:15:11,400 Speaker 4: I need a sixteen thousand GPU fabric of H100s. 297 00:15:11,960 --> 00:15:15,800 Speaker 4: So there's about eight GPUs that go into each server, 298 00:15:16,120 --> 00:15:18,960 Speaker 4: and then you have to run this connectivity between each 299 00:15:19,080 --> 00:15:21,240 Speaker 4: one of those servers. But it's now done in a 300 00:15:21,280 --> 00:15:25,480 Speaker 4: different way, to your point. So we're using an 301 00:15:25,520 --> 00:15:30,560 Speaker 4: Nvidia technology called InfiniBand, which has the highest data throughput, 302 00:15:30,680 --> 00:15:34,000 Speaker 4: to connect each of these devices together. And you know, 303 00:15:34,080 --> 00:15:38,560 Speaker 4: taking this sixteen thousand GPU cluster as an example, there's 304 00:15:38,600 --> 00:15:41,680 Speaker 4: two crazy numbers in here. One is that there are 305 00:15:41,960 --> 00:15:47,200 Speaker 4: forty eight thousand discrete connections that need to be made, 306 00:15:47,400 --> 00:15:50,280 Speaker 4: right, like plugging one thing in from one computer to 307 00:15:50,320 --> 00:15:54,680 Speaker 4: another computer. But there's lots of switches and routers that 308 00:15:54,720 --> 00:15:57,200 Speaker 4: are between there. 
But you need to do that forty eight 309 00:15:57,240 --> 00:16:03,120 Speaker 4: thousand times, and it takes over five hundred miles of 310 00:16:03,200 --> 00:16:07,200 Speaker 4: fiber optic cabling to do that successfully across the sixteen 311 00:16:07,240 --> 00:16:09,840 Speaker 4: thousand GPU cluster. And now again, you're doing that within 312 00:16:09,880 --> 00:16:11,800 Speaker 4: a small space with a ton of power density, with 313 00:16:11,840 --> 00:16:14,720 Speaker 4: a ton of cooling, and it's just a completely different 314 00:16:14,760 --> 00:16:17,960 Speaker 4: way to build this infrastructure. It's just because the requirements 315 00:16:17,960 --> 00:16:20,560 Speaker 4: have changed, right? Like, we've moved into this, like, this 316 00:16:20,720 --> 00:16:25,040 Speaker 4: area where we are designing next generation AI models, and 317 00:16:25,120 --> 00:16:27,840 Speaker 4: it requires a completely different type of compute, and it's 318 00:16:27,920 --> 00:16:31,320 Speaker 4: just, it's caught the whole sector by surprise, so much 319 00:16:31,360 --> 00:16:34,880 Speaker 4: so that, you know, it's really challenging to go procure 320 00:16:34,880 --> 00:16:38,240 Speaker 4: it at the hyperscalers today, because they didn't specialize in 321 00:16:38,280 --> 00:16:40,560 Speaker 4: building it. And that's, you know, where CoreWeave 322 00:16:40,560 --> 00:16:43,440 Speaker 4: comes in, is we only focus on building this type 323 00:16:43,480 --> 00:16:46,240 Speaker 4: of compute for clients. It's our specialty. We hire all 
We hire all 324 00:16:46,240 --> 00:16:48,400 Speaker 4: of our engineering around it, all of our research goes 325 00:16:48,440 --> 00:16:51,040 Speaker 4: into it, and it's you know, it's been a fantastic 326 00:16:51,040 --> 00:16:53,640 Speaker 4: spot to be but our goal at the end of 327 00:16:53,640 --> 00:16:54,760 Speaker 4: the day is just to be able to get this 328 00:16:54,840 --> 00:16:57,480 Speaker 4: infrastructure into the hands of end consumers so that they 329 00:16:57,480 --> 00:17:00,640 Speaker 4: can build the amazing AI companies that have ones looking 330 00:17:00,680 --> 00:17:16,439 Speaker 4: forward to using and incorporating to enterprises and software companies. 331 00:17:21,520 --> 00:17:25,560 Speaker 2: You know, you mentioned these special or purpose built connections 332 00:17:25,680 --> 00:17:28,480 Speaker 2: that Nvidia is making, and this kind of leads nicely 333 00:17:28,640 --> 00:17:31,760 Speaker 2: into my next question, which is what exactly is your 334 00:17:31,880 --> 00:17:37,879 Speaker 2: relationship with Nvidia and in order to provide this type 335 00:17:37,960 --> 00:17:42,280 Speaker 2: of service, you know, vast amounts of processing power that 336 00:17:42,440 --> 00:17:46,080 Speaker 2: is well suited to a particular type of technology in 337 00:17:46,119 --> 00:17:49,160 Speaker 2: this case AI, do you have to have a really 338 00:17:49,200 --> 00:17:51,919 Speaker 2: good relationship with Nvidia to make that work? Like do 339 00:17:52,000 --> 00:17:55,120 Speaker 2: you have to have special access to H one, hundreds 340 00:17:55,160 --> 00:17:56,840 Speaker 2: and other chips. 
341 00:17:57,840 --> 00:18:00,119 Speaker 4: It's a great question, and I'll try to answer 342 00:18:00,680 --> 00:18:03,159 Speaker 4: from Nvidia's perspective, and it goes a little bit back 343 00:18:03,200 --> 00:18:05,160 Speaker 4: to the answer I just provided as well, in that 344 00:18:06,200 --> 00:18:09,560 Speaker 4: I would think from Nvidia's seat, what's most important 345 00:18:09,720 --> 00:18:13,640 Speaker 4: is empowering end users of their compute to be able 346 00:18:13,640 --> 00:18:18,200 Speaker 4: to access their compute in the most performant variant possible 347 00:18:19,160 --> 00:18:21,440 Speaker 4: at scale, and to be able to access it quickly, right? 348 00:18:21,480 --> 00:18:23,000 Speaker 4: Like a new generation comes out, they want to be 349 00:18:23,000 --> 00:18:25,240 Speaker 4: able to get their hands on it, right? And we've 350 00:18:25,320 --> 00:18:29,200 Speaker 4: built CoreWeave around hitting every single one of those checkboxes, right?
351 00:18:29,200 --> 00:18:31,440 Speaker 4: We build it to DGX reference spec, we build it 352 00:18:31,560 --> 00:18:34,160 Speaker 4: at scale, and we bring it online on a timeline 353 00:18:34,200 --> 00:18:37,720 Speaker 4: that's, you know, within months of a next generation chipset launch, 354 00:18:37,840 --> 00:18:41,680 Speaker 4: as opposed to, you know, the more traditional legacy hyperscalers 355 00:18:41,680 --> 00:18:45,679 Speaker 4: that take quarters at a time. So us being in 356 00:18:45,720 --> 00:18:49,399 Speaker 4: a position to do that has enabled us fantastic 357 00:18:49,640 --> 00:18:53,600 Speaker 4: access within Nvidia, and we have a history of consistently 358 00:18:53,640 --> 00:18:56,960 Speaker 4: executing on exactly what we say we'll do, right? 359 00:18:57,000 --> 00:19:02,120 Speaker 4: We under promise and over deliver as a business, and 360 00:19:02,520 --> 00:19:04,480 Speaker 4: I think that's just put us in this place where 361 00:19:04,800 --> 00:19:08,720 Speaker 4: Nvidia has the confidence in allocating infrastructure to us, because 362 00:19:08,760 --> 00:19:10,680 Speaker 4: they know it's going to come online, they know it's 363 00:19:10,680 --> 00:19:14,240 Speaker 4: going to get to consumers faster than anyone else in 364 00:19:14,280 --> 00:19:15,760 Speaker 4: the market, and they know it's going to be delivered 365 00:19:15,760 --> 00:19:18,520 Speaker 4: in its most performant configuration that exists.
366 00:19:19,760 --> 00:19:22,359 Speaker 1: You know, I was thinking as I listened to some 367 00:19:22,400 --> 00:19:25,240 Speaker 1: of these answers, I keep having like these imaginings, 368 00:19:25,280 --> 00:19:29,520 Speaker 1: like, you know, there's probably like some random industrial company 369 00:19:29,600 --> 00:19:32,159 Speaker 1: that's like traded, like, you know, on the like S 370 00:19:32,200 --> 00:19:36,320 Speaker 1: and P four hundred that makes some cooling fluid whose 371 00:19:36,359 --> 00:19:38,080 Speaker 1: like sales are going to be up ten x. So 372 00:19:38,119 --> 00:19:40,040 Speaker 1: I'm like googling while we're talking, like, what is a 373 00:19:40,040 --> 00:19:42,000 Speaker 1: company that makes cooling fluid? Or who is 374 00:19:42,040 --> 00:19:44,000 Speaker 1: some company that's like really good at making these like 375 00:19:44,040 --> 00:19:48,360 Speaker 1: InfiniBand cables? Because it just... 376 00:19:46,840 --> 00:19:48,479 Speaker 3: Right, yeah, like, what are the... anyway. 377 00:19:48,560 --> 00:19:48,760 Speaker 4: Right? 378 00:19:48,920 --> 00:19:50,840 Speaker 1: But like, right, like, you know, there's going to be 379 00:19:50,880 --> 00:19:54,760 Speaker 1: some, yeah, tertiary plays that are like thirty x up. 380 00:19:54,960 --> 00:19:56,760 Speaker 1: But you know, I want to get a sense from 381 00:19:56,800 --> 00:20:00,879 Speaker 1: you of... so it's really changed a lot, and I 382 00:20:01,000 --> 00:20:02,600 Speaker 1: kind of, you know... in the last several months. 383 00:20:02,600 --> 00:20:04,080 Speaker 3: We could see it from Nvidia's results.
384 00:20:04,119 --> 00:20:08,040 Speaker 1: What you're describing, like, how big is the market getting? 385 00:20:08,119 --> 00:20:09,520 Speaker 1: And the way I think, you know, I know, like, 386 00:20:09,520 --> 00:20:13,040 Speaker 1: with AI, there's training, where they sort of build the 387 00:20:13,080 --> 00:20:15,199 Speaker 1: model, and then there's inference, and the inference is how 388 00:20:15,240 --> 00:20:17,600 Speaker 1: they spit out the results. Can you talk a little 389 00:20:17,600 --> 00:20:20,920 Speaker 1: bit about what you're seeing in terms of the growth 390 00:20:21,400 --> 00:20:24,760 Speaker 1: of both of those aspects of AI, which is bigger 391 00:20:25,000 --> 00:20:27,240 Speaker 1: and which is growing faster? And how do they compare 392 00:20:27,280 --> 00:20:29,720 Speaker 1: to, like, the size of the installed compute base that 393 00:20:29,760 --> 00:20:30,440 Speaker 1: already exists? 394 00:20:31,240 --> 00:20:34,760 Speaker 4: Oh, absolutely. So this is one of my favorite topics, 395 00:20:34,800 --> 00:20:37,280 Speaker 4: because it's just mind blowing, the scale that's going to 396 00:20:37,280 --> 00:20:41,719 Speaker 4: be needed to support AI and scale this infrastructure. So okay, 397 00:20:41,800 --> 00:20:45,600 Speaker 4: so today most of the funding that's going into the 398 00:20:45,600 --> 00:20:49,800 Speaker 4: AI space is to fund training next generation 399 00:20:50,600 --> 00:20:53,400 Speaker 4: foundation models, right? So when a company's raising a bunch 400 00:20:53,440 --> 00:20:55,119 Speaker 4: of money, at the end of the day, most of 401 00:20:55,119 --> 00:20:57,680 Speaker 4: that money is going into cloud compute to go train 402 00:20:57,760 --> 00:21:01,200 Speaker 4: this next generation foundation model, to build that intellectual property.
403 00:21:01,280 --> 00:21:03,360 Speaker 4: So they have this model and they can go bring 404 00:21:03,359 --> 00:21:06,800 Speaker 4: it into the inference market. And what I would say is, 405 00:21:07,280 --> 00:21:12,160 Speaker 4: we're having a supply demand issue, like a chip access 406 00:21:12,200 --> 00:21:17,280 Speaker 4: crunch, in the training phase, where in reality, the scale 407 00:21:17,440 --> 00:21:20,800 Speaker 4: of the inference market is where all the demand 408 00:21:21,000 --> 00:21:24,560 Speaker 4: truly is going to sit. So what I'd offer to 409 00:21:24,600 --> 00:21:28,720 Speaker 4: help contextualize that is, let's take, you know, there's some 410 00:21:28,760 --> 00:21:30,880 Speaker 4: well known models in the market today. Let's say 411 00:21:30,920 --> 00:21:35,080 Speaker 4: there's a pre-trained, in-market model, and it 412 00:21:35,160 --> 00:21:37,679 Speaker 4: took about, let's say, ten thousand A100s or 413 00:21:37,680 --> 00:21:39,479 Speaker 4: so to train. The A100 is the last generation 414 00:21:39,560 --> 00:21:41,400 Speaker 4: chip, but you know, it still applies in terms 415 00:21:41,440 --> 00:21:45,520 Speaker 4: of relative scale here. So that company that used ten 416 00:21:46,520 --> 00:21:50,840 Speaker 4: thousand to train their model, our understanding is they're going to need 417 00:21:50,880 --> 00:21:55,760 Speaker 4: about a million GPUs within one to two years of 418 00:21:55,840 --> 00:21:59,760 Speaker 4: launch to support the entire inference demand.
419 00:22:00,720 --> 00:22:04,440 Speaker 1: So you could train the model on ten thousand 420 00:22:04,440 --> 00:22:06,920 Speaker 1: of these chips, ten thousand of these systems, whatever they are, 421 00:22:07,119 --> 00:22:09,080 Speaker 1: and then if they're actually going to be in the 422 00:22:09,119 --> 00:22:13,160 Speaker 1: market and sell something, or provide some service to make 423 00:22:13,160 --> 00:22:13,720 Speaker 1: it worthwhile... 424 00:22:13,720 --> 00:22:15,040 Speaker 3: They're going to need a million. 425 00:22:15,680 --> 00:22:17,960 Speaker 4: A million, and I think that's just within the first two 426 00:22:18,000 --> 00:22:21,440 Speaker 4: years of launch. So, like, we're talking about something 427 00:22:21,440 --> 00:22:25,720 Speaker 4: that's going to continue growing afterwards. And so what does 428 00:22:25,720 --> 00:22:28,439 Speaker 4: a million GPUs mean? Right, so, you know, 429 00:22:28,440 --> 00:22:31,520 Speaker 4: I think it was like end of last year, all 430 00:22:31,560 --> 00:22:35,600 Speaker 4: the hyperscalers combined, right, Amazon, Google, Microsoft, Oracle, you can 431 00:22:35,600 --> 00:22:37,479 Speaker 4: throw CoreWeave in there, it was about, you know, 432 00:22:37,640 --> 00:22:42,400 Speaker 4: five hundred thousand GPUs globally available across those platforms. 433 00:22:42,720 --> 00:22:44,760 Speaker 4: I'd say end of this year, it'll be closer 434 00:22:44,800 --> 00:22:48,720 Speaker 4: to a million or so. But that's suggesting then that 435 00:22:48,920 --> 00:22:53,440 Speaker 4: one AI company with one model could consume the entire 436 00:22:53,520 --> 00:22:58,360 Speaker 4: global footprint of GPUs. And now you start to think, wait, 437 00:22:58,440 --> 00:23:01,040 Speaker 4: aren't there a bunch of other companies training these models 438 00:23:01,040 --> 00:23:03,760 Speaker 4: in market right now? And I would say, yes, there are.
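[Editor's note: the back-of-envelope arithmetic in this exchange can be sketched in a few lines of Python. This is a hypothetical illustration using only the round figures quoted in the conversation, ten thousand training chips, a million inference GPUs, and roughly five hundred thousand GPUs across the hyperscalers; none of these are measured data.]

```python
# Rough sketch of the scale comparison discussed above. All figures are the
# round numbers quoted in the conversation, not measured data.
training_gpus = 10_000        # ~10,000 A100s to train one model
inference_gpus = 1_000_000    # ~1 million GPUs within 1-2 years of launch
hyperscaler_supply = 500_000  # rough combined hyperscaler GPU footprint cited

# Inference demand dwarfs the training fleet by two orders of magnitude:
ratio = inference_gpus // training_gpus
print(f"Inference fleet is ~{ratio}x the training fleet")

# One model's inference demand versus the entire installed base:
share = inference_gpus / hyperscaler_supply
print(f"One model could consume ~{share:.0f}x the combined hyperscaler supply")
```

The point of the sketch is the ratio, not the absolute numbers: even if each round figure is off by a factor of two, inference demand still swamps both the training fleet and the installed base.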
439 00:23:03,920 --> 00:23:08,240 Speaker 4: So you can imply that there is, in the short term, 440 00:23:08,280 --> 00:23:12,480 Speaker 4: demand for several million GPUs just to support the 441 00:23:12,680 --> 00:23:17,000 Speaker 4: inference market. And there's just nowhere near enough 442 00:23:17,320 --> 00:23:20,639 Speaker 4: of this infrastructure globally, and it's going to be 443 00:23:20,680 --> 00:23:24,119 Speaker 4: a big challenge for the market as we exit this 444 00:23:24,200 --> 00:23:26,800 Speaker 4: training phase and move into the productization, or really just 445 00:23:26,840 --> 00:23:29,080 Speaker 4: the commercialization, of these models, like, how do you generate 446 00:23:29,119 --> 00:23:32,640 Speaker 4: revenue off them? And it's something that I don't 447 00:23:32,680 --> 00:23:36,440 Speaker 4: think many people truly understand, just the amount of scale 448 00:23:36,520 --> 00:23:39,800 Speaker 4: and construction that needs to take place. And now you 449 00:23:39,840 --> 00:23:42,119 Speaker 4: put that in the same framework of the data centers 450 00:23:42,119 --> 00:23:43,920 Speaker 4: that we were talking about, right? So there's this lack 451 00:23:43,960 --> 00:23:46,280 Speaker 4: of data center space, there's lack of chipset supply, like, 452 00:23:46,359 --> 00:23:49,920 Speaker 4: it's going to be an issue for years, as 453 00:23:49,960 --> 00:23:50,560 Speaker 4: we see it. 454 00:23:50,840 --> 00:23:53,800 Speaker 2: So when it comes to scale, you know, you keep 455 00:23:53,840 --> 00:23:57,679 Speaker 2: mentioning the hyperscalers, which is a great term, but 456 00:23:57,960 --> 00:24:02,560 Speaker 2: people like Amazon, Google, I guess, Microsoft, IBM, et cetera. 457 00:24:03,359 --> 00:24:07,560 Speaker 2: How quickly, or what is your impression of how quickly, 458 00:24:07,880 --> 00:24:11,159 Speaker 2: they are able to ramp up in this space?
Like 459 00:24:11,320 --> 00:24:15,200 Speaker 2: how fast could they react to some of the trends 460 00:24:15,240 --> 00:24:16,280 Speaker 2: that you've been outlining? 461 00:24:17,520 --> 00:24:20,600 Speaker 4: Yeah, so I can offer what I'm seeing today. You know, 462 00:24:20,760 --> 00:24:25,159 Speaker 4: the H100s started to be distributed globally to 463 00:24:26,359 --> 00:24:28,720 Speaker 4: all of us, right, like all the entities that have 464 00:24:28,800 --> 00:24:31,439 Speaker 4: these, you know, kind of upper tier relationships with Nvidia, 465 00:24:31,720 --> 00:24:35,080 Speaker 4: back in March, right? So we started getting the 466 00:24:35,080 --> 00:24:37,959 Speaker 4: infrastructure online in April, really scaling in May, and, 467 00:24:38,040 --> 00:24:40,160 Speaker 4: you know, we have builds going on at ten data 468 00:24:40,160 --> 00:24:42,719 Speaker 4: centers across the US right now, and we're delivering it 469 00:24:42,720 --> 00:24:46,680 Speaker 4: to clients. The guidance that we're seeing from the hyperscalers 470 00:24:47,240 --> 00:24:50,760 Speaker 4: is that they're not going to begin delivering scale access 471 00:24:51,080 --> 00:24:54,720 Speaker 4: to the H100 chipset until late Q three, 472 00:24:55,880 --> 00:24:57,920 Speaker 4: maybe mid Q four, and some of them are even 473 00:24:57,960 --> 00:25:01,159 Speaker 4: beginning to guide into Q one. And it's all driven 474 00:25:01,400 --> 00:25:04,040 Speaker 4: by the fact that this is just a different type 475 00:25:04,040 --> 00:25:07,919 Speaker 4: of compute that they're building relative to last generation, right? 476 00:25:07,920 --> 00:25:10,720 Speaker 4: You're no longer just running Ethernet, to your point, between 477 00:25:10,760 --> 00:25:13,360 Speaker 4: all these devices; you're not just plugging in CPU blades.
478 00:25:13,440 --> 00:25:15,720 Speaker 4: You're having to deal with, like, totally different data center 479 00:25:15,760 --> 00:25:20,240 Speaker 4: power density and cooling requirements. You're having to build supercomputers instead, 480 00:25:20,359 --> 00:25:24,119 Speaker 4: with five hundred miles of fiber and all these connections. 481 00:25:24,119 --> 00:25:26,480 Speaker 4: It's just a completely different way to build the cloud, 482 00:25:26,520 --> 00:25:29,760 Speaker 4: and it's taking them some time to catch up, because 483 00:25:29,760 --> 00:25:32,159 Speaker 4: you have to retrain entire organizations to do this. So, 484 00:25:32,960 --> 00:25:35,399 Speaker 4: you know, as of now, I'd say the direct answer 485 00:25:35,480 --> 00:25:38,920 Speaker 4: is three quarters after a chipset launch, but it's 486 00:25:38,920 --> 00:25:41,760 Speaker 4: seeming like it might take longer. And I think that's all 487 00:25:41,760 --> 00:25:45,720 Speaker 4: going to contribute to this just kind of slower ability 488 00:25:45,800 --> 00:25:50,760 Speaker 4: to scale infrastructure than what's being dictated by the adoption 489 00:25:50,920 --> 00:25:53,480 Speaker 4: rate of AI software, and it's going to lead to 490 00:25:53,480 --> 00:25:56,600 Speaker 4: this supply demand imbalance that will just last for a while. 491 00:25:57,640 --> 00:26:00,479 Speaker 2: You know, you keep mentioning, or we both keep mentioning, 492 00:26:00,520 --> 00:26:03,800 Speaker 2: the H100, for obvious reasons. But do you 493 00:26:03,840 --> 00:26:08,199 Speaker 2: look at other chips? Or what would happen to, you know, 494 00:26:08,280 --> 00:26:11,439 Speaker 2: your own business if, for instance, a new chip was 495 00:26:11,480 --> 00:26:14,920 Speaker 2: developed that could do the same thing or better than 496 00:26:15,080 --> 00:26:17,640 Speaker 2: an Nvidia H100?
Like, for instance, I hear 497 00:26:17,640 --> 00:26:19,919 Speaker 2: a lot of excitement about some of the stuff that 498 00:26:19,960 --> 00:26:25,040 Speaker 2: AMD is developing. And I'm not a chips expert, except 499 00:26:25,119 --> 00:26:29,159 Speaker 2: maybe when it comes to Fritos or Lays. But, like, 500 00:26:30,080 --> 00:26:33,840 Speaker 2: how big a difference would that make to you 501 00:26:34,200 --> 00:26:39,199 Speaker 2: if we suddenly saw a different chip manufacturer gain prominence 502 00:26:39,320 --> 00:26:39,720 Speaker 2: in AI? 503 00:26:40,800 --> 00:26:45,520 Speaker 4: Sure. So I'd offer kind of two broad responses. One, typically, 504 00:26:45,720 --> 00:26:50,080 Speaker 4: when you train a model, you're 505 00:26:50,119 --> 00:26:52,719 Speaker 4: going to use the same chips for inference on that 506 00:26:52,840 --> 00:26:55,520 Speaker 4: model as well, right? So GPT-4, for example, 507 00:26:55,600 --> 00:26:58,399 Speaker 4: was trained on A100s; they're predominantly going 508 00:26:58,480 --> 00:27:00,440 Speaker 4: to use A100s going forward, or you might 509 00:27:00,440 --> 00:27:03,400 Speaker 4: fit in some kind of newer generation hyper efficient chips 510 00:27:03,400 --> 00:27:06,199 Speaker 4: in there, but it's not like you need, quote, 511 00:27:07,000 --> 00:27:10,200 Speaker 4: a GPU with more VRAM on it, right? Like, you're 512 00:27:10,240 --> 00:27:12,439 Speaker 4: going to need your forty gig or your eighty 513 00:27:12,480 --> 00:27:15,879 Speaker 4: gig RAM chip because that's the size of the 514 00:27:15,880 --> 00:27:18,200 Speaker 4: model that you trained, right? You're not gonna need, 515 00:27:18,280 --> 00:27:20,680 Speaker 4: like, the next multiple generations.
You're not going to, like, really 516 00:27:20,680 --> 00:27:23,479 Speaker 4: be able to adopt them to change the efficiency of 517 00:27:23,760 --> 00:27:26,480 Speaker 4: serving that model. So what we view is that a 518 00:27:26,560 --> 00:27:30,560 Speaker 4: chip's lifespan is like this: its first two to three years 519 00:27:30,800 --> 00:27:34,240 Speaker 4: are spent training models, and then its next four to 520 00:27:34,359 --> 00:27:38,480 Speaker 4: five years are spent doing inference for those models that 521 00:27:38,560 --> 00:27:42,159 Speaker 4: it trained. And then within there as well, you do 522 00:27:42,200 --> 00:27:44,879 Speaker 4: this thing called fine tuning, which is updating the model 523 00:27:44,960 --> 00:27:47,199 Speaker 4: with new information, right? Like, how do you keep a 524 00:27:47,200 --> 00:27:49,560 Speaker 4: model, like, up to date with what's happened on 525 00:27:49,600 --> 00:27:53,399 Speaker 4: Twitter or what's happened in the media? Right, you 526 00:27:53,480 --> 00:27:55,680 Speaker 4: have to keep retraining it, right, and you'll use those 527 00:27:55,680 --> 00:27:58,080 Speaker 4: same chips to do that. But to your question on 528 00:27:58,240 --> 00:28:01,000 Speaker 4: other chipsets, this is something that we have 529 00:28:01,040 --> 00:28:05,000 Speaker 4: a particularly interesting view into, because we have, like, you know, 530 00:28:05,080 --> 00:28:07,720 Speaker 4: call it six hundred and fifty AI clients, right, and 531 00:28:07,760 --> 00:28:10,960 Speaker 4: we're having conversations with them daily to ensure that we're 532 00:28:11,000 --> 00:28:13,960 Speaker 4: meeting their scaling demands.
So it gives us a look 533 00:28:13,960 --> 00:28:17,480 Speaker 4: six to twelve months into the future at what type 534 00:28:17,480 --> 00:28:21,480 Speaker 4: of infrastructure they expect to need, and it's overwhelmingly that 535 00:28:21,840 --> 00:28:26,479 Speaker 4: people still want access to Nvidia chips. And the reason 536 00:28:26,560 --> 00:28:29,320 Speaker 4: for this is something that dates back, I think it's 537 00:28:29,359 --> 00:28:34,000 Speaker 4: nearly fifteen years, to when Nvidia and Jensen made the decision 538 00:28:34,040 --> 00:28:38,400 Speaker 4: to open up CUDA and to make this software set 539 00:28:38,600 --> 00:28:42,840 Speaker 4: accessible to the machine learning community. And, you know, today, 540 00:28:42,840 --> 00:28:44,640 Speaker 4: if you go to GitHub and you search a machine 541 00:28:44,680 --> 00:28:49,200 Speaker 4: learning project, they'll all reference CUDA drivers. And he's established 542 00:28:49,200 --> 00:28:55,320 Speaker 4: this utter dominance of ecosystem around his compute within the 543 00:28:55,440 --> 00:28:58,640 Speaker 4: ML space, really similar to, like, the x86 544 00:28:58,840 --> 00:29:02,360 Speaker 4: instruction set for CPUs versus ARM, right? Like, x86 545 00:29:02,400 --> 00:29:04,880 Speaker 4: is used predominantly. ARM has been trying 546 00:29:04,880 --> 00:29:07,400 Speaker 4: to find its way into the space for a while 547 00:29:07,480 --> 00:29:10,360 Speaker 4: now and it's just really struggled, because all the engineers 548 00:29:10,360 --> 00:29:13,680 Speaker 4: and developers are used to x86.
Similar to 549 00:29:13,800 --> 00:29:16,520 Speaker 4: how all the engineers and developers in the AI space 550 00:29:16,600 --> 00:29:20,280 Speaker 4: are used to using CUDA. So it's something that, like, 551 00:29:20,360 --> 00:29:25,320 Speaker 4: obviously AMD is highly incentivized to find a way into 552 00:29:25,360 --> 00:29:28,480 Speaker 4: the sector, but they just don't have the ecosystem, and 553 00:29:28,520 --> 00:29:30,760 Speaker 4: it's a huge moat to deal with. And, you know, 554 00:29:30,880 --> 00:29:35,280 Speaker 4: kudos to Nvidia for establishing themselves and having the patience 555 00:29:35,320 --> 00:29:37,720 Speaker 4: to stick with it and to continue to support that 556 00:29:37,760 --> 00:29:40,120 Speaker 4: community over the last fifteen years, and it's really 557 00:29:40,120 --> 00:29:43,040 Speaker 4: paying off for them in spades today. You know, if 558 00:29:43,080 --> 00:29:46,560 Speaker 4: the demand comes for that infrastructure at some point, 559 00:29:46,680 --> 00:29:49,960 Speaker 4: you know, we can run other pieces of infrastructure within 560 00:29:50,000 --> 00:29:54,000 Speaker 4: our data centers. But I also find that Nvidia has 561 00:29:54,080 --> 00:29:57,440 Speaker 4: such an advantage on the competition with not only its GPUs, 562 00:29:57,480 --> 00:30:00,600 Speaker 4: but all the components that support the GPUs, like 563 00:30:00,760 --> 00:30:04,400 Speaker 4: the InfiniBand fabric, that it's gonna be a 564 00:30:04,440 --> 00:30:08,320 Speaker 4: really difficult company to displace from the market in terms 565 00:30:08,360 --> 00:30:12,280 Speaker 4: of the best standard for AI infrastructure.
566 00:30:12,840 --> 00:30:14,680 Speaker 1: Can I ask a question? And I want 567 00:30:14,680 --> 00:30:17,000 Speaker 1: to ask this politely, because it's not intended to be 568 00:30:17,120 --> 00:30:20,560 Speaker 1: accusatory or anything like that, so I don't want you 569 00:30:20,560 --> 00:30:22,840 Speaker 1: to, you know, hear it that way. But, like, when 570 00:30:22,840 --> 00:30:29,200 Speaker 1: you're, like, talking about, like, hyperscalers and you're, like, you know, Amazon, Google, Microsoft, 571 00:30:29,200 --> 00:30:31,000 Speaker 1: and, you know, kind of CoreWeave, and it's like, okay, 572 00:30:31,000 --> 00:30:33,640 Speaker 1: those are trillion dollar companies and you're a two billion 573 00:30:33,680 --> 00:30:36,760 Speaker 1: dollar company. Like, why... I still don't think 574 00:30:36,800 --> 00:30:38,560 Speaker 1: I've, like, wrapped my head around it. And I know, 575 00:30:38,640 --> 00:30:41,000 Speaker 1: like, they're all talking about 576 00:30:40,760 --> 00:30:41,520 Speaker 3: AI, et cetera. 577 00:30:41,600 --> 00:30:43,360 Speaker 1: Like, can you still just, like, explain to me a 578 00:30:43,400 --> 00:30:46,080 Speaker 1: little bit, like, why aren't they just gonna, frankly, like, 579 00:30:46,200 --> 00:30:49,080 Speaker 1: steamroll you? Or, let's put it this way, 580 00:30:49,360 --> 00:30:52,160 Speaker 1: be able to... okay, maybe it'll take a few quarters 581 00:30:52,200 --> 00:30:55,840 Speaker 1: to re-evaluate things, but, like, you know, eventually this 582 00:30:56,040 --> 00:30:59,360 Speaker 1: just becomes this sort of de facto offering from these 583 00:30:59,400 --> 00:31:02,960 Speaker 1: big companies that have these huge cloud budgets that must 584 00:31:02,960 --> 00:31:05,680 Speaker 1: be orders of magnitude larger than yours. 585 00:31:06,320 --> 00:31:08,120 Speaker 4: Yeah.
Yeah, I would really love to be able to 586 00:31:08,160 --> 00:31:11,040 Speaker 4: have access to their cost of capital, that's for sure. 587 00:31:12,120 --> 00:31:15,480 Speaker 4: So look, the way I like to talk 588 00:31:15,480 --> 00:31:18,960 Speaker 4: about this is, we don't have a silver bullet necessarily, right? 589 00:31:18,960 --> 00:31:21,600 Speaker 4: I can't point to, like, a super secret piece of 590 00:31:21,640 --> 00:31:24,400 Speaker 4: technology that we put inside of our servers or anything 591 00:31:24,400 --> 00:31:27,680 Speaker 4: along those lines. But the way I like to broadly contextualize 592 00:31:27,680 --> 00:31:33,680 Speaker 4: it is by referencing another sector, and it's that, like, Ford 593 00:31:34,080 --> 00:31:37,000 Speaker 4: should be able to produce a Model Y, right? Like, 594 00:31:37,240 --> 00:31:39,280 Speaker 4: they have the budget, they have the people, they have 595 00:31:39,400 --> 00:31:43,920 Speaker 4: the decades of expertise. But in order to ask them 596 00:31:43,920 --> 00:31:46,320 Speaker 4: to produce a Model Y, you would have to ask 597 00:31:46,360 --> 00:31:50,560 Speaker 4: them to foundationally change the way that they produce a vehicle, 598 00:31:50,880 --> 00:31:55,280 Speaker 4: all the way from research to servicing, and that entire mechanism. 599 00:31:55,680 --> 00:31:58,120 Speaker 4: Like, it's a giant organization. Now you have to go 600 00:31:58,200 --> 00:32:01,240 Speaker 4: ask that huge organization and people to change the way 601 00:32:01,240 --> 00:32:03,960 Speaker 4: that they go about producing things.
602 00:32:04,000 --> 00:32:05,560 Speaker 1: And I get that, but just to push back a 603 00:32:05,600 --> 00:32:07,280 Speaker 1: little bit. And this is like a 604 00:32:07,280 --> 00:32:10,000 Speaker 1: theme that comes up in various flavors on 605 00:32:10,120 --> 00:32:12,800 Speaker 1: Odd Lots a lot, which is that, like, companies have 606 00:32:13,000 --> 00:32:16,600 Speaker 1: internal... it's really hard to replicate sort of, like, tacit 607 00:32:16,720 --> 00:32:20,280 Speaker 1: knowledge within a corporation. And we see that with companies 608 00:32:20,280 --> 00:32:22,680 Speaker 1: that make semiconductor equipment. We see that with companies that 609 00:32:22,720 --> 00:32:25,840 Speaker 1: make airplanes. We see that with real estate developers that 610 00:32:26,040 --> 00:32:28,479 Speaker 1: know how to turn an office building into a condo. 611 00:32:28,520 --> 00:32:30,479 Speaker 1: And so I think this is like a deep point. 612 00:32:31,200 --> 00:32:34,000 Speaker 1: But, you know, they are offering AI stuff. Like, I 613 00:32:34,000 --> 00:32:36,800 Speaker 1: can look at Google right now, like, there's Cloud AI, 614 00:32:37,160 --> 00:32:39,600 Speaker 1: and there's Azure AI, and they all have their announcements. 615 00:32:39,680 --> 00:32:41,480 Speaker 1: So I'm still trying to understand, like, what is it 616 00:32:42,080 --> 00:32:45,120 Speaker 1: that you're offering? All the hyperscalers, 617 00:32:45,240 --> 00:32:47,680 Speaker 1: they all say they have AI offerings. So what is 618 00:32:47,720 --> 00:32:49,920 Speaker 1: the difference between sort of, like, what you have and 619 00:32:50,000 --> 00:32:52,920 Speaker 1: what they say is, like, their, you know, AI compute platforms? 620 00:32:54,000 --> 00:32:56,320 Speaker 4: Absolutely, and this will really depend on how much technical 621 00:32:56,360 --> 00:32:59,360 Speaker 4: detail you'd like me to get into.
But broadly, through 622 00:33:00,120 --> 00:33:03,960 Speaker 4: infrastructure differentiation, like literally using different components to build our cloud, 623 00:33:04,360 --> 00:33:07,520 Speaker 4: and through software differentiation, we use different pieces of software 624 00:33:07,560 --> 00:33:11,400 Speaker 4: to operate and optimize our cloud, we're able to deliver 625 00:33:11,480 --> 00:33:15,240 Speaker 4: a product that's about forty to sixty percent more efficient 626 00:33:15,600 --> 00:33:18,000 Speaker 4: on a workload adjusted basis than what you find across 627 00:33:18,040 --> 00:33:21,440 Speaker 4: any of the hyperscalers. So, in other words, if you 628 00:33:21,440 --> 00:33:23,560 Speaker 4: were to take the same workload, or, like, go do 629 00:33:23,680 --> 00:33:27,880 Speaker 4: the same process, at a hyperscaler on the exact same 630 00:33:28,240 --> 00:33:32,360 Speaker 4: GPU compute versus CoreWeave, we're going to be forty 631 00:33:32,360 --> 00:33:36,040 Speaker 4: to sixty percent more efficient at doing that, because of 632 00:33:36,040 --> 00:33:39,520 Speaker 4: the way that we've configured everything relative to the hyperscalers. 633 00:33:39,520 --> 00:33:43,040 Speaker 4: And it comes back to this analogy of, like, why 634 00:33:43,080 --> 00:33:45,960 Speaker 4: Ford can't produce the Model Y. Again, like, they can. 635 00:33:46,120 --> 00:33:49,320 Speaker 4: These are trillion dollar companies we're talking about. To your point, 636 00:33:49,320 --> 00:33:51,160 Speaker 4: they have the budget, they have the personnel, and they 637 00:33:51,200 --> 00:33:54,200 Speaker 4: certainly have the motivation to do so. But, you know, 638 00:33:54,280 --> 00:33:57,280 Speaker 4: it's not just one singular thing they have to change. 639 00:33:57,280 --> 00:34:01,040 Speaker 4: It's a completely different way of building their business 640 00:34:01,200 --> 00:34:04,440 Speaker 4: that they would have to orchestrate.
And it's, what's the analogy, 641 00:34:04,760 --> 00:34:07,520 Speaker 4: however many miles it takes to turn an aircraft carrier, right? 642 00:34:07,560 --> 00:34:09,279 Speaker 4: Like, it's going to take them a while to 643 00:34:09,320 --> 00:34:12,160 Speaker 4: do that. And I think if they do get there 644 00:34:12,400 --> 00:34:14,799 Speaker 4: at some point, which, you know, I don't disagree with you, 645 00:34:14,920 --> 00:34:18,200 Speaker 4: they're certainly motivated to, it's going to have taken 646 00:34:18,239 --> 00:34:20,960 Speaker 4: them some time, literally years, to get there, and they're 647 00:34:21,000 --> 00:34:23,839 Speaker 4: going to look really similar to us. And meanwhile, I've 648 00:34:24,280 --> 00:34:27,640 Speaker 4: dominated market share and I've really established my product in 649 00:34:27,719 --> 00:34:30,440 Speaker 4: market, and I'll continue to differentiate myself 650 00:34:30,480 --> 00:34:32,160 Speaker 4: on the software side of the business as well. 651 00:34:49,120 --> 00:34:52,760 Speaker 2: Since we're on the topic of adaptation, can I ask about, 652 00:34:53,239 --> 00:34:56,319 Speaker 2: you know, your own evolution as a company? Because I 653 00:34:56,360 --> 00:35:00,160 Speaker 2: think I've read that you started out in Ethereum mining, 654 00:35:00,320 --> 00:35:03,719 Speaker 2: and at one point, I'm pretty sure, crypto mining was 655 00:35:03,800 --> 00:35:07,520 Speaker 2: a substantial, if not the biggest, portion of your business. 656 00:35:08,000 --> 00:35:14,000 Speaker 2: But you have clearly adapted or pivoted into this AI space. 657 00:35:14,080 --> 00:35:16,200 Speaker 2: So what has that been like? And can you maybe 658 00:35:16,239 --> 00:35:20,040 Speaker 2: describe some of the trends that you've seen over your history? 659 00:35:21,160 --> 00:35:24,319 Speaker 4: Yes, absolutely, and you're right.
We did start within the 660 00:35:24,320 --> 00:35:28,080 Speaker 4: cryptocurrency space back in twenty seventeen or so, and that 661 00:35:28,280 --> 00:35:32,279 Speaker 4: was spawned out of, just frankly, curiosity from a group 662 00:35:32,320 --> 00:35:36,640 Speaker 4: of former commodity traders. So myself, my two co-founders, 663 00:35:36,840 --> 00:35:39,920 Speaker 4: we ran hedge funds, we ran family offices, so we 664 00:35:40,000 --> 00:35:42,480 Speaker 4: traded in these energy markets. So we were always attracted 665 00:35:42,520 --> 00:35:45,520 Speaker 4: to supply demand mechanics. But what attracted us to 666 00:35:45,560 --> 00:35:49,800 Speaker 4: cryptocurrency was there's this arbitrage opportunity that was a permissionless 667 00:35:49,920 --> 00:35:52,400 Speaker 4: revenue stream, right? Like, I knew the cost of power, 668 00:35:52,800 --> 00:35:55,239 Speaker 4: I knew what the hardware could generate in terms of 669 00:35:55,280 --> 00:35:58,719 Speaker 4: revenue using a power input; thus it's effectively an 670 00:35:58,800 --> 00:36:02,880 Speaker 4: arbitrage, right? So we explored that. We had some of 671 00:36:02,880 --> 00:36:06,359 Speaker 4: the infrastructure operating literally in our basements, as you said. 672 00:36:09,200 --> 00:36:12,720 Speaker 4: Then that, like, quickly turned into scaling across warehouses, and 673 00:36:13,160 --> 00:36:16,520 Speaker 4: at some point in, I think it was twenty eighteen, 674 00:36:16,640 --> 00:36:21,560 Speaker 4: maybe late twenty eighteen, we were the largest Ethereum miner 675 00:36:21,880 --> 00:36:26,160 Speaker 4: in North America. We were operating over fifty thousand GPUs; 676 00:36:26,200 --> 00:36:29,880 Speaker 4: we represented over one percent of the Ethereum network.
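[Editor's note: the mining "arbitrage" framing described here, known power cost in, known mining revenue out, can be sketched as a one-function model. The specific dollar figures below are purely illustrative assumptions, not numbers from the conversation.]

```python
# Hedged sketch of the power-vs-revenue arbitrage described above.
# Illustrative inputs only; not figures quoted by the guest.

def daily_mining_margin(revenue_per_gpu_day: float,
                        gpu_watts: float,
                        power_price_per_kwh: float) -> float:
    """Daily profit per GPU: mining revenue minus 24 hours of power cost."""
    daily_kwh = gpu_watts / 1000 * 24  # watts -> kWh consumed per day
    return revenue_per_gpu_day - daily_kwh * power_price_per_kwh

# Hypothetical: $3/day mining revenue, a 300 W card, $0.05/kWh industrial power.
margin = daily_mining_margin(3.00, 300, 0.05)
print(f"Daily margin per GPU: ${margin:.2f}")  # $3.00 revenue minus $0.36 power
```

The "permissionless" point is that both sides of this spread are observable up front: power price from your utility contract, revenue from the network's payout rate, with no customer required.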
But 677 00:36:31,040 --> 00:36:33,400 Speaker 4: during that whole time, we just kept coming back to 678 00:36:33,440 --> 00:36:37,400 Speaker 4: the idea that there's no moat, there's no advantage that 679 00:36:37,840 --> 00:36:41,680 Speaker 4: we could create for ourselves relative to our competitors, right, Like, sure, 680 00:36:41,680 --> 00:36:44,160 Speaker 4: you could maybe focus on power price and just kind 681 00:36:44,200 --> 00:36:46,480 Speaker 4: of chase the cheapest power, but that just felt like 682 00:36:46,560 --> 00:36:49,080 Speaker 4: chasing to the bottom of the bucket, right. You know, 683 00:36:49,560 --> 00:36:51,200 Speaker 4: I think an area we could have gone into is 684 00:36:51,200 --> 00:36:53,640 Speaker 4: producing your own chips, right, because if you produce your 685 00:36:53,640 --> 00:36:56,359 Speaker 4: own chips and you run the mining equipment before anyone 686 00:36:56,400 --> 00:36:58,920 Speaker 4: else has access to it, then you have an advantage 687 00:36:58,920 --> 00:37:00,719 Speaker 4: for that period. But you know, we weren't going to 688 00:37:00,719 --> 00:37:04,200 Speaker 4: go design and fab our own chips. So what we kept 689 00:37:04,239 --> 00:37:08,440 Speaker 4: coming back to was this GPU compute. Man, what if 690 00:37:08,480 --> 00:37:10,520 Speaker 4: we could do other things, right? Like, what if 691 00:37:10,560 --> 00:37:15,240 Speaker 4: we could develop uncorrelated optionality into multiple 692 00:37:15,320 --> 00:37:19,200 Speaker 4: high growth markets? And those markets are where we 693 00:37:19,440 --> 00:37:22,760 Speaker 4: predominantly sit today: artificial intelligence, media and entertainment, 694 00:37:22,800 --> 00:37:27,040 Speaker 4: and computational chemistry. 
And the original thesis was, well, whenever 695 00:37:27,040 --> 00:37:31,360 Speaker 4: our compute isn't being allocated into those sectors, we'll just 696 00:37:31,400 --> 00:37:34,280 Speaker 4: have it mining cryptocurrency, and we'll build out this fantastic 697 00:37:34,320 --> 00:37:37,279 Speaker 4: company that has a one hundred percent utilization rate across the 698 00:37:37,280 --> 00:37:40,920 Speaker 4: infrastructure, because it could switch immediately from being released from 699 00:37:40,960 --> 00:37:43,600 Speaker 4: an AI workload into going back into the Ethereum network. 700 00:37:44,360 --> 00:37:47,319 Speaker 4: And we did get a brief glimpse of being able 701 00:37:47,360 --> 00:37:51,440 Speaker 4: to operate that way in twenty twenty one, as we 702 00:37:51,560 --> 00:37:55,320 Speaker 4: had our cloud live and we had AI clients in place, 703 00:37:55,640 --> 00:37:59,880 Speaker 4: but Ethereum mining effectively ended during the Merge in Q 704 00:38:00,200 --> 00:38:04,480 Speaker 4: three of twenty twenty two. But I'd say the other 705 00:38:04,560 --> 00:38:10,239 Speaker 4: thing that we never appreciated was the utter complexity of 706 00:38:10,320 --> 00:38:14,760 Speaker 4: running a CSP, forgetting about the software side of the business, 707 00:38:14,760 --> 00:38:16,680 Speaker 4: which in and of itself, you know, we spent about 708 00:38:16,719 --> 00:38:20,320 Speaker 4: four years developing the software to build a modern cloud, 709 00:38:20,360 --> 00:38:23,600 Speaker 4: to do infrastructure orchestration and actually be a cloud service provider. 710 00:38:24,200 --> 00:38:29,400 Speaker 4: The components themselves that the sector broadly used for crypto 711 00:38:29,480 --> 00:38:33,920 Speaker 4: mining were these retail grade GPUs, right, the kind of 712 00:38:33,960 --> 00:38:37,359 Speaker 4: things that you plug into your desktop to go play video games. 
713 00:38:37,600 --> 00:38:41,040 Speaker 3: They were like selling them on StockX. Yes, yes 714 00:38:41,480 --> 00:38:41,719 Speaker 3: it was. 715 00:38:41,760 --> 00:38:44,080 Speaker 4: It was crazy during that period to get your hands 716 00:38:44,080 --> 00:38:45,839 Speaker 4: on that infrastructure for crypto. 717 00:38:45,560 --> 00:38:48,560 Speaker 1: Mining, and all the video gamers hated the crypto people, 718 00:38:48,640 --> 00:38:50,319 Speaker 1: right, because they're like, I want to like play this 719 00:38:50,400 --> 00:38:52,320 Speaker 1: game, and they would like line up at, what is it, 720 00:38:52,920 --> 00:38:55,239 Speaker 1: GameStop and like the GeekWire shop and all 721 00:38:55,280 --> 00:38:57,560 Speaker 1: that or whatever it is, and they like couldn't get 722 00:38:57,560 --> 00:38:58,759 Speaker 1: it because you got it, not you. 723 00:38:58,920 --> 00:39:01,359 Speaker 3: But yeah, they're arbitraging. 724 00:39:01,040 --> 00:39:03,319 Speaker 1: Access to the chips first and getting more value out 725 00:39:03,360 --> 00:39:04,520 Speaker 1: of them so that you could bid them up. 726 00:39:05,440 --> 00:39:08,920 Speaker 4: We were certainly part of the problem, and that's absolutely correct. 727 00:39:09,600 --> 00:39:12,640 Speaker 4: But you know what we found ultimately is like those chips, 728 00:39:13,560 --> 00:39:16,479 Speaker 4: that's not what you run enterprise grade workloads on. 
That's 729 00:39:16,520 --> 00:39:19,319 Speaker 4: not what's supporting, you know, the largest AI companies in 730 00:39:19,360 --> 00:39:23,240 Speaker 4: the world. And starting in twenty nineteen, we stopped buying 731 00:39:23,400 --> 00:39:27,160 Speaker 4: any of those chips and only focused on purchasing enterprise 732 00:39:27,200 --> 00:39:30,960 Speaker 4: grade GPU chip sets, you know, Nvidia has 733 00:39:31,000 --> 00:39:34,200 Speaker 4: probably about twelve different SKUs that they offer, including 734 00:39:34,360 --> 00:39:38,440 Speaker 4: A100 and H100 chips, and really oriented 735 00:39:38,440 --> 00:39:42,200 Speaker 4: our business around it. So I don't expect 736 00:39:42,320 --> 00:39:46,760 Speaker 4: to see much repurposing of this kind of older retail 737 00:39:46,880 --> 00:39:49,640 Speaker 4: grade GPU equipment that was used for crypto mining, because 738 00:39:50,360 --> 00:39:52,200 Speaker 4: in crypto mining, you want to buy the cheapest chip 739 00:39:52,239 --> 00:39:54,480 Speaker 4: that can do the thing, right, that can 740 00:39:54,520 --> 00:39:57,960 Speaker 4: participate in crypto mining. But there's a huge difference in 741 00:39:58,040 --> 00:40:00,880 Speaker 4: price between a retail, plug-in-to-your-computer-so-you 742 00:40:00,880 --> 00:40:03,840 Speaker 4: -can-play-video-games chip and an enterprise grade chip you 743 00:40:03,880 --> 00:40:05,680 Speaker 4: can run twenty four seven. There's not going 744 00:40:05,719 --> 00:40:07,680 Speaker 4: to be downtime, you're going to have a low failure rate. 745 00:40:08,040 --> 00:40:10,640 Speaker 4: Like, there's a large technology difference and there's a large 746 00:40:10,640 --> 00:40:13,840 Speaker 4: pricing difference between those and the crypto miners. 
You only 747 00:40:13,960 --> 00:40:16,360 Speaker 4: needed the retail grade chip because, you know, if it 748 00:40:16,400 --> 00:40:19,239 Speaker 4: went down for two percent, five percent of the time 749 00:40:19,280 --> 00:40:21,759 Speaker 4: for a failure rate, that's not a big deal. But 750 00:40:22,080 --> 00:40:27,200 Speaker 4: the tolerance, the uptime tolerance, for these enterprise grade workloads 751 00:40:27,320 --> 00:40:30,719 Speaker 4: is measured in the thousandths of a percent, and it's 752 00:40:30,760 --> 00:40:33,960 Speaker 4: a different type of infrastructure, so we don't expect to 753 00:40:34,000 --> 00:40:38,040 Speaker 4: see the components really being reused, if at all. And 754 00:40:38,080 --> 00:40:40,640 Speaker 4: then the other variable, going back to the very beginning 755 00:40:40,640 --> 00:40:43,120 Speaker 4: of our conversation, are the data centers in which these 756 00:40:43,120 --> 00:40:47,319 Speaker 4: are housed. So Joe, to your point earlier, you know, 757 00:40:47,400 --> 00:40:49,840 Speaker 4: we sit within tier three, tier four data centers, and 758 00:40:49,880 --> 00:40:53,080 Speaker 4: that's basically the broad industry standard for being able 759 00:40:53,080 --> 00:40:57,560 Speaker 4: to serve these kinds of workloads. The crypto miners sat 760 00:40:57,600 --> 00:41:01,399 Speaker 4: within tier zero, tier one data centers, and these things 761 00:41:01,440 --> 00:41:05,200 Speaker 4: are like highly interruptible. They do like really interesting things, 762 00:41:05,280 --> 00:41:09,040 Speaker 4: like helping load balance the power markets in places like 763 00:41:09,080 --> 00:41:11,520 Speaker 4: ERCOT, right. Like, they'll shut down when power prices 764 00:41:11,560 --> 00:41:13,840 Speaker 4: go too high and it load balances the grid. But 765 00:41:14,960 --> 00:41:19,000 Speaker 4: enterprise AI workloads don't have a tolerance for that. 
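The uptime gap being described here can be made concrete by converting an availability percentage into allowed downtime per year. The tier figures below are the commonly cited Uptime Institute numbers (Tier III about 99.982%, Tier IV about 99.995%), and the mining-site figure is an illustrative stand-in for a highly interruptible facility:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability_pct):
    """Minutes of allowed downtime per year implied by an availability percentage."""
    return (100 - availability_pct) / 100 * MINUTES_PER_YEAR

# Interruptible crypto-mining site vs. enterprise data center tiers
for label, pct in [("mining site (~95%)", 95.0),
                   ("Tier III (99.982%)", 99.982),
                   ("Tier IV (99.995%)", 99.995)]:
    print(f"{label}: {downtime_minutes_per_year(pct):,.0f} min/yr")
```

The spread is the whole point: a site that can shut off for days when power prices spike is a different product from one allowed roughly an hour and a half of downtime a year.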
Their 766 00:41:19,080 --> 00:41:22,200 Speaker 4: tolerance, again, is measured in the thousandths of a percent 767 00:41:22,239 --> 00:41:26,280 Speaker 4: in terms of uptime. So not only does the infrastructure 768 00:41:26,680 --> 00:41:30,839 Speaker 4: not work from crypto mining, but the data centers that they're 769 00:41:31,040 --> 00:41:35,960 Speaker 4: built within don't work either, the way that they're currently configured. Now, 770 00:41:36,360 --> 00:41:39,680 Speaker 4: they could potentially convert their sites into tier three and 771 00:41:39,719 --> 00:41:42,279 Speaker 4: tier four data centers. I'll tell you that in and 772 00:41:42,320 --> 00:41:46,200 Speaker 4: of itself, that is an extremely challenging task, and it 773 00:41:46,200 --> 00:41:49,840 Speaker 4: takes a lot of proprietary knowledge and industry expertise to 774 00:41:49,880 --> 00:41:51,719 Speaker 4: do so. It's not just throwing a few fans in 775 00:41:51,760 --> 00:41:54,279 Speaker 4: a room and a few air conditioning units. 776 00:41:55,320 --> 00:41:57,840 Speaker 4: Honestly, it feels like walking into a spaceship. 777 00:41:57,960 --> 00:41:59,960 Speaker 3: Tracy, this is, this is an episode. 778 00:42:00,360 --> 00:42:02,560 Speaker 1: I don't know about you, Tracy, there's like another 779 00:42:02,840 --> 00:42:06,279 Speaker 1: like six follow-on episodes. No, seriously, like the whole 780 00:42:06,280 --> 00:42:10,400 Speaker 1: like data center market and the coolant and all, you know, 781 00:42:10,440 --> 00:42:13,600 Speaker 1: the electricity. Like, there's so many different rabbit holes you 782 00:42:13,640 --> 00:42:15,960 Speaker 1: could go down, just like with the infrastructure you're. 
783 00:42:15,800 --> 00:42:18,920 Speaker 2: Talking about, for sure. And I think the estimates that 784 00:42:19,000 --> 00:42:23,839 Speaker 2: I've seen on repurposing crypto GPUs, I think I've seen 785 00:42:23,880 --> 00:42:28,400 Speaker 2: like five to fifteen percent, so to Brannin's point. But 786 00:42:28,680 --> 00:42:31,000 Speaker 2: I'm sure, I'm sure there will be people out there 787 00:42:31,040 --> 00:42:31,600 Speaker 2: who try. 788 00:42:32,680 --> 00:42:35,560 Speaker 4: You've got to try, right, because what if it works, right? 789 00:42:35,960 --> 00:42:38,919 Speaker 4: If you can make that work, that's amazing. But we're 790 00:42:39,080 --> 00:42:41,920 Speaker 4: just, you know, coming as an entity that was an 791 00:42:41,920 --> 00:42:45,759 Speaker 4: extremely large operator of that infrastructure and has built, you know, 792 00:42:45,880 --> 00:42:48,920 Speaker 4: one of the largest cloud service providers for AI workloads, 793 00:42:49,719 --> 00:42:51,959 Speaker 4: I can tell you it's gonna be really, really 794 00:42:52,000 --> 00:42:54,239 Speaker 4: hard to do it, because we've had exposure in both 795 00:42:54,280 --> 00:42:56,160 Speaker 4: those places, and at the end of the day, they're 796 00:42:56,200 --> 00:42:59,719 Speaker 4: just very, very different businesses, both from the type of 797 00:43:00,080 --> 00:43:02,400 Speaker 4: engineers and developers that you employ to the infrastructure to 798 00:43:02,440 --> 00:43:03,719 Speaker 4: the data centers that you sit within. 799 00:43:04,920 --> 00:43:07,160 Speaker 1: So can I just go back, you know, yeah, just 800 00:43:07,239 --> 00:43:09,680 Speaker 1: sort of like big picture, and I guess it sort 801 00:43:09,680 --> 00:43:11,920 Speaker 1: of goes back to like who gets access to what? 802 00:43:12,000 --> 00:43:14,040 Speaker 3: Who gets access to chips? 
803 00:43:14,080 --> 00:43:17,279 Speaker 1: And I imagine that, you know, not only do you 804 00:43:17,320 --> 00:43:19,440 Speaker 1: need a lot of money to like build a relationship 805 00:43:19,520 --> 00:43:22,680 Speaker 1: with like Nvidia, you also probably need, like, you know, an 806 00:43:23,040 --> 00:43:25,399 Speaker 1: expectation you're going to be back the next year, back 807 00:43:25,440 --> 00:43:26,920 Speaker 1: the next year, back the next year, and that you 808 00:43:26,960 --> 00:43:30,040 Speaker 1: actually like have a relationship and so forth. But I have 809 00:43:30,080 --> 00:43:33,200 Speaker 1: to imagine like planning is really tough, when like, 810 00:43:33,280 --> 00:43:35,880 Speaker 1: you know, you have this sort of like AI machine 811 00:43:35,920 --> 00:43:39,960 Speaker 1: learning whatever like industry, and then something like ChatGPT 812 00:43:40,200 --> 00:43:43,040 Speaker 1: comes out and like suddenly everyone's like, oh, I need 813 00:43:43,080 --> 00:43:45,600 Speaker 1: to like have AI access. Talk to us about like 814 00:43:45,640 --> 00:43:47,759 Speaker 1: the sort of like challenge of just sort of like 815 00:43:48,120 --> 00:43:51,879 Speaker 1: planning to build when it can move that fast, and 816 00:43:51,960 --> 00:43:54,560 Speaker 1: like everyone is just sort of guessing how big this 817 00:43:54,640 --> 00:43:56,359 Speaker 1: market is going to be in two to three years. 818 00:43:57,080 --> 00:44:00,960 Speaker 4: Oh my gosh, it's been utterly insane, right, 819 00:44:01,040 --> 00:44:04,080 Speaker 4: Like, you know, back to last year, you know, 820 00:44:04,120 --> 00:44:06,200 Speaker 4: the supply chain and the ability to get your hands 821 00:44:06,200 --> 00:44:08,640 Speaker 4: on components. You know, you would call your OEM. 
822 00:44:08,960 --> 00:44:12,160 Speaker 4: The OEM is the original equipment manufacturer. Like, those are 823 00:44:12,160 --> 00:44:14,760 Speaker 4: the Supermicros, the Gigabytes of the world, who actually, 824 00:44:14,840 --> 00:44:17,200 Speaker 4: you know, build the nodes, build the servers, and you're 825 00:44:17,360 --> 00:44:20,040 Speaker 4: buying through them, and then they buy the GPUs 826 00:44:20,080 --> 00:44:23,239 Speaker 4: from Nvidia and build all the components together. Right. So 827 00:44:23,480 --> 00:44:25,160 Speaker 4: if you called them and said, hey, I need this 828 00:44:25,239 --> 00:44:29,880 Speaker 4: many nodes to be delivered, they'd say, great, we'll start assembling. 829 00:44:30,040 --> 00:44:32,360 Speaker 4: It takes us, you know, a week to two weeks to 830 00:44:32,360 --> 00:44:34,399 Speaker 4: get the parts in and assemble, and then it's another week 831 00:44:34,440 --> 00:44:36,160 Speaker 4: for them to ship them to you, and then it 832 00:44:36,160 --> 00:44:37,880 Speaker 4: takes us two to three weeks to plug them in 833 00:44:37,880 --> 00:44:40,799 Speaker 4: and put them online, get them going, right? Now, that's 834 00:44:40,920 --> 00:44:45,040 Speaker 4: completely changed, as you know, like all the supply chain 835 00:44:45,040 --> 00:44:48,000 Speaker 4: has gotten thrown off, so much so that, you know, 836 00:44:48,080 --> 00:44:52,480 Speaker 4: Nvidia is fully allocated, like they've fully sold out 837 00:44:52,480 --> 00:44:56,160 Speaker 4: their infrastructure through the end of the year. Right, you 838 00:44:56,200 --> 00:44:58,319 Speaker 4: can't call them, you can't call the OEM and just 839 00:44:58,320 --> 00:45:01,000 Speaker 4: say you need more compute chips like that. That's not possible. 
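In the "normal times" pipeline just described, the end-to-end lead time is simply the sum of each stage's range, roughly four to six weeks. A toy version of that arithmetic, with the stage durations taken from the description above (the stage names are paraphrases, not the OEMs' actual terms):

```python
# Stage lead times in weeks (low, high), per the pre-shortage process described
stages = {
    "parts and assembly at the OEM": (1, 2),
    "shipping to the data center":   (1, 1),
    "racking and bring-up":          (2, 3),
}

low = sum(lo for lo, hi in stages.values())
high = sum(hi for lo, hi in stages.values())
print(f"Total lead time: {low}-{high} weeks")
```

The contrast with the current situation is that none of these stages even starts until an allocation exists, which is what pushes delivery quotes out by entire quarters.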
840 00:45:01,160 --> 00:45:03,560 Speaker 4: So much so that, you know, when clients are coming 841 00:45:03,640 --> 00:45:06,279 Speaker 4: to us today and they're asking for like a four 842 00:45:06,320 --> 00:45:09,560 Speaker 4: thousand GPU cluster to be built for them, we're telling 843 00:45:09,560 --> 00:45:12,879 Speaker 4: them Q one, and increasingly it's moving towards Q two 844 00:45:12,880 --> 00:45:14,760 Speaker 4: at this point, because Q one is starting to get 845 00:45:15,160 --> 00:45:18,160 Speaker 4: booked up right now. So it's something where a lot 846 00:45:18,200 --> 00:45:20,879 Speaker 4: of time has been added to it. And then there's 847 00:45:20,920 --> 00:45:25,080 Speaker 4: other supply chain variables within there as well. You know, 848 00:45:25,120 --> 00:45:28,520 Speaker 4: we had a client earlier this year that we were 849 00:45:28,520 --> 00:45:30,560 Speaker 4: in negotiations with on the contract, and, you know, 850 00:45:30,680 --> 00:45:33,600 Speaker 4: we really wanted to perform well on timing for it. 851 00:45:34,200 --> 00:45:38,080 Speaker 4: So we knew, because of our orientation within the supply 852 00:45:38,160 --> 00:45:41,400 Speaker 4: chain, that there were some critical components that needed to 853 00:45:41,400 --> 00:45:44,800 Speaker 4: be ordered ahead of time so that it would reduce 854 00:45:45,120 --> 00:45:47,680 Speaker 4: our time to bring the infrastructure online. 
And at 855 00:45:47,680 --> 00:45:50,120 Speaker 4: that point it was the power supply units and the 856 00:45:50,200 --> 00:45:53,840 Speaker 4: fans for the nodes that the OEMs were putting together, 857 00:45:54,480 --> 00:45:56,719 Speaker 4: and if we hadn't done that, it would have 858 00:45:56,719 --> 00:45:59,600 Speaker 4: been another, I think, eight weeks on top of the 859 00:45:59,640 --> 00:46:03,279 Speaker 4: build process, just because not all the components would have 860 00:46:03,280 --> 00:46:06,359 Speaker 4: been there at the same time. So you're navigating this, 861 00:46:06,960 --> 00:46:10,960 Speaker 4: you know, within other kind of global supply chain disruptions 862 00:46:10,960 --> 00:46:13,160 Speaker 4: and inflation and all these other things that are going 863 00:46:13,160 --> 00:46:16,960 Speaker 4: on right now, and it's just an insanely complex task 864 00:46:17,080 --> 00:46:21,640 Speaker 4: that I think, you know, the generation of software developers 865 00:46:21,880 --> 00:46:25,759 Speaker 4: and founders that we're working with today were used to 866 00:46:25,840 --> 00:46:28,480 Speaker 4: being able to go to a cloud service provider and 867 00:46:28,560 --> 00:46:31,560 Speaker 4: just getting whatever infrastructure they needed. Right. You go to 868 00:46:31,600 --> 00:46:33,960 Speaker 4: your hyperscalers and say, all right, I need this, and 869 00:46:34,000 --> 00:46:37,680 Speaker 4: it was just there and available. And that just doesn't 870 00:46:37,719 --> 00:46:42,560 Speaker 4: exist today, because of the pace of demand growth that we've 871 00:46:42,600 --> 00:46:45,439 Speaker 4: been on and just the lack of this infrastructure's availability, 872 00:46:45,680 --> 00:46:49,080 Speaker 4: and it's just caught everyone by surprise. Again. 
You're 873 00:46:49,120 --> 00:46:54,120 Speaker 4: asking infrastructure to keep pace with the fastest adoption of 874 00:46:55,080 --> 00:46:57,760 Speaker 4: a new piece of software that's ever occurred. 875 00:46:58,480 --> 00:47:01,440 Speaker 1: Brannin McBee, CoreWeave, thank you so much. That was 876 00:47:01,440 --> 00:47:03,960 Speaker 1: a great conversation. Like I said, I always sort of 877 00:47:04,040 --> 00:47:06,520 Speaker 1: measure the quality of a conversation by, like, do I 878 00:47:06,520 --> 00:47:07,200 Speaker 1: get seven. 879 00:47:07,160 --> 00:47:09,279 Speaker 2: How many additional episode ideas, like, that is. 880 00:47:09,200 --> 00:47:11,520 Speaker 1: A pretty good proxy for a good conversation. Do you 881 00:47:11,520 --> 00:47:13,480 Speaker 1: get like eight ideas for future episodes? We got a 882 00:47:13,480 --> 00:47:15,920 Speaker 1: bunch there. So thank you so much for coming 883 00:47:15,719 --> 00:47:16,360 Speaker 3: on the podcast. 884 00:47:16,640 --> 00:47:18,319 Speaker 4: Always happy to chat with you guys, and thank you 885 00:47:18,360 --> 00:47:19,080 Speaker 4: for the invitation. 886 00:47:32,880 --> 00:47:33,280 Speaker 3: Tracy. 887 00:47:33,320 --> 00:47:35,920 Speaker 1: I want to find that company that makes the coolant 888 00:47:36,280 --> 00:47:39,200 Speaker 1: for the data... No, seriously, for the data centers, that 889 00:47:39,320 --> 00:47:43,040 Speaker 1: allows them to pack more compute and more energy into 890 00:47:43,080 --> 00:47:45,200 Speaker 1: this space, because it's like, it feels like they're probably 891 00:47:45,200 --> 00:47:46,239 Speaker 1: going to make a fortune over the. 892 00:47:46,160 --> 00:47:47,759 Speaker 2: Next... Joe, I think you just want to talk to 893 00:47:47,800 --> 00:47:51,640 Speaker 2: an HVAC contractor that's like installing... 894 00:47:51,880 --> 00:47:52,520 Speaker 3: Can we talk to it? 
895 00:47:52,920 --> 00:47:56,000 Speaker 1: Just some random... I love the... maybe it was 896 00:47:56,040 --> 00:47:59,200 Speaker 1: such a funny thought, like these like really advanced data centers, 897 00:47:59,280 --> 00:48:00,960 Speaker 1: like, oh, do we have like a local air conditioning 898 00:48:00,960 --> 00:48:01,759 Speaker 1: guy who can like. 899 00:48:01,880 --> 00:48:03,520 Speaker 2: But I imagine actually that would have been a good 900 00:48:03,600 --> 00:48:07,120 Speaker 2: question for Brannin, wouldn't it? Like the labor constraints in 901 00:48:07,600 --> 00:48:10,799 Speaker 2: building and adapting those data centers. But there was so 902 00:48:10,920 --> 00:48:12,960 Speaker 2: much in there. One of the things, one of the 903 00:48:12,960 --> 00:48:15,680 Speaker 2: things that I was thinking about was the point about how, well, okay, 904 00:48:15,960 --> 00:48:18,799 Speaker 2: if you train a model on one type of chip, 905 00:48:18,840 --> 00:48:21,600 Speaker 2: you're going to keep using that type of chip. And 906 00:48:21,640 --> 00:48:24,000 Speaker 2: I guess, I guess it's kind of obvious, but it 907 00:48:24,080 --> 00:48:28,560 Speaker 2: does suggest that there's some stickiness there. Like, if you 908 00:48:28,640 --> 00:48:32,239 Speaker 2: start out using an Nvidia H100, you're going 909 00:48:32,280 --> 00:48:33,880 Speaker 2: to keep using them, and in fact, you're going to 910 00:48:33,920 --> 00:48:37,800 Speaker 2: consume even more, because the processing power required, the compute 911 00:48:37,840 --> 00:48:40,880 Speaker 2: required for the inference, is higher than for the actual 912 00:48:40,920 --> 00:48:42,080 Speaker 2: initial training. 
913 00:48:42,320 --> 00:48:44,760 Speaker 1: Which I knew was the case because Stacy 914 00:48:44,800 --> 00:48:47,200 Speaker 1: said so as well, but I did not realize quite 915 00:48:47,200 --> 00:48:51,600 Speaker 1: the scale of like how much more. Like, okay, like 916 00:48:51,680 --> 00:48:53,400 Speaker 1: if you train a model and then we try to 917 00:48:53,440 --> 00:48:54,840 Speaker 1: take it to market, productize it. 918 00:48:55,000 --> 00:48:56,080 Speaker 3: As a business person. 919 00:48:55,920 --> 00:48:58,319 Speaker 1: I'd say, if we try to productize it, like how much 920 00:48:58,480 --> 00:49:02,680 Speaker 1: more computing power we would need for the inference aspect? 921 00:49:02,880 --> 00:49:04,520 Speaker 1: And meanwhile we have to keep training it all the 922 00:49:04,560 --> 00:49:05,960 Speaker 1: time to keep it up to date with fresh data and. 923 00:49:05,920 --> 00:49:06,480 Speaker 3: Stuff like that. 924 00:49:06,600 --> 00:49:10,320 Speaker 2: Yeah, totally. And the other thing that I was thinking about, 925 00:49:10,400 --> 00:49:13,839 Speaker 2: and again Stacy mentioned this in our discussion with him 926 00:49:13,840 --> 00:49:17,480 Speaker 2: as well, but this idea of Nvidia building a kind 927 00:49:17,520 --> 00:49:21,279 Speaker 2: of large ecosystem around the hardware. So you have the 928 00:49:21,360 --> 00:49:24,799 Speaker 2: CUDA software, which we talked about a little bit, 929 00:49:25,120 --> 00:49:28,640 Speaker 2: and then you have these sort of high touch partnerships 930 00:49:28,719 --> 00:49:32,520 Speaker 2: with companies like CoreWeave, where they're trying to make 931 00:49:32,560 --> 00:49:35,160 Speaker 2: it as easy as possible for you to use their 932 00:49:35,239 --> 00:49:38,440 Speaker 2: chips and set them up in a way that works 933 00:49:38,480 --> 00:49:42,880 Speaker 2: for you. 
It feels like, maybe it feels almost like 934 00:49:42,920 --> 00:49:44,120 Speaker 2: what Bitmain used to do. 935 00:49:44,120 --> 00:49:46,960 Speaker 3: Do you remember that? Uh, no. 936 00:49:46,920 --> 00:49:49,680 Speaker 2: Maybe they're still doing it anyway, but it does feel 937 00:49:49,719 --> 00:49:52,920 Speaker 2: like they're trying to build this like ecosystem moat around 938 00:49:53,080 --> 00:49:54,160 Speaker 2: the chip technology. 939 00:49:54,440 --> 00:49:54,680 Speaker 4: Yeah. 940 00:49:54,760 --> 00:49:56,319 Speaker 3: No, absolutely true. 941 00:49:56,360 --> 00:49:58,320 Speaker 1: And you know, I really do take that point that 942 00:49:58,440 --> 00:50:02,040 Speaker 1: Brannin made about like every company has a sort of 943 00:50:02,080 --> 00:50:04,759 Speaker 1: like knowledge that cannot be written down on a piece 944 00:50:04,800 --> 00:50:06,640 Speaker 1: of paper. Yeah, which is a Dan Wang point that 945 00:50:06,680 --> 00:50:08,759 Speaker 1: we've been talking about for years. And so it's like, 946 00:50:09,120 --> 00:50:10,840 Speaker 1: to your point, you know, like you have to like 947 00:50:11,080 --> 00:50:13,720 Speaker 1: use different types of connectors and different types of power 948 00:50:13,760 --> 00:50:16,040 Speaker 1: and all this stuff. Like, the ease with which any 949 00:50:16,040 --> 00:50:21,680 Speaker 1: sort of traditional cloud provider or data center provider can, 950 00:50:21,920 --> 00:50:23,880 Speaker 1: you know, sort of switch to it, it's like, you know, 951 00:50:24,000 --> 00:50:25,840 Speaker 1: it's not trivial even with lots of know-how. 
952 00:50:26,400 --> 00:50:29,760 Speaker 2: But coming away, I'm coming away from that conversation thinking, 953 00:50:29,840 --> 00:50:32,799 Speaker 2: like, the big question here is how quickly can those 954 00:50:32,840 --> 00:50:37,120 Speaker 2: other hyperscalers adapt, and like how big a moat can 955 00:50:37,239 --> 00:50:38,960 Speaker 2: Nvidia build around this business? 956 00:50:39,000 --> 00:50:41,160 Speaker 1: And then, I mean, the other question I have is 957 00:50:41,280 --> 00:50:44,040 Speaker 1: like, what if none of these companies make any money 958 00:50:44,400 --> 00:50:46,600 Speaker 1: building AI models? Like, I still don't think like that's 959 00:50:46,640 --> 00:50:48,800 Speaker 1: been proven, and so you can have this like huge 960 00:50:48,840 --> 00:50:50,520 Speaker 1: boom and like, hey, an AI 961 00:50:50,640 --> 00:50:52,920 Speaker 1: model is what we're going to build, like, you know, 962 00:50:53,120 --> 00:50:56,440 Speaker 1: Odd Lots GPT for like data stuff and whatever. But it 963 00:50:56,640 --> 00:50:59,759 Speaker 1: all is somewhat predicated on these companies being successful and 964 00:50:59,760 --> 00:51:02,040 Speaker 1: making a lot of money. And if they're not, and 965 00:51:02,040 --> 00:51:04,720 Speaker 1: if it turns out that like the monetization of AI 966 00:51:04,840 --> 00:51:08,320 Speaker 1: products is trickier than expected, then that also raises this 967 00:51:08,440 --> 00:51:09,640 Speaker 1: question about like how long. 968 00:51:09,480 --> 00:51:12,480 Speaker 2: This... Like, I'm sorry, Joe, so you're saying that tech 969 00:51:12,520 --> 00:51:16,319 Speaker 2: companies should make money? Is that it? Are you sure? 970 00:51:17,120 --> 00:51:20,640 Speaker 3: That's right. That's real post-ZIRP thinking, isn't it? 971 00:51:20,960 --> 00:51:23,000 Speaker 2: I know. All right, shall we leave it there? 972 00:51:23,040 --> 00:51:23,719 Speaker 3: Let's leave it there. 
973 00:51:23,800 --> 00:51:26,640 Speaker 2: This has been another episode of the Odd Lots podcast. 974 00:51:26,680 --> 00:51:29,279 Speaker 2: I'm Tracy Alloway. You can follow me on Twitter at 975 00:51:29,320 --> 00:51:30,520 Speaker 2: Tracy Alloway. 976 00:51:30,160 --> 00:51:32,480 Speaker 1: And I'm Joe Wisenthal. You can follow me on Twitter 977 00:51:32,560 --> 00:51:35,160 Speaker 1: at The Stalwart. Follow our guest Brannin McBee. 978 00:51:35,200 --> 00:51:36,680 Speaker 3: He's at Brannin McBee. 979 00:51:36,840 --> 00:51:40,279 Speaker 1: Follow our producers Carmen Rodriguez at Carmen Arman and Dashiell 980 00:51:40,280 --> 00:51:42,600 Speaker 1: Bennett at Dashbot. And check out all of the 981 00:51:42,600 --> 00:51:46,200 Speaker 1: Bloomberg podcasts under the handle at Podcasts, and for more 982 00:51:46,239 --> 00:51:49,359 Speaker 1: Odd Lots content, go to Bloomberg dot com slash odd 983 00:51:49,360 --> 00:51:52,560 Speaker 1: lots, where we have transcripts, a blog, and a newsletter 984 00:51:52,600 --> 00:51:55,760 Speaker 1: that comes out each Friday. And check out our Discord. 985 00:51:55,920 --> 00:51:58,920 Speaker 1: We have an AI channel and a semiconductor channel in 986 00:51:58,960 --> 00:52:01,280 Speaker 1: there, so people talk about these topics twenty four seven. 987 00:52:01,480 --> 00:52:02,120 Speaker 3: Maybe they'll be 988 00:52:02,040 --> 00:52:04,799 Speaker 1: talking about them in both of those rooms when this 989 00:52:04,880 --> 00:52:08,080 Speaker 1: comes out. Discord dot gg slash 990 00:52:07,719 --> 00:52:11,520 Speaker 2: odd lots. And if you enjoy Odd Lots, if you appreciate 991 00:52:11,520 --> 00:52:14,080 Speaker 2: conversations like the one we just had with Brannin McBee, 992 00:52:14,160 --> 00:52:17,960 Speaker 2: then please leave us a positive review on your favorite 993 00:52:18,080 --> 00:52:20,080 Speaker 2: podcast platform. Thanks for listening.