WEBVTT - Silicon States: US Data Center Expansion in the AI Era

0:00:00.160 --> 0:00:02.719
<v Speaker 1>This is Tom Rowlands-Rees and you're listening to Switched

0:00:02.759 --> 0:00:05.960
<v Speaker 1>On, the podcast brought to you by BNEF. The rapid

0:00:06.040 --> 0:00:09.400
<v Speaker 1>rise of energy intensive AI data centers is reshaping the

0:00:09.440 --> 0:00:13.000
<v Speaker 1>near term outlook for US power markets. Outpacing the energy

0:00:13.039 --> 0:00:16.720
<v Speaker 1>demand growth of EVs, hydrogen and all other demand classes

0:00:16.760 --> 0:00:19.599
<v Speaker 1>out to twenty thirty, data centers will account for eight

0:00:19.640 --> 0:00:22.959
<v Speaker 1>point six percent of US electricity demand by twenty thirty five.

0:00:23.120 --> 0:00:25.919
<v Speaker 1>That's almost twice as much as today. Largely owned and

0:00:25.960 --> 0:00:29.720
<v Speaker 1>operated by a few highly consolidated companies with very deep pockets,

0:00:29.920 --> 0:00:33.280
<v Speaker 1>this concentration of capital allows for rapid expansion and a

0:00:33.360 --> 0:00:37.760
<v Speaker 1>significant influence over future energy infrastructure investment. So what strategies

0:00:37.760 --> 0:00:40.839
<v Speaker 1>are these companies employing to optimize their data center rollout?

0:00:41.080 --> 0:00:43.520
<v Speaker 1>On today's show, I'm joined by BNEF's head of US

0:00:43.560 --> 0:00:47.720
<v Speaker 1>Power, Helen Kou, and US Power senior associate Nathalie Limandibhratha,

0:00:47.960 --> 0:00:50.640
<v Speaker 1>and together we discuss findings from their note US Data

0:00:50.640 --> 0:00:53.519
<v Speaker 1>Center Market Outlook: The Age of AI, which BNEF

0:00:53.560 --> 0:00:55.800
<v Speaker 1>clients can find at BNEF GO on the Bloomberg

0:00:55.880 --> 0:00:58.800
<v Speaker 1>Terminal or on BNEF.com. All right, let's get

0:00:58.800 --> 0:01:01.640
<v Speaker 1>to talking about the outlook for AI data centers with Helen

0:01:01.720 --> 0:01:15.959
<v Speaker 1>and Natalie. Helen, thank you for being here. Thanks Tom

0:01:16.240 --> 0:01:18.039
<v Speaker 1>and Natalie thanks for being here as well.

0:01:18.200 --> 0:01:18.880
<v Speaker 2>Thank you Tom.

0:01:19.600 --> 0:01:23.240
<v Speaker 1>So Natalie reports up to Helen, and Helen reports up

0:01:23.280 --> 0:01:25.560
<v Speaker 1>to me, and I'm not saying that to flex. I'm

0:01:25.560 --> 0:01:27.960
<v Speaker 1>saying it for a couple of reasons. One is, I'm

0:01:28.000 --> 0:01:31.840
<v Speaker 1>like super proud to have such smart people on my team,

0:01:32.160 --> 0:01:34.559
<v Speaker 1>and I'm also particularly proud of the work they've done

0:01:34.640 --> 0:01:39.640
<v Speaker 1>around data centers. But also this situation of this reporting

0:01:39.720 --> 0:01:41.720
<v Speaker 1>line means that I get to have catch ups with

0:01:41.800 --> 0:01:44.880
<v Speaker 1>them regularly, which has been pretty useful to me personally

0:01:45.120 --> 0:01:48.520
<v Speaker 1>because this question around AI and data centers has had

0:01:48.560 --> 0:01:50.760
<v Speaker 1>a lot of people talking, a lot of people have opinions,

0:01:51.000 --> 0:01:54.040
<v Speaker 1>and so I have often found myself in situations where

0:01:54.200 --> 0:01:57.960
<v Speaker 1>people are expressing their opinions, and in those situations, I've

0:01:58.040 --> 0:02:01.520
<v Speaker 1>developed this tactic to differentiate myself from the pack, which

0:02:01.600 --> 0:02:06.040
<v Speaker 1>is that in the situation, I just regurgitate whatever Helen

0:02:06.040 --> 0:02:08.400
<v Speaker 1>and Natalie last told me about data centers, and everyone

0:02:08.440 --> 0:02:11.200
<v Speaker 1>thinks that I'm really smart. So first off, let's start

0:02:11.200 --> 0:02:13.480
<v Speaker 1>just with the headline numbers. How much data center build

0:02:13.560 --> 0:02:16.680
<v Speaker 1>are we expecting in the US, according to the report

0:02:16.680 --> 0:02:17.919
<v Speaker 1>that we just published?

0:02:17.919 --> 0:02:21.440
<v Speaker 2>BNEF's latest outlook has data center demand more than

0:02:21.480 --> 0:02:25.080
<v Speaker 2>doubling from thirty five gigawatts today to close to eighty

0:02:25.120 --> 0:02:28.360
<v Speaker 2>gigawatts in twenty thirty five. This would account for close

0:02:28.360 --> 0:02:31.600
<v Speaker 2>to nine percent of total US electricity demand.

0:02:32.000 --> 0:02:34.480
<v Speaker 1>Wow, so we're expecting, let me just do the math

0:02:34.600 --> 0:02:38.440
<v Speaker 1>quickly, like forty five gigawatts ish, which for those of

0:02:38.480 --> 0:02:41.000
<v Speaker 1>you who are you know, maybe new to the energy space,

0:02:41.080 --> 0:02:45.000
<v Speaker 1>that's like twenty to thirty nuclear plants, and nuclear plants

0:02:45.000 --> 0:02:46.960
<v Speaker 1>are really really big and take a long time to build.
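
NOTE
Editor's aside: a quick sketch of the napkin math above. The reactor sizes used here are illustrative assumptions, not figures from the episode.

NOTE
```python
# Rough check of the host's arithmetic: data center demand growing from
# about thirty five gigawatts today to about eighty gigawatts by 2035.
today_gw, by_2035_gw = 35, 80
growth_gw = by_2035_gw - today_gw        # 45 GW of new demand
# Assumed large-plant sizes (illustrative only): 1.5 GW and 2.25 GW.
plants_small = round(growth_gw / 1.5)    # -> 30 plants
plants_large = round(growth_gw / 2.25)   # -> 20 plants
```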

0:02:46.960 --> 0:02:50.480
<v Speaker 1>That's a lot of demand. So we are forecasting some

0:02:50.840 --> 0:02:53.680
<v Speaker 1>astronomical amount of data center build. How do we compare

0:02:53.880 --> 0:02:56.680
<v Speaker 1>to everyone else that has an opinion on this topic?

0:02:57.320 --> 0:03:03.760
<v Speaker 3>We're relatively conservative. Relatively conservative? Yes, yeah, our overall

0:03:04.000 --> 0:03:07.800
<v Speaker 3>demand build is fairly low in terms of uptake relative

0:03:07.880 --> 0:03:09.160
<v Speaker 3>to other third parties.

0:03:09.639 --> 0:03:13.840
<v Speaker 1>Okay, so how come they are forecasting something so much

0:03:13.919 --> 0:03:16.120
<v Speaker 1>more aggressive than us, or how come we are so

0:03:16.240 --> 0:03:17.720
<v Speaker 1>much more conservative than them?

0:03:18.000 --> 0:03:22.239
<v Speaker 3>Well, we don't really know what our third party counterparts

0:03:22.320 --> 0:03:24.560
<v Speaker 3>do in terms of their forecast, but what we do

0:03:24.680 --> 0:03:27.960
<v Speaker 3>know is like how we forecast data centers, and our

0:03:28.040 --> 0:03:31.359
<v Speaker 3>focus was really to look at like how data centers

0:03:31.400 --> 0:03:34.280
<v Speaker 3>move from one stage to the next. So in our

0:03:34.320 --> 0:03:36.280
<v Speaker 3>project database, what we know is that we can

0:03:36.320 --> 0:03:39.520
<v Speaker 3>see data center stages. So we have like early stage,

0:03:39.800 --> 0:03:43.160
<v Speaker 3>which is basically anything that kind of just got announced.

0:03:43.200 --> 0:03:45.840
<v Speaker 3>We have projects that are committed, which is anything that

0:03:45.880 --> 0:03:49.000
<v Speaker 3>has some type of like land or permitting agreement, things

0:03:49.000 --> 0:03:52.400
<v Speaker 3>that are under construction and then live. And what we

0:03:52.480 --> 0:03:55.080
<v Speaker 3>did was we tracked how these data centers moved from

0:03:55.080 --> 0:03:57.200
<v Speaker 3>one stage to the next, and we looked at like

0:03:57.280 --> 0:03:59.920
<v Speaker 3>the probability of how these data centers moved.

0:03:59.760 --> 0:04:02.520
<v Speaker 1>from one stage to the next. So typically, how long

0:04:02.560 --> 0:04:05.000
<v Speaker 1>does it take for a data center, you know, from

0:04:05.680 --> 0:04:08.960
<v Speaker 1>early on in this pipeline to being commissioned, how long

0:04:08.960 --> 0:04:09.600
<v Speaker 1>would that take?

0:04:10.040 --> 0:04:13.360
<v Speaker 3>Yeah, So what we've found based on data between twenty

0:04:13.440 --> 0:04:15.920
<v Speaker 3>twenty and twenty twenty four is that it takes seven

0:04:16.040 --> 0:04:18.719
<v Speaker 3>years to build a data center, which is a really,

0:04:18.880 --> 0:04:19.680
<v Speaker 3>really long time.
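
NOTE
Editor's aside: a minimal sketch of the stage-transition idea described above. This is not BNEF's actual model; all capacities, probabilities, and durations below are invented for illustration.

NOTE
```python
import math
# Capacity advances early -> committed -> construction -> live, with an
# assumed probability of advancing out of each stage and a stage duration.
stages = ["early", "committed", "construction", "live"]
pipeline_gw = {"early": 30.0, "committed": 10.0, "construction": 5.0}
advance_prob = {"early": 0.4, "committed": 0.7, "construction": 0.95}
years_in_stage = {"early": 3, "committed": 2, "construction": 2}  # ~7 years end to end
# Expected capacity from today's pipeline that eventually goes live:
expected_live_gw = sum(
    gw * math.prod(advance_prob[s] for s in stages[stages.index(stage):-1])
    for stage, gw in pipeline_gw.items()
)
```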

0:04:19.960 --> 0:04:23.240
<v Speaker 1>Okay, So that's really interesting. So in a way, our

0:04:23.320 --> 0:04:25.760
<v Speaker 1>forecast we're saying with a fair amount of confidence because

0:04:25.800 --> 0:04:27.440
<v Speaker 1>you know, twenty thirty five is only just over a

0:04:27.480 --> 0:04:31.160
<v Speaker 1>decade away, and we have data on everything that's getting built,

0:04:31.240 --> 0:04:33.400
<v Speaker 1>and we know that it takes most of that decade

0:04:33.440 --> 0:04:37.440
<v Speaker 1>for it to get built. So anyone who's forecasting something

0:04:37.440 --> 0:04:39.480
<v Speaker 1>more aggressive than us either has a different view on

0:04:39.520 --> 0:04:41.479
<v Speaker 1>how long it takes to build data centers, or they

0:04:41.560 --> 0:04:43.960
<v Speaker 1>must have different data, or maybe they're using a completely

0:04:43.960 --> 0:04:46.680
<v Speaker 1>different methodology. But it's good to know that we're the

0:04:46.680 --> 0:04:48.920
<v Speaker 1>ones doing it right. So who's building all of these

0:04:49.000 --> 0:04:53.919
<v Speaker 1>data centers this sort of colossal volume of new demand.

0:04:54.360 --> 0:04:57.720
<v Speaker 2>Yeah, so the data center market is pretty concentrated. There's

0:04:57.760 --> 0:05:01.800
<v Speaker 2>two main types of owners. There's colo data centers who

0:05:01.960 --> 0:05:05.719
<v Speaker 2>have buildings with multiple tenants, and you have these self

0:05:05.760 --> 0:05:09.360
<v Speaker 2>build companies, which are typically your large tech companies or

0:05:09.440 --> 0:05:14.440
<v Speaker 2>hyperscalers is what they're usually referred to. And the hyperscalers

0:05:14.480 --> 0:05:19.640
<v Speaker 2>Google, Amazon, Microsoft, and Meta account for close to fifty

0:05:19.640 --> 0:05:23.560
<v Speaker 2>percent of total operating capacity today and this is only

0:05:23.600 --> 0:05:26.960
<v Speaker 2>set to grow. They're building much larger campuses close to

0:05:27.040 --> 0:05:31.640
<v Speaker 2>gigawatt scale. Amazon has multiple gigawatt data center campuses in

0:05:31.720 --> 0:05:36.680
<v Speaker 2>development in Virginia. Meta has another two gigawatts in Louisiana,

0:05:36.960 --> 0:05:40.039
<v Speaker 2>and just as a point of reference, in the last decade,

0:05:40.120 --> 0:05:43.880
<v Speaker 2>data centers have typically been in the tens of megawatts.

0:05:44.279 --> 0:05:46.680
<v Speaker 2>So really, as we're pushing through these hundreds of megawatts

0:05:46.680 --> 0:05:50.560
<v Speaker 2>and gigawatt sizes, the rate of uptake will be much

0:05:50.720 --> 0:05:51.960
<v Speaker 2>faster and larger.

0:05:52.240 --> 0:05:54.680
<v Speaker 1>Okay, And so just just to make sure I've understood

0:05:54.800 --> 0:05:58.640
<v Speaker 1>overall correctly, those four companies are fifty percent of the

0:05:58.760 --> 0:06:02.120
<v Speaker 1>data center build today, but we think that there's going

0:06:02.160 --> 0:06:04.440
<v Speaker 1>to be even more because they're the companies that are

0:06:04.440 --> 0:06:07.400
<v Speaker 1>building these really big data centers that are so much

0:06:07.400 --> 0:06:10.360
<v Speaker 1>of what we're expecting. So you've already alluded to these

0:06:10.440 --> 0:06:12.840
<v Speaker 1>data centers are maybe bigger than the ones we've seen

0:06:12.880 --> 0:06:15.000
<v Speaker 1>in the past, that's the trend. But in what other

0:06:15.040 --> 0:06:17.719
<v Speaker 1>ways are the data centers that we're expecting different to

0:06:17.800 --> 0:06:19.960
<v Speaker 1>the ones that we've seen in the past.

0:06:20.680 --> 0:06:23.279
<v Speaker 3>Yeah, before we jump into that, I think it's important

0:06:23.279 --> 0:06:26.560
<v Speaker 3>to understand how BNEF categorizes data centers.

0:06:26.800 --> 0:06:30.760
<v Speaker 3>So we categorize data centers in three different ways. First, size,

0:06:30.800 --> 0:06:35.200
<v Speaker 3>which Natalie had alluded to: retail, wholesale, and hyperscale.

0:06:35.320 --> 0:06:37.359
<v Speaker 3>So that's just based on the project size of a

0:06:37.440 --> 0:06:40.640
<v Speaker 3>data center. And then there's operator type, which is the

0:06:40.680 --> 0:06:45.280
<v Speaker 3>ownership, so either self build or colocation, which Natalie

0:06:45.320 --> 0:06:50.120
<v Speaker 3>had already explained. And then there's workload, So workload is

0:06:50.200 --> 0:06:54.080
<v Speaker 3>based on just the computing process of a data center,

0:06:54.400 --> 0:06:58.040
<v Speaker 3>and there are many different types of workload from cloud

0:06:58.279 --> 0:07:02.520
<v Speaker 3>or enterprise, telecom, crypto mining, and that determines a lot

0:07:02.600 --> 0:07:06.960
<v Speaker 3>about the data center's overall infrastructure and then also their

0:07:07.040 --> 0:07:09.240
<v Speaker 3>overall load and power consumption.

0:07:09.760 --> 0:07:11.720
<v Speaker 1>So then, I mean one of the other things that

0:07:11.760 --> 0:07:13.720
<v Speaker 1>you've spoken to me about is when we're talking about

0:07:13.760 --> 0:07:16.840
<v Speaker 1>AI data centers, that there's two main flavors, So can

0:07:16.880 --> 0:07:18.920
<v Speaker 1>you just talk me through those as well.

0:07:19.560 --> 0:07:23.400
<v Speaker 2>Within AI, we mainly branch it out in two main workloads,

0:07:23.440 --> 0:07:27.280
<v Speaker 2>AI training and AI inference. AI training is processing a

0:07:27.360 --> 0:07:29.720
<v Speaker 2>large amount of data in order to train these large

0:07:29.800 --> 0:07:33.600
<v Speaker 2>language models, and AI inference is taking those already trained

0:07:33.720 --> 0:07:36.520
<v Speaker 2>models for real time applications in use, like when you're

0:07:36.600 --> 0:07:40.160
<v Speaker 2>querying ChatGPT. And those two workloads also have

0:07:36.600 --> 0:07:40.160
<v Speaker 2>different location constraints for the data center itself. AI inference,

0:07:46.560 --> 0:07:50.840
<v Speaker 2>since they're theoretically interacting with the end user in real time,

0:07:51.120 --> 0:07:54.600
<v Speaker 2>they'll care more about latency and location parameters to that

0:07:54.760 --> 0:07:57.960
<v Speaker 2>end user. AI training a lot of that processing happens

0:07:57.960 --> 0:08:01.040
<v Speaker 2>on site, so theoretically they could be more flexible on

0:08:01.080 --> 0:08:04.520
<v Speaker 2>where they locate, and they could follow where there's available

0:08:04.560 --> 0:08:06.480
<v Speaker 2>power or other constraints.

0:08:06.840 --> 0:08:09.480
<v Speaker 1>Okay, I mean you use the word theoretically there, which

0:08:09.480 --> 0:08:11.160
<v Speaker 1>maybe is doing a lot of work. And we'll come

0:08:11.160 --> 0:08:14.560
<v Speaker 1>back to where people are actually building data centers later on,

0:08:14.560 --> 0:08:17.160
<v Speaker 1>because I do have a question about that. But roughly,

0:08:17.240 --> 0:08:20.200
<v Speaker 1>do we have an idea of what proportion

0:08:20.320 --> 0:08:24.080
<v Speaker 1>of the data center build in the pipeline is for AI

0:08:24.240 --> 0:08:26.400
<v Speaker 1>and what proportion of it is training and what proportion

0:08:26.520 --> 0:08:27.360
<v Speaker 1>of it is inference.

0:08:27.960 --> 0:08:32.240
<v Speaker 2>That's actually very tough to ascertain exactly what the split is.

0:08:32.280 --> 0:08:35.440
<v Speaker 2>If we look at a large gigawatt campus, some of

0:08:35.480 --> 0:08:37.800
<v Speaker 2>their buildings could be used for training today, but it

0:08:37.800 --> 0:08:40.200
<v Speaker 2>could be used for inference in the future. Similarly, we

0:08:40.280 --> 0:08:44.400
<v Speaker 2>talked about different owner types. A colocation building, they

0:08:44.440 --> 0:08:47.520
<v Speaker 2>could have, you know, multiple tenants, and unless you know

0:08:47.559 --> 0:08:50.320
<v Speaker 2>exactly who the tenant is and what type of workloads

0:08:50.320 --> 0:08:52.920
<v Speaker 2>they're running, it's also difficult to know if it is

0:08:52.960 --> 0:08:55.839
<v Speaker 2>for AI training or inference. But we can infer based

0:08:55.880 --> 0:08:57.280
<v Speaker 2>on where they're getting built.

0:08:57.679 --> 0:09:00.679
<v Speaker 1>But these distinctions, I mean, what I'm hearing here is

0:09:00.880 --> 0:09:03.640
<v Speaker 1>that it's not just that nobody tells us which it

0:09:03.720 --> 0:09:05.760
<v Speaker 1>is that makes it a little bit of a gray area.

0:09:05.800 --> 0:09:09.000
<v Speaker 1>It's that actually even a data center itself might sometimes

0:09:09.200 --> 0:09:11.000
<v Speaker 1>at one point in its life be doing one thing

0:09:11.040 --> 0:09:12.520
<v Speaker 1>and at a different point in its life be doing

0:09:12.559 --> 0:09:13.040
<v Speaker 1>something else.

0:09:13.360 --> 0:09:15.880
<v Speaker 3>I think for power folks, I often try to like

0:09:16.000 --> 0:09:18.720
<v Speaker 3>frame it like a data center is very similar to

0:09:18.760 --> 0:09:21.840
<v Speaker 3>a battery. Like batteries can do multiple different types of

0:09:21.960 --> 0:09:25.720
<v Speaker 3>energy services or ancillary services, a data center can do

0:09:25.800 --> 0:09:29.000
<v Speaker 3>multiple different types of workload. It can be doing AI

0:09:29.040 --> 0:09:32.760
<v Speaker 3>training or cloud. As long as the configuration is correct,

0:15:32.840 --> 0:15:34.560
<v Speaker 3>it can do those types of workload.

0:09:34.679 --> 0:09:37.000
<v Speaker 1>Got it. So just because you've optimized one to do one

0:09:37.000 --> 0:09:39.839
<v Speaker 1>thing doesn't mean that it can't do the other thing.

0:09:40.120 --> 0:09:45.360
<v Speaker 3>Yeah, And particularly in colocation data centers where basically these

0:09:45.360 --> 0:09:48.720
<v Speaker 3>companies are renting out their IT servers to tenants, those

0:09:48.760 --> 0:09:51.920
<v Speaker 3>tenants are probably doing different types of workload. There can

0:09:52.000 --> 0:09:54.360
<v Speaker 3>be multiple applications in a data center.

0:09:54.640 --> 0:09:56.800
<v Speaker 2>Yeah, and workloads, which is often why you see in

0:09:56.840 --> 0:10:00.079
<v Speaker 2>colocation that they're optimizing for everything because they don't know

0:10:00.120 --> 0:10:03.720
<v Speaker 2>who their tenant is. I will add, though, in terms

0:10:03.720 --> 0:10:07.199
<v Speaker 2>of AI data centers that they are quite different from

0:10:07.280 --> 0:10:10.000
<v Speaker 2>data centers today. We already talked that these data centers

0:10:10.000 --> 0:10:12.280
<v Speaker 2>are getting larger. A lot of that has to do

0:10:12.320 --> 0:10:15.360
<v Speaker 2>with the GPUs in those servers being much more...

0:10:15.400 --> 0:10:17.280
<v Speaker 1>Can you just... what are GPUs?

0:10:17.040 --> 0:10:22.520
<v Speaker 2>Graphics processing units. They're similar to CPUs, central

0:10:22.520 --> 0:10:26.640
<v Speaker 2>processing units, but they're very specialized and they're cooler.

0:10:28.080 --> 0:10:30.719
<v Speaker 1>I think that we've got to the level of understanding.

0:10:30.920 --> 0:10:32.839
<v Speaker 1>I'm good with it. So it's like it's like a chip,

0:10:32.960 --> 0:10:34.040
<v Speaker 1>but it's a different kind of chip.

0:10:34.120 --> 0:10:37.320
<v Speaker 2>Yeah, it's very good at parallel processing, which is optimized

0:10:37.360 --> 0:10:40.440
<v Speaker 2>for training large language models. I'm sure lots of people

0:10:40.440 --> 0:10:43.080
<v Speaker 2>have heard of Nvidia and their GPUs. A lot of

0:10:43.080 --> 0:10:46.439
<v Speaker 2>tech companies are also building their AI accelerators, which are

0:10:46.520 --> 0:10:50.720
<v Speaker 2>specialized chips for these models. So a lot of the

0:10:50.760 --> 0:10:54.520
<v Speaker 2>design of the data center today in order to accommodate

0:10:54.559 --> 0:10:57.960
<v Speaker 2>AI training workloads is different. A lot of that has

0:10:58.000 --> 0:11:00.520
<v Speaker 2>to do with these GPUs and the rack that they're

0:11:00.520 --> 0:11:02.800
<v Speaker 2>on are going to be much more power dense, which

0:11:02.800 --> 0:11:06.520
<v Speaker 2>means they need a lot more sophisticated cooling technologies to

0:11:06.559 --> 0:11:09.640
<v Speaker 2>accommodate those sort of rack densities that could be ten times

0:11:09.720 --> 0:11:13.440
<v Speaker 2>what typical data centers are today. We've seen, for example,

0:11:13.600 --> 0:11:16.960
<v Speaker 2>Meta scrapping data centers that were not AI ready, so

0:11:17.000 --> 0:11:20.120
<v Speaker 2>it's quite difficult to retrofit a data center from five

0:11:20.160 --> 0:11:22.040
<v Speaker 2>years ago to an AI workload.

0:11:22.400 --> 0:11:23.360
<v Speaker 1>So while a

0:11:23.400 --> 0:11:26.079
<v Speaker 2>data center in the future could have multiple uses, it's

0:11:26.120 --> 0:11:28.920
<v Speaker 2>also very specialized to what they're going to be running.

0:11:29.400 --> 0:11:31.560
<v Speaker 1>Got it. Would it be right for me

0:11:31.640 --> 0:11:34.360
<v Speaker 1>to say that, like, a data center designed for AI

0:11:34.760 --> 0:11:37.439
<v Speaker 1>can be used for other things too, but a data

0:11:37.440 --> 0:11:40.560
<v Speaker 1>center not designed for AI probably can't do AI? Is

0:11:40.559 --> 0:11:41.880
<v Speaker 1>that a fair statement?

0:11:42.200 --> 0:11:45.040
<v Speaker 3>I think that's a pretty fair statement. We're basically seeing

0:11:45.080 --> 0:11:48.199
<v Speaker 3>new data centers being designed in a way that allows

0:11:48.280 --> 0:11:51.800
<v Speaker 3>for AI training, and so there's an influence of like

0:11:51.880 --> 0:11:55.080
<v Speaker 3>AI and data center design to be larger and more

0:11:55.200 --> 0:11:56.000
<v Speaker 3>power dense.

0:11:56.440 --> 0:12:00.439
<v Speaker 1>So it's these training data centers that are building these models.

0:12:00.679 --> 0:12:02.800
<v Speaker 1>And one of the charts in the note that I

0:12:02.840 --> 0:12:06.240
<v Speaker 1>really loved but haven't quite fully digested is one showing

0:12:06.480 --> 0:12:09.040
<v Speaker 1>the amount of and you don't have to explain this

0:12:09.200 --> 0:12:15.120
<v Speaker 1>unit, the amount of teraflops required to... A teraflop, that's just

0:12:15.160 --> 0:12:17.520
<v Speaker 1>the unit of like computer work, isn't it?

0:12:17.600 --> 0:12:19.640
<v Speaker 2>Yeah, it's just a basic unit of computation.

0:12:20.040 --> 0:12:24.360
<v Speaker 1>Yeah, the amount of teraflops needed to design different

0:12:24.559 --> 0:12:28.320
<v Speaker 1>AI models as they've become more and more sophisticated

0:12:28.600 --> 0:12:31.840
<v Speaker 1>and so obviously there's been an expectation that trend is

0:12:31.840 --> 0:12:34.680
<v Speaker 1>going to continue, and everyone was you know, freaking out

0:12:34.960 --> 0:12:36.920
<v Speaker 1>in both good ways and bad ways about all the

0:12:36.920 --> 0:12:38.920
<v Speaker 1>power demand that this will entail. And then I remember

0:12:38.960 --> 0:12:41.320
<v Speaker 1>DeepSeek came along and a lot of the people

0:12:41.440 --> 0:12:44.480
<v Speaker 1>were like, oh, this changes everything. Can you just provide

0:12:44.520 --> 0:12:46.120
<v Speaker 1>a bit of clarity on all of this.

0:12:46.720 --> 0:12:49.080
<v Speaker 3>There was a couple of things I really tried to

0:12:49.200 --> 0:12:52.160
<v Speaker 3>understand about all of this, just in terms of like

0:12:52.360 --> 0:12:55.040
<v Speaker 3>power market fundamentals. I think when it comes to like

0:12:55.240 --> 0:12:59.480
<v Speaker 3>forecasting for power demand, all things come down to some

0:13:00.160 --> 0:13:04.280
<v Speaker 3>very basic constructs. It's usually like how much quantity of

0:13:04.280 --> 0:13:08.800
<v Speaker 3>something and then the energy intensity of that something. And

0:13:08.960 --> 0:13:14.200
<v Speaker 3>for data centers it's a very similar process. And when

0:13:14.200 --> 0:13:18.240
<v Speaker 3>we think about like the energy efficiency of large language

0:13:18.280 --> 0:13:21.360
<v Speaker 3>model AI training data centers, there's like a couple things

0:13:21.400 --> 0:13:24.760
<v Speaker 3>to think through. First is around the energy intensity of

0:13:24.800 --> 0:13:28.920
<v Speaker 3>power consumption from a chips perspective, and chip innovation is

0:13:28.960 --> 0:13:33.960
<v Speaker 3>often like confused with like increasing energy efficiency. That's not

0:13:34.080 --> 0:13:37.840
<v Speaker 3>necessarily the case. Typically when we think about chip innovation,

0:13:38.320 --> 0:13:42.400
<v Speaker 3>it's often optimized for like operations per second, which means

0:13:42.440 --> 0:13:46.280
<v Speaker 3>that, like, typically every generation of chips tends

0:13:46.320 --> 0:13:49.880
<v Speaker 3>to draw more power and therefore, like, it increases

0:13:49.920 --> 0:13:53.920
<v Speaker 3>the energy intensity of a data center. So there are

0:13:54.480 --> 0:13:57.800
<v Speaker 3>new chips that are getting invented, like the Nvidia Blackwell

0:13:57.800 --> 0:14:01.400
<v Speaker 3>that focuses on energy efficiency. But in general what we've

0:14:01.400 --> 0:14:04.400
<v Speaker 3>seen is like the more advanced chips tend to draw

0:14:04.480 --> 0:14:08.240
<v Speaker 3>on more power. The other thing that is really important

0:14:08.280 --> 0:14:11.640
<v Speaker 3>for AI training is then like the number of parameters

0:14:12.000 --> 0:14:15.560
<v Speaker 3>that an AI training model focuses on. So parameters is

0:14:15.640 --> 0:14:20.360
<v Speaker 3>just like points of active information for a model to

0:14:20.440 --> 0:14:21.000
<v Speaker 3>think through.

0:14:21.240 --> 0:14:23.360
<v Speaker 1>So it's just the number of different things it thinks

0:14:23.400 --> 0:14:26.520
<v Speaker 1>about sort of like so if I was thinking, like

0:14:27.000 --> 0:14:29.120
<v Speaker 1>should I go to work today or should I walk

0:14:29.160 --> 0:14:32.440
<v Speaker 1>to work today? Like, if I had one parameter, it

0:14:32.560 --> 0:14:34.760
<v Speaker 1>might be what's the weather like outside? And if the

0:14:34.840 --> 0:14:37.720
<v Speaker 1>two parameter might be what's the weather like outside and

0:14:37.760 --> 0:14:39.360
<v Speaker 1>what day of the week is it? Yeah, you might

0:14:39.400 --> 0:14:42.240
<v Speaker 1>determine whether or not I go to work. So that's

0:14:42.280 --> 0:14:44.520
<v Speaker 1>like what each of those is a parameter.

0:14:44.280 --> 0:14:49.160
<v Speaker 3>Exactly, exactly, precisely. And in general, like, in the large

0:14:49.240 --> 0:14:51.760
<v Speaker 3>language model community historically, or at least in the power

0:14:51.800 --> 0:14:55.440
<v Speaker 3>industry historically, what people had thought through was that like

0:14:56.040 --> 0:15:01.600
<v Speaker 3>more sophisticated models required more parameters. So with every generation

0:15:01.800 --> 0:15:05.600
<v Speaker 3>of, like, ChatGPT or Gemini or Claude. If you look

0:15:05.640 --> 0:15:08.960
<v Speaker 3>at their like technical reports, what you'll see is that

0:15:09.000 --> 0:15:12.000
<v Speaker 3>there's increasing amount of parameters that is used to like

0:15:12.080 --> 0:15:14.960
<v Speaker 3>train these large language models, and so in general, like

0:15:15.240 --> 0:15:19.160
<v Speaker 3>there was this overall consensus that the energy intensity of

0:15:19.200 --> 0:15:23.320
<v Speaker 3>AI training is on an upward trend. You use more powerful chips,

0:15:23.360 --> 0:15:26.600
<v Speaker 3>and you're training on more and more parameters,

0:15:26.840 --> 0:15:30.280
<v Speaker 3>and so that's all driving more and more electricity consumption.

0:15:30.680 --> 0:15:34.320
<v Speaker 3>DeepSeek came out in December twenty twenty four, and

0:15:34.400 --> 0:15:37.920
<v Speaker 3>in their technical report, what was really cool was that

0:15:38.000 --> 0:15:41.800
<v Speaker 3>they had a different training process. So it uses something

0:15:41.840 --> 0:15:45.800
<v Speaker 3>called a mixture of experts training process, where instead of

0:15:45.880 --> 0:15:49.320
<v Speaker 3>just like plugging in all of their parameters all at once,

0:15:49.720 --> 0:15:53.920
<v Speaker 3>what they did was they pre-categorized their parameters into experts.

0:15:54.240 --> 0:15:58.720
<v Speaker 3>So like certain parameters that specialize in math or like colors,

0:15:58.960 --> 0:16:01.520
<v Speaker 3>they would, like, pre-categorize them so that when you

0:16:01.560 --> 0:16:04.560
<v Speaker 3>put in a query of a question to ask the

0:16:04.600 --> 0:16:08.080
<v Speaker 3>model to train, it would know which specialization to pull

0:16:08.160 --> 0:16:12.040
<v Speaker 3>the parameters from, which then drew on less power. So

0:16:12.400 --> 0:16:15.760
<v Speaker 3>the other thing that the technical report published was that

0:16:15.880 --> 0:16:19.840
<v Speaker 3>DeepSeek performed at a very high level compared with Chat

0:16:19.880 --> 0:16:23.000
<v Speaker 3>GPT or, like, other types of large language models that

0:16:23.160 --> 0:16:26.320
<v Speaker 3>used a lot more parameters, and so it kind of

0:16:26.320 --> 0:16:30.520
<v Speaker 3>broke the assumption that you needed a whole bunch of

0:16:30.560 --> 0:16:33.760
<v Speaker 3>parameters to make really sophisticated large language models.
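
NOTE
Editor's aside: a toy illustration of the "mixture of experts" idea described above. A real router is a learned network; the lookup, expert topics, and parameter counts below are all invented for illustration.

NOTE
```python
# Instead of activating all parameters for every query, a mixture-of-experts
# model routes each query to a relevant expert sub-network, so only a
# fraction of the parameters (and the power draw) is used per query.
expert_params = {"math": 8_000, "colors": 8_000, "code": 8_000, "general": 8_000}
def route(topic: str) -> str:
    # Stand-in for a learned router: pick the matching expert, else a default.
    return topic if topic in expert_params else "general"
dense_params = sum(expert_params.values())    # a dense model touches all 32,000
active_params = expert_params[route("math")]  # the MoE touches only 8,000
```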

0:16:34.280 --> 0:16:37.240
<v Speaker 1>And so do you think that that will massively change

0:16:37.440 --> 0:16:39.480
<v Speaker 1>the outlook for power demand from AI?

0:16:40.160 --> 0:16:42.280
<v Speaker 3>So to answer your question from like a data center

0:16:42.320 --> 0:16:47.520
<v Speaker 3>demand perspective, not necessarily. In our near term forecast

0:16:47.880 --> 0:16:50.920
<v Speaker 3>in our data center outlook, we use like a project

0:16:50.920 --> 0:16:54.680
<v Speaker 3>by project level forecast, right. But in our New Energy Outlook,

0:16:54.920 --> 0:16:58.440
<v Speaker 3>which focused more on like long term forecasts for data

0:16:58.440 --> 0:17:02.920
<v Speaker 3>center demand, it focused on that fundamentals based way of forecasting,

0:17:02.960 --> 0:17:06.800
<v Speaker 3>which looks at, long term in any given market,

0:17:07.240 --> 0:17:11.240
<v Speaker 3>what data generation and data usage looks like relative to

0:17:11.280 --> 0:17:14.320
<v Speaker 3>the energy intensity of that data generation.
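
NOTE
Editor's aside: a sketch of the fundamentals construct described above, where demand is roughly the quantity of an activity times its energy intensity. The figures below are illustrative placeholders, not BNEF forecast numbers.

NOTE
```python
# Long-term, fundamentals-based view: power demand ~ data generated
# per year times the energy intensity of generating and using that data.
data_generated_zb = 200.0   # assumed annual data volume, zettabytes
twh_per_zb = 1.5            # assumed energy intensity, TWh per zettabyte
demand_twh = data_generated_zb * twh_per_zb   # 300 TWh in this toy example
```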

0:17:14.880 --> 0:17:15.040
<v Speaker 2>Right.

0:17:15.200 --> 0:17:17.280
<v Speaker 1>And there's a point you make in the report I

0:17:17.320 --> 0:17:19.359
<v Speaker 1>recall and I don't think you were talking about deep

0:17:19.400 --> 0:17:22.160
<v Speaker 1>seek here. I think you were talking about data center efficiency.

0:17:22.200 --> 0:17:24.760
<v Speaker 1>But you bring up Jevons paradox, which is this idea that

0:17:24.760 --> 0:17:27.320
<v Speaker 1>if you make something more efficient, it doesn't mean necessarily

0:17:27.359 --> 0:17:30.440
<v Speaker 1>that we save energy, it's that we just do more

0:17:30.480 --> 0:17:33.160
<v Speaker 1>with what we were going to consume anyway, And could

0:17:33.200 --> 0:17:35.240
<v Speaker 1>the same logic be applied for deep seek is if

0:17:35.280 --> 0:17:38.280
<v Speaker 1>it is more efficient with parameters and therefore energy and

0:17:38.440 --> 0:17:41.320
<v Speaker 1>also computer usage, then that just opens the door to

0:17:41.359 --> 0:17:44.360
<v Speaker 1>do cooler things with AI than would have previously been possible,

0:17:44.680 --> 0:17:45.879
<v Speaker 1>rather than to save energy.

0:17:46.280 --> 0:17:50.240
<v Speaker 3>Yeah, the energy intensity curve of AI training data centers

0:17:50.280 --> 0:17:53.080
<v Speaker 3>like instead of it being like an upward swing, it

0:17:53.800 --> 0:17:56.919
<v Speaker 3>goes down. But we also then know it opens up

0:17:57.000 --> 0:17:59.480
<v Speaker 3>a lot more opportunities for a lot of different types

0:17:59.480 --> 0:18:03.920
<v Speaker 3>of business to maybe do AI training right, which means

0:18:03.920 --> 0:18:07.120
<v Speaker 3>that you have more companies that may be doing this.

0:18:07.520 --> 0:18:10.080
<v Speaker 3>So, Jevons paradox.

0:18:09.920 --> 0:18:12.480
<v Speaker 1>Very interesting and actually I think that brings some real

0:18:12.480 --> 0:18:15.840
<v Speaker 1>clarity into what this whole DeepSeek thing means for

0:18:15.880 --> 0:18:20.920
<v Speaker 1>power demand, which my main takeaway is like not that much. Ultimately,

0:18:21.240 --> 0:18:24.080
<v Speaker 1>it's very difficult to say, but we shouldn't be saying, oh,

0:18:24.119 --> 0:18:26.159
<v Speaker 1>this means all this data center demand growth isn't going

0:18:26.200 --> 0:18:29.560
<v Speaker 1>to happen. Yes. So pulling out again, there's all of

0:18:29.600 --> 0:18:32.560
<v Speaker 1>this data center build that's going to happen in the US,

0:18:32.800 --> 0:18:35.040
<v Speaker 1>four companies are going to be behind a little bit

0:18:35.040 --> 0:18:36.960
<v Speaker 1>more than half of it. Where are they going to

0:18:37.000 --> 0:18:39.640
<v Speaker 1>be building all of this? And why are they going

0:18:39.680 --> 0:18:40.960
<v Speaker 1>to be building in those places?

0:18:41.440 --> 0:18:44.800
<v Speaker 2>Yeah. I'll also add, on those four companies that are

0:18:45.119 --> 0:18:48.159
<v Speaker 2>most of the data center market, they're also the companies

0:18:48.440 --> 0:18:51.560
<v Speaker 2>that you know are building these AI training models and

0:18:51.640 --> 0:18:55.120
<v Speaker 2>have the capacity to train large scale models because it's

0:18:55.160 --> 0:18:59.280
<v Speaker 2>a very costly exercise that is only set to grow.

0:18:59.480 --> 0:19:02.080
<v Speaker 2>They're also forty percent of the data center market, but they're

0:19:02.119 --> 0:19:05.560
<v Speaker 2>also the ones training AI models because actually, not many

0:19:05.560 --> 0:19:08.040
<v Speaker 2>companies can train AI models, right, right.

0:19:07.880 --> 0:19:10.360
<v Speaker 1>So a lot of the new big demand is coming

0:19:10.440 --> 0:19:13.359
<v Speaker 1>from these four companies. So a lot of the demand

0:19:13.440 --> 0:19:14.680
<v Speaker 1>that we're talking about.

0:19:14.880 --> 0:19:17.000
<v Speaker 2>Yeah, I guess today when we look at the data

0:19:17.000 --> 0:19:20.480
<v Speaker 2>center fleet, they're mostly for cloud. But going forward, the

0:19:20.520 --> 0:19:24.040
<v Speaker 2>companies that can actually train AI models is less than

0:19:24.080 --> 0:19:26.720
<v Speaker 2>ten companies training like frontier models, and those are

0:19:26.800 --> 0:19:28.240
<v Speaker 2>going to be the big tech companies.

0:19:28.520 --> 0:19:31.760
<v Speaker 1>And so then where is this happening and why in

0:19:31.800 --> 0:19:32.800
<v Speaker 1>those locations.

0:19:33.119 --> 0:19:36.160
<v Speaker 2>So in our forecast we see three main markets emerge

0:19:36.200 --> 0:19:39.280
<v Speaker 2>through twenty thirty five. We break them down by power

0:19:39.320 --> 0:19:42.680
<v Speaker 2>region for power forecasting purposes.

0:19:42.200 --> 0:19:45.960
<v Speaker 1>And that's just because that's how we think. I don't

0:19:46.000 --> 0:19:47.560
<v Speaker 1>see states, I see power regions.

0:19:47.880 --> 0:19:51.280
<v Speaker 2>Yeah, so we see PJM, ERCOT and the Southeast, and

0:19:51.359 --> 0:19:57.720
<v Speaker 2>within PJM, which spans fourteen states, we see Virginia continuing

0:19:57.760 --> 0:20:01.320
<v Speaker 2>to be one of the biggest markets. Northern Virginia has

0:20:01.400 --> 0:20:05.800
<v Speaker 2>been the data center capital for the last decade. If Virginia

0:20:05.840 --> 0:20:09.000
<v Speaker 2>was a country, it would follow the US and China

0:20:09.040 --> 0:20:11.360
<v Speaker 2>as having the largest data center market.

0:20:11.320 --> 0:20:12.480
<v Speaker 4>In the world.

0:20:12.680 --> 0:20:14.680
<v Speaker 1>Wow. Can I say that again? Wow?

0:20:14.840 --> 0:20:15.080
<v Speaker 3>Yeah.

0:20:15.119 --> 0:20:19.119
<v Speaker 2>So Northern Virginia has been kind of at the center

0:20:19.320 --> 0:20:21.760
<v Speaker 2>of data center build out. A lot of this has

0:20:21.800 --> 0:20:24.399
<v Speaker 2>to do with a bit of history. They had the

0:20:24.440 --> 0:20:28.760
<v Speaker 2>first Internet exchange point in the nineties, which was kind

0:20:28.760 --> 0:20:31.480
<v Speaker 2>of the beginning of small data center build out, and

0:20:31.720 --> 0:20:35.679
<v Speaker 2>data centers typically continue to cluster in existing markets, so

0:20:36.000 --> 0:20:42.000
<v Speaker 2>as data centers grow, they'll have supporting infrastructure like fiber optics,

0:20:42.600 --> 0:20:48.120
<v Speaker 2>utility relationships, and workforce availability that allows more data centers

0:20:48.160 --> 0:20:51.240
<v Speaker 2>to continue to grow. So over time, Northern Virginia just

0:20:51.280 --> 0:20:53.879
<v Speaker 2>has gotten hotter and hotter, and we see in our

0:20:53.880 --> 0:20:57.320
<v Speaker 2>project pipeline a lot of that continuing to grow. We

0:20:57.440 --> 0:21:00.320
<v Speaker 2>do know that a lot of this could be more

0:21:00.400 --> 0:21:03.920
<v Speaker 2>AI inference rather than AI training, but still a lot

0:21:03.920 --> 0:21:07.840
<v Speaker 2>of data center demand happening there. Another state in PJM

0:21:07.960 --> 0:21:11.440
<v Speaker 2>is Ohio, which is emerging as one of the main

0:21:11.680 --> 0:21:16.000
<v Speaker 2>hubs in the Midwest. Google is building data centers in

0:21:16.080 --> 0:21:19.359
<v Speaker 2>New Albany, which is just outside Columbus, one of the

0:21:19.440 --> 0:21:22.600
<v Speaker 2>largest cities in Ohio, as well as other colocation

0:21:22.960 --> 0:21:24.520
<v Speaker 2>and hyperscaler companies.

0:21:24.800 --> 0:21:27.719
<v Speaker 1>You know, you've highlighted a couple of major markets

0:21:27.720 --> 0:21:29.560
<v Speaker 1>within PJM. One thing I thought was really

0:21:29.560 --> 0:21:33.160
<v Speaker 1>interesting and maybe slightly paradoxical in the report you wrote

0:21:33.359 --> 0:21:35.639
<v Speaker 1>was there's this chart that has a survey of

0:21:35.760 --> 0:21:39.160
<v Speaker 1>data center developers saying what do you prioritize when you're

0:21:39.160 --> 0:21:41.000
<v Speaker 1>thinking about where to build a data center? And I

0:21:41.000 --> 0:21:43.439
<v Speaker 1>think we said that the top three survey results it

0:21:43.480 --> 0:21:45.680
<v Speaker 1>wasn't our survey, it was a third party's. The top

0:21:45.680 --> 0:21:48.120
<v Speaker 1>three survey results all related to energy. It was something

0:21:48.160 --> 0:21:51.280
<v Speaker 1>like security of supply, how cheap the energy is, how

0:21:51.440 --> 0:21:53.480
<v Speaker 1>green the energy is. I can't remember, don't quote me

0:21:53.480 --> 0:21:56.240
<v Speaker 1>on them, but it was energy related. When you look

0:21:56.359 --> 0:22:00.240
<v Speaker 1>at the data of where they're currently building and have been, well,

0:22:00.320 --> 0:22:04.240
<v Speaker 1>it kind of completely contradicts that thesis. You know, PJM

0:22:04.320 --> 0:22:07.600
<v Speaker 1>doesn't have the cheapest or cleanest electricity in the US.

0:22:07.680 --> 0:22:09.600
<v Speaker 1>I mean you could say it has good security of supply,

0:22:09.880 --> 0:22:11.760
<v Speaker 1>So what is behind this paradox?

0:22:12.240 --> 0:22:14.560
<v Speaker 2>I think a lot of the hype right now

0:22:14.600 --> 0:22:18.280
<v Speaker 2>in data center build out is AI related, and we

0:22:18.359 --> 0:22:21.800
<v Speaker 2>did talk about how AI training could be more flexible.

0:22:21.960 --> 0:22:25.080
<v Speaker 2>And we do see a lot of emerging innovations

0:22:25.119 --> 0:22:29.760
<v Speaker 2>of siting near stranded renewable assets and going against the

0:22:29.800 --> 0:22:34.960
<v Speaker 2>grain of traditional siting. But as we said, most

0:22:35.000 --> 0:22:37.720
<v Speaker 2>of them are continuing to build out in existing data

0:22:37.720 --> 0:22:42.080
<v Speaker 2>center markets, so markets pre-AI. One theory is basically,

0:22:42.280 --> 0:22:45.720
<v Speaker 2>they're investing billions of dollars in a data center for

0:22:45.920 --> 0:22:48.880
<v Speaker 2>the next decade or so. While in the near term

0:22:48.880 --> 0:22:51.280
<v Speaker 2>they can plan for AI training, in the future it

0:22:51.280 --> 0:22:54.480
<v Speaker 2>could be AI inference, or they could you know, retrofit

0:22:54.520 --> 0:22:57.240
<v Speaker 2>and sell it to a colocation company altogether, and

0:22:57.600 --> 0:23:01.879
<v Speaker 2>they need to plan for those latency and redundancy requirements today.

0:23:02.119 --> 0:23:05.800
<v Speaker 2>So even though in the near term they could site

0:23:05.840 --> 0:23:08.040
<v Speaker 2>it in you know, West Texas where there's a lot

0:23:08.040 --> 0:23:11.080
<v Speaker 2>of renewables, we still see like Dallas Fort Worth and

0:23:11.280 --> 0:23:14.560
<v Speaker 2>like Northern Virginia as being hotspots, although we are seeing

0:23:14.880 --> 0:23:18.199
<v Speaker 2>a trend of going a bit outside of urban locations

0:23:18.200 --> 0:23:21.199
<v Speaker 2>and going more to where there is power supply.

0:23:21.520 --> 0:23:23.640
<v Speaker 1>Got it. I remember earlier in the podcast you said

0:23:23.840 --> 0:23:27.280
<v Speaker 1>in theory about you know where you could build training

0:23:27.359 --> 0:23:29.680
<v Speaker 1>data centers, and this is what you're

0:23:29.680 --> 0:23:31.399
<v Speaker 1>saying is in practice, it's like this, but we are

0:23:31.440 --> 0:23:34.000
<v Speaker 1>seeing a bit of a trend, but just maybe not

0:23:34.040 --> 0:23:36.080
<v Speaker 1>as much as you would think, toward the sort of

0:23:36.440 --> 0:23:37.960
<v Speaker 1>energy-ideal locations.

0:23:38.240 --> 0:23:38.680
<v Speaker 4>Yeah.

0:23:38.880 --> 0:23:42.119
<v Speaker 3>I guess one thing that we did also notice, like

0:23:42.400 --> 0:23:46.240
<v Speaker 3>to your point on that little contradiction, is that hyperscalers

0:23:46.240 --> 0:23:50.359
<v Speaker 3>do take a dual strategy. They're both building in existing

0:23:50.520 --> 0:23:55.080
<v Speaker 3>regions and also trialing in new locations. If you look

0:23:55.240 --> 0:23:59.000
<v Speaker 3>in our report, you're seeing that the large four companies

0:23:59.160 --> 0:24:02.639
<v Speaker 3>they're building within their own data center clusters and

0:24:02.720 --> 0:24:05.960
<v Speaker 3>also like looking for new markets at the same time.

0:24:06.200 --> 0:24:09.360
<v Speaker 3>And they're doing that because for them, speed and scale

0:24:09.520 --> 0:24:14.879
<v Speaker 3>of development is really critical, particularly because like their AI

0:24:15.040 --> 0:24:18.080
<v Speaker 3>business requires them to kind of take a...

0:24:18.040 --> 0:24:20.800
<v Speaker 1>Winner takes all? It's like there's a computing power

0:24:20.920 --> 0:24:24.440
<v Speaker 1>arms race happening right now, and so it's all very

0:24:24.480 --> 0:24:26.760
<v Speaker 1>well saying, oh, we'd ideally build it here, but it's

0:24:26.760 --> 0:24:28.439
<v Speaker 1>like we just need this right now, and we're going

0:24:28.520 --> 0:24:29.080
<v Speaker 1>to do what's tried and tested.

0:24:29.480 --> 0:24:29.960
<v Speaker 4>Yeah.

0:24:30.000 --> 0:24:32.600
<v Speaker 3>Like, I guess it's like a winner takes all game

0:24:32.880 --> 0:24:36.320
<v Speaker 3>in the AI business. So they want to build data

0:24:36.320 --> 0:24:39.520
<v Speaker 3>centers as quickly as possible, So they're taking all options.

0:24:39.680 --> 0:24:42.439
<v Speaker 1>They want their particular AI model to be the

0:24:42.440 --> 0:24:45.080
<v Speaker 1>Coca Cola of AI. So what does all this mean

0:24:45.119 --> 0:24:47.480
<v Speaker 1>for the power sector? I mean, how is this going

0:24:47.520 --> 0:24:50.000
<v Speaker 1>to affect the supply mix just keeping up with all

0:24:50.000 --> 0:24:53.000
<v Speaker 1>this demand? How's it going to affect the regulatory model?

0:24:53.720 --> 0:24:56.480
<v Speaker 1>You know, it's not designed for just suddenly having loads

0:24:56.520 --> 0:24:59.720
<v Speaker 1>of new demand dropped on it in very concentrated regions,

0:24:59.760 --> 0:25:02.040
<v Speaker 1>as I've just learned. And then you know what

0:25:02.040 --> 0:25:04.520
<v Speaker 1>innovations are we seeing to try and cope with all

0:25:04.520 --> 0:25:06.359
<v Speaker 1>of this new demand in the power sector.

0:25:06.880 --> 0:25:10.760
<v Speaker 3>Yeah, So what we're currently seeing is a very strong

0:25:10.840 --> 0:25:15.199
<v Speaker 3>reaction towards all of this new data center demand, particularly

0:25:15.240 --> 0:25:19.199
<v Speaker 3>from utilities. So in Ohio, where I guess there's a

0:25:19.200 --> 0:25:22.760
<v Speaker 3>lot of new investment in data centers, what we're seeing

0:25:22.880 --> 0:25:25.480
<v Speaker 3>is AEP Ohio, which is the utility in the region.

0:25:25.720 --> 0:25:29.560
<v Speaker 3>They've proposed a new tariff, and in that tariff they've

0:25:29.680 --> 0:25:32.439
<v Speaker 3>required that new data centers pay for up to

0:25:32.520 --> 0:25:37.000
<v Speaker 3>eighty five percent of their projected energy usage, which makes

0:25:37.040 --> 0:25:40.240
<v Speaker 3>it less attractive of a market for these data centers

0:25:40.280 --> 0:25:43.160
<v Speaker 3>to want to build in that region. In a way,

0:25:43.240 --> 0:25:47.520
<v Speaker 3>it's to prevent increasing retail rates from a utility's perspective,

0:25:47.680 --> 0:25:50.399
<v Speaker 3>but there's been a lot of pushback on this tariff.

0:25:50.760 --> 0:25:54.000
<v Speaker 1>Yeah, that's I mean, because that's controversial because the entire

0:25:54.040 --> 0:25:58.560
<v Speaker 1>basis of the regulated monopoly is providing equal access to

0:25:58.680 --> 0:26:02.480
<v Speaker 1>all consumers, and even if the consumers being discriminated

0:26:02.520 --> 0:26:07.040
<v Speaker 1>against are massive corporations, it still undermines the philosophical basis.

0:26:07.119 --> 0:26:08.920
<v Speaker 1>So I'm kind of interested to see what's going to

0:26:08.960 --> 0:26:11.240
<v Speaker 1>happen there. What do you think's going to mean for

0:26:11.720 --> 0:26:15.560
<v Speaker 1>the supply mix, you know, wind, solar, gas, what's going

0:26:15.600 --> 0:26:15.919
<v Speaker 1>to do it?

0:26:16.280 --> 0:26:19.520
<v Speaker 2>So I think one of the emerging innovations in moving

0:26:19.560 --> 0:26:22.439
<v Speaker 2>through, you know, this hyper load growth environment where there's not

0:26:22.560 --> 0:26:26.720
<v Speaker 2>much supply available is this trend of colocation or having

0:26:26.800 --> 0:26:31.240
<v Speaker 2>on site generation. A lot of traditional grid planning was

0:26:31.320 --> 0:26:33.919
<v Speaker 2>for that peak power, and I think we saw this

0:26:34.160 --> 0:26:36.640
<v Speaker 2>a couple of years ago, and you know a lot

0:26:36.640 --> 0:26:40.040
<v Speaker 2>of Texas and how they integrated crypto mining is having

0:26:40.040 --> 0:26:43.920
<v Speaker 2>these as flexible loads and curtailing during hours of peak demand.

0:26:44.320 --> 0:26:48.520
<v Speaker 2>We know that non crypto workloads like AI training or

0:26:48.560 --> 0:26:53.080
<v Speaker 2>inference may not necessarily curtail, but we do see one

0:26:53.440 --> 0:26:56.720
<v Speaker 2>model in which they'll have some sort of on site

0:26:56.800 --> 0:27:01.880
<v Speaker 2>generation or colocation supply where they could as a whole

0:27:01.880 --> 0:27:06.040
<v Speaker 2>campus interact with grid needs. In terms of the total

0:27:06.080 --> 0:27:09.960
<v Speaker 2>supply mix, a lot of the short term needs mean

0:27:10.040 --> 0:27:13.479
<v Speaker 2>that they'll build whatever technology is fastest, and in our

0:27:13.600 --> 0:27:17.560
<v Speaker 2>data most of that is wind, solar, batteries, but reliability

0:27:17.640 --> 0:27:20.120
<v Speaker 2>is also a huge part of data centers. They're known

0:27:20.160 --> 0:27:23.400
<v Speaker 2>for having five nines, which is ninety nine point nine

0:27:23.520 --> 0:27:27.840
<v Speaker 2>nine nine percent uptime, which means that you know, firm

0:27:27.880 --> 0:27:31.080
<v Speaker 2>capacity like natural gas generation or diesel gensets that

0:27:31.080 --> 0:27:34.919
<v Speaker 2>they've used traditionally will also be a large part of

0:27:34.960 --> 0:27:36.680
<v Speaker 2>the solution in the short term.
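As an aside on the "five nines" figure mentioned above: an availability target translates directly into an annual downtime budget, which is easy to sanity-check. A minimal sketch (the helper name is illustrative, not from the report):

```python
# Back-of-the-envelope check of the "five nines" availability target
# discussed in the episode. The helper name is illustrative.
def annual_downtime_minutes(availability: float) -> float:
    """Minutes of permitted downtime per year at a given availability fraction."""
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes in a non-leap year
    return (1 - availability) * minutes_per_year

# Five nines (99.999%) leaves roughly 5.26 minutes of downtime per year.
print(round(annual_downtime_minutes(0.99999), 2))
```

At 99.999% availability the budget works out to only about five minutes of outage per year, which is why firm backup capacity such as diesel gensets features so heavily in data center design.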

0:27:36.800 --> 0:27:38.880
<v Speaker 1>So it's a bit of an open question there.

0:27:39.040 --> 0:27:41.760
<v Speaker 1>It's a question of how quickly things can get built and what

0:27:41.880 --> 0:27:45.520
<v Speaker 1>truly is the fastest solution versus the long term needs. Helen,

0:27:45.520 --> 0:27:47.080
<v Speaker 1>thank you very much for joining us today.

0:27:47.320 --> 0:27:49.760
<v Speaker 4>Thank you, Tom. And Natalie, thank you so much for

0:27:49.840 --> 0:27:59.639
<v Speaker 4>joining. Thanks, Tom.

0:28:00.040 --> 0:28:02.960
<v Speaker 1>Today's episode of Switched On was produced by Cam Gray

0:28:03.160 --> 0:28:05.520
<v Speaker 1>with production assistance from Kamala Shelling.

0:28:05.680 --> 0:28:08.840
<v Speaker 4>BloombergNEF is a service provided by Bloomberg Finance LP

0:28:09.000 --> 0:28:09.879
<v Speaker 4>and its affiliates.

0:28:09.920 --> 0:28:12.640
<v Speaker 1>This recording does not constitute, nor should it be construed

0:28:12.640 --> 0:28:16.440
<v Speaker 1>as investment advice, investment recommendations, or a recommendation as

0:28:16.480 --> 0:28:19.320
<v Speaker 1>to an investment or other strategy. BloombergNEF should not

0:28:19.359 --> 0:28:22.399
<v Speaker 1>be considered as information sufficient upon which to base an

0:28:22.440 --> 0:28:25.600
<v Speaker 1>investment decision. Neither Bloomberg Finance LP nor any of its

0:28:25.600 --> 0:28:29.320
<v Speaker 1>affiliates makes any representation or warranty as to the accuracy

0:28:29.400 --> 0:28:32.199
<v Speaker 1>or completeness of the information contained in this recording, and

0:28:32.320 --> 0:28:32.560
<v Speaker 1>any

0:28:32.600 --> 0:28:35.879
<v Speaker 2>liability as a result of this recording is expressly disclaimed.