Granted, a project like this probably doesn't strictly need the full $1 billion at once, but I'd argue it's better to secure all the necessary funding upfront than to risk sinking a partial investment and being unable to obtain the rest should the company's financial situation change.
> The debt facility is being made through the Department of Energy’s Loan Programs Office (LPO), which was formed under the Energy Policy Act of 2005 to foster the growth of clean energy technologies.
> The Inflation Reduction Act, which passed during the Biden administration, created another pot of money under the LPO known as the Energy Infrastructure Reinvestment program. That program was created to restore existing power plants to operation provided they avoid or reduce pollutants or greenhouse gas emissions. The Trump administration kept it largely intact, rebranding it the Energy Dominance Financing Program.
Congress passed the Energy Policy Act of 2005 and then the Inflation Reduction Act, allocating money to the DOE to make these loans.
> The debt facility is being made through the Department of Energy’s Loan Programs Office (LPO), which was formed under the Energy Policy Act of 2005 to foster the growth of clean energy technologies
and, more importantly:
> The Inflation Reduction Act, which passed during the Biden administration, created another pot of money under the LPO known as the Energy Infrastructure Reinvestment program. That program was created to restore existing power plants to operation provided they avoid or reduce pollutants or greenhouse gas emissions. The Trump administration kept it largely intact, rebranding it the Energy Dominance Financing Program.
https://www.ms.now/msnbc-podcast/msnbc/discussing-explosion-...
Are you comparing the cost against what electricity currently costs, or against what it would cost to add capacity? I don't think Microsoft is acting on hype here; are they really going to pay a premium just because it's cool to refire a nuclear plant? Surely they've done the math on the feasibility of building out a few acres of solar panels instead.
It’s not really that far-fetched, either. If the government expects a conflict in the next few decades, a solar buildout might become much more expensive or impossible, since our domestic production might not be enough to support NATO’s growth.
So Microsoft is less price sensitive than other electricity customers.
Plus they get the PR and hype boost from saying they are using nuclear, which is huge right now: big enough that the other hyperscalers felt they had to announce new nuclear projects of their own, even though it will be a decade before those projects could ever come online.
PV did get spectacularly cheaper, but is not a panacea.
Nuclear is a great fit for constant load, for example a cloud datacenter, where relatively constant utilization is also a business goal and multiple incentives are in place to promote it (e.g., spot pricing to shift part of the load off peaks).
And cost figures for unreliable energy sources routinely exclude the wildly uneconomical expense and environmental impact of making them reliable.
For the right kind of workloads and at sufficient scale, I wonder if this is actually true. (It probably is, but it's fun to hypothesize.) I'm assuming the workloads are mostly AI-related.
AI training presumably isn't super time-sensitive, so could you just pause it while it's cloudy?
AI inference, at least for language models, presumably isn't particularly network-intensive or latency-sensitive (it's just text). So if one region is currently cloudy... spin it down and transfer load to a different region where it's sunny? It's kind of like the "wide area grid" concept without actually needing to run power lines.
Yes, I know that in reality the capex of building and equipping a whole DC means you'll want to run it 24/7, but it is fun to think about ways you could take advantage of low cost energy. Maybe in a world where hardware somehow got way cheaper but energy usage remained high we'd see strategies like this get used.
> Yes, I know that in reality the capex of building and equipping a whole DC means you'll want to run it 24/7, but it is fun to think about ways you could take advantage of low cost energy.
There's a balance to strike between maximizing capex utilization, business continuity planning, room for growth, and the natural peaks and troughs throughout the day.
You probably don't really want all your DCs maxed out at the daily peak; then you have no spare capacity for when you've lost N DCs on your biggest day of the year. N might typically be one, but if you have many DCs, you probably want to plan for two or three down.
Anyway, so on a normal day, when all your DCs are running, you do likely have some flexibility on where tasks run/where traffic lands. It makes sense to move traffic where it costs less to serve, within some reasonable bounds of service degradation. Even if electricity prices are the same, you might move traffic where the ambient temperature is lower, as that would reduce energy used for cooling and with it the energy bill.
You might have some non-interactive, non-time-sensitive background jobs that could fill up spare DC capacity... but maybe it's worth putting a dollar amount on those: if it's sunny and windy and energy is cheap, go ahead; when it's cloudy and still and energy is expensive, some jobs may need to be descheduled.
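The "dollar amount on background jobs" idea above can be sketched in a few lines. This is a hypothetical illustration, not any real scheduler: the job names, power draws, and per-hour values are all made up, and real systems would also weigh preemption cost and deadlines.

```python
# Hypothetical sketch: deschedule preemptible background jobs when the
# spot electricity price exceeds the dollar value each job produces.
# All jobs, power draws, and prices here are invented for illustration.

def schedule(jobs, spot_price_per_mwh):
    """Return the jobs worth running at the current spot price.

    Each job is (name, power_mw, value_per_hour_usd): a job runs only
    if an hour of its energy costs less than the value it produces.
    """
    runnable = []
    for name, power_mw, value_per_hour in jobs:
        energy_cost_per_hour = power_mw * spot_price_per_mwh
        if energy_cost_per_hour < value_per_hour:
            runnable.append(name)
    return runnable

jobs = [
    ("ml-training",   5.0, 600.0),  # high value: rarely descheduled
    ("batch-reindex", 2.0, 100.0),  # low value: first to be dropped
]

print(schedule(jobs, spot_price_per_mwh=40.0))  # sunny + windy: cheap power
print(schedule(jobs, spot_price_per_mwh=90.0))  # cloudy + still: expensive
```

At $40/MWh both jobs clear their thresholds; at $90/MWh the reindex job costs $180/hour to run for $100/hour of value, so it gets descheduled.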
Or pause it when "organic traffic" peaks and resume in off-peak hours, so that the nuclear power plant can operate efficiently without too much change in its output.
I also think you would need more than 24 hours of battery. You have to prepare for freak weather events that reduce system capacity.
I also wonder what time horizon we are talking about. Solar panels and batteries presumably have to be replaced more often than a nuclear plant.
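Rough arithmetic shows why even a 24-hour battery is a big line item. The 100 MW datacenter load and ~$300/kWh installed battery cost below are my own assumptions, not figures from the thread:

```python
# Back-of-the-envelope battery sizing for a datacenter.
# Assumed numbers (not from the thread): 100 MW load, $300/kWh installed.

dc_load_mw = 100
hours = 24
cost_per_kwh_usd = 300

energy_kwh = dc_load_mw * 1000 * hours           # 2,400,000 kWh = 2.4 GWh
battery_cost_usd = energy_kwh * cost_per_kwh_usd

print(f"{energy_kwh / 1e6:.1f} GWh -> ${battery_cost_usd / 1e9:.2f}B")
```

That's roughly $0.7B of storage for one day of autonomy, before accounting for replacement every decade or so, which is the time-horizon point above.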
In general, yes. Not really in the context of utility generation for a DC, though. A DC should have onsite backup generation, at least to supply critical loads. If your contracted utility PV + storage runs out, and there's no spare grid capacity available (or it's too expensive) you can switch to onsite power for the duration. The capex for backup power is already required, so you're just looking at additional spending for fuel, maybe maintenance if the situation requires enough hours on backup.
https://www.nei.org/resources/statistics/us-nuclear-generati...
As for France's capacity factor, that has a lot to do with the presence of intermittent sources on the continental grid, combined with the EU's Renewable Energy Directive making France liable for fines if it uses nuclear power in preference to wind/solar.
Not often, and most importantly, they are PREDICTABLE. You do understand why being able to control when a power plant is operating is very important, right?
I guess if I knew there would be two months with less power, I might design my data center to fit into 40-foot containers so I could deploy wherever power and latency are cheapest.
> In 1988, the NRC announced that, although it was possible to further decontaminate the Unit 2 site, the remaining radioactivity had been sufficiently contained as to pose no threat to public health and safety.
Chernobyl (which was a far worse accident) continued to produce power at other units on the same site for 14 years after the meltdown of unit 4.
Nuclear is more expensive because it faces extensive regulations. "Green" energy not only faces far fewer regulations but also benefits from incentives.
Also, when comparing nuclear with "green" energy, most studies don't take into account the costs of energy storage.
Home grown nuclear programs will always be better than solar propped up by foreign entities.
With that said, while it doesn't provide numbers, the article does say the refurbishment (costing $1.6 billion, estimated) will be cheaper than a new build. It'll also likely be much faster, projected to open in 2028.
A quick Google search puts the construction cost of a new nuclear unit of Unit 2's size in the $5-10 billion range. Three Mile Island itself was constructed for $2 billion in 2024 inflation-adjusted dollars. All in all, refurbishing sounds like a good bargain compared to a greenfield build.
*Many reactors started construction in the 70s and were finished in the 80s or 90s, plus Watts Bar Unit 2 which was started in 1972 and finished in 2016 for a total of $5 billion. The US also of course builds many naval reactors.
The main problem is that things cost more per unit if you do them less. The first new reactor in decades is going to be stupid expensive because you have new people doing it who are learning things for the first time, which often means doing them over again, which is expensive. And then we didn't even get to see if the second unit at Vogtle could improve on the first because then COVID hit and made everything cost even more.
Whereas the interesting question is, how much do they each cost if you build them at scale?
Apart from the obvious labour cost difference, there's also skill at scale. The Chinese have been on a continuous buildout of new plants, so they now have designs and skilled teams for whom this is routine (I think 30+ reactors are under construction concurrently). US builds are almost artisanal at this point.
And yeah, at $1B, given prior examples, I expect them to be late and costs to balloon, unless they use this as a template to upskill/retrain a workforce that will lead a new buildout, so economies of scale take over and put downward pressure on costs.
[1] https://www.nytimes.com/interactive/2025/10/22/climate/china...
In a list of countries by uranium reserves, ranked 1-59, they're number 55!
https://en.wikipedia.org/wiki/List_of_countries_by_uranium_r...
And in any case, Australia hosts a LOT of uranium, is a very close ally, and is happy to sell it to the US.