How much power will ultimately be enough for artificial intelligence remains an open question. Neither Sam Altman, the CEO of OpenAI, nor Microsoft chief executive Satya Nadella can offer a definitive answer, and that uncertainty has put software-centric companies such as OpenAI and Microsoft in a difficult position. The entire technology sector has recognized that computational capacity, or compute, is one of the primary constraints on expanding and widely deploying AI systems. Yet while technology firms have competed fiercely to secure not merely chips but also the electricity to run them, their success has been uneven. The race to build out adequate energy infrastructure has lagged so far behind the extraordinary pace of GPU purchases that Microsoft has now ordered more chips than it has secured power to run.

During a conversation on the *BG2* podcast, Nadella acknowledged the unpredictability of the supply-and-demand cycles that shape this ecosystem. The company's present challenge, he emphasized, is not an overabundance of chips but a shortage of power, and with it the difficulty of building data centers quickly enough in areas with sufficient energy supply. Without timely access to ready-built facilities near reliable power sources, Microsoft risks sitting on vast numbers of unused chips. Nadella put the predicament vividly: the problem is not chip scarcity but the absence of physical spaces, the "warm shells" of real-estate parlance, into which the servers can be installed and switched on.

What we are witnessing, in essence, is a collision between two dramatically different technological tempos. Companies that have historically optimized for the rapid iteration of software code or the mass production of silicon chips now find themselves needing to operate in the slower, more capital-intensive world of energy infrastructure. Unlike writing software or fabricating semiconductors, constructing large-scale power plants is a process measured not in weeks but in years.

For much of the past decade, electricity demand in the United States remained largely flat, giving utilities little reason to expand generating capacity. However, within the last five years, the proliferation of massive data centers—especially those designed to power cloud services and advanced AI models—has reignited demand at an unexpectedly rapid pace. This surge has already begun to outstrip the plans of most utilities to bring new generation facilities online. To cope, developers of data centers have increasingly turned to what are known as “behind-the-meter” energy arrangements, in which power flows directly to the data center rather than through the public electrical grid. This strategy provides more predictable supply but also poses new regulatory and logistical challenges.

Altman, who appeared on the same podcast, voiced concern that these mismatches between rapidly evolving technology and slow-moving infrastructure could produce significant financial repercussions. He suggested that if an exceptionally cheap and scalable form of energy were to emerge in the near future, companies locked into expensive long-term power contracts might face serious losses as those agreements become uneconomical almost overnight.

Expanding further on the pace of change, Altman noted that the cost of computation per “unit of intelligence” has been declining at a staggering rate—averaging roughly fortyfold per year by some internal estimates. Such exponential scaling, he explained, creates a paradoxical infrastructure challenge: as compute becomes cheaper and more accessible, demand for it expands even more rapidly, driving the need for ever greater physical capacity to sustain the underlying systems.
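
To make that compounding concrete, here is a minimal arithmetic sketch in Python. It simply assumes the roughly fortyfold annual decline Altman cited holds steady for three years; the figures are illustrative, not OpenAI data.

```python
# Illustrative sketch: compound a ~40x-per-year decline in cost per
# "unit of intelligence" over three years. Numbers are assumptions
# drawn from the figure quoted on the podcast, not internal data.

ANNUAL_COST_FACTOR = 1 / 40  # cost falls roughly fortyfold each year

cost = 1.0  # normalized cost of one unit of intelligence today
for year in range(1, 4):
    cost *= ANNUAL_COST_FACTOR
    print(f"Year {year}: cost = {cost:.2e} of today's")

# Year 1: cost = 2.50e-02 of today's
# Year 2: cost = 6.25e-04 of today's
# Year 3: cost = 1.56e-05 of today's
```

After just three years of such a decline, a unit of intelligence would cost roughly one sixty-four-thousandth of what it does today, which is why even modest growth in appetite per dollar translates into enormous growth in total compute, and hence power, consumed.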

Altman’s personal ventures reflect his conviction that the future of AI ultimately hinges on breakthroughs in affordable, abundant, and clean energy. He has invested in a range of next-generation energy startups, including nuclear fission company Oklo, fusion innovator Helion, and solar enterprise Exowatt, which stores concentrated solar heat for later use. Yet none of these emerging technologies are ready for widespread commercial application today. Meanwhile, traditional energy projects such as natural gas power plants take years to permit, design, and build. Even orders placed immediately would likely not be fulfilled until late in the decade, illustrating the lag between digital ambition and physical reality.

This gap partly explains why so many technology companies have been accelerating their deployment of solar power. Solar offers several strong advantages: it is relatively inexpensive, emits no greenhouse gases during operation, and, most importantly for fast-moving tech firms, can be installed and scaled quickly. Structurally, the solar industry also mirrors the semiconductor sector these companies already understand well. Both are built on silicon substrates, and both produce modular components, solar panels and processing chips, that can be assembled in parallel arrays. Just as multiple chips can be combined to boost computing performance, numerous panels can be wired together to deliver significant electrical capacity. This structural similarity has made solar a particularly attractive and "de-risked" investment for companies seeking power generation aligned with their operational mindset.

Moreover, the modularity and relative speed with which solar installations can be deployed align well with the timelines for constructing new data centers. However, despite this compatibility, both undertakings still require considerable time and coordination to complete, meaning that rapid swings in AI demand can easily outpace the speed at which new power or infrastructure can realistically be brought online. Altman openly conceded that if AI models become drastically more efficient, or if adoption plateaus sooner than many expect, companies could find themselves burdened with power plants sitting idle—a costly and politically delicate outcome.

Nevertheless, his other remarks indicate that he views such an outcome as improbable. Altman subscribes to the economic concept known as *Jevons Paradox*, which posits that as technological efficiency increases and the cost of using a resource declines, total consumption of that resource paradoxically rises rather than falls. Under this logic, the more efficiently we can compute, the more applications, experiments, and services will emerge to consume that newfound capacity.

In practical terms, Altman illustrated this dynamic with a thought experiment: if the price of computing power for a given level of intelligence were to drop by a factor of one hundred overnight, usage would likely multiply far beyond that scale. An enormous wave of previously unfeasible ideas and products would suddenly become economically viable. Thus, rather than marking the end of high-power demand, efficiency gains in AI may only deepen society’s reliance on ever-larger quantities of energy to sustain its digital ambitions.
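
A rough way to see why efficiency gains can raise rather than lower total consumption is a toy constant-elasticity demand model, sketched below in Python. The functional form and the elasticity value of 1.5 are illustrative assumptions, not figures from the podcast; the point is simply that any price elasticity above 1 makes total spend on compute rise as its unit price falls.

```python
# Toy constant-elasticity demand model illustrating Jevons Paradox.
# The elasticity of 1.5 is a hypothetical assumption: with any
# elasticity above 1, total spend rises as the unit price falls.

def usage(price, base_usage=1.0, base_price=1.0, elasticity=1.5):
    """Demand under constant price elasticity: usage ~ price^(-elasticity)."""
    return base_usage * (price / base_price) ** (-elasticity)

for price in (1.0, 0.01):  # today's price, then Altman's 100x-cheaper case
    u = usage(price)
    print(f"price={price:>5}x: usage={u:,.0f}x, total spend={price * u:,.2f}x")

# price=  1.0x: usage=1x, total spend=1.00x
# price= 0.01x: usage=1,000x, total spend=10.00x
```

In this toy setting, a hundredfold price drop multiplies usage a thousandfold and lifts total spend tenfold, which is the shape of the dynamic Altman describes: cheaper intelligence begets more consumption, not less.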

Source: https://techcrunch.com/2025/11/03/altman-and-nadella-need-more-power-for-ai-but-theyre-not-sure-how-much/