OpenAI is currently regarded as one of the most valuable and strategically important customers in the cloud computing ecosystem, notable both for the scale of its operations and for the influence it exerts on the broader technology industry. That privileged position may not last, however, as the dynamics of competition and control over infrastructure continue to shift.

At a recent Goldman Sachs conference in San Francisco, OpenAI’s Chief Financial Officer, Sarah Friar, issued a pointed public warning to the technology giants that dominate the global cloud services sector. With striking candor, she remarked that major providers have effectively been “learning on our dime,” suggesting that OpenAI’s reliance on their platforms may inadvertently give them insight into the company’s proprietary methods and operational strategies. Her comments highlighted a central concern: the risk of unintentionally handing competitors the intellectual property that underpins OpenAI’s distinctive advantage in artificial intelligence.

The challenge is particularly acute because OpenAI, in order to support its immensely popular ChatGPT platform and its rapidly expanding enterprise AI business, requires extraordinarily large amounts of computing capacity. This necessity compels the company, at least for now, to work extensively with multiple industry-leading cloud suppliers, including Microsoft, Google, Oracle, and the rising challenger CoreWeave. These partnerships extend deep into OpenAI’s operational core, touching everything from the training of large-scale AI models to the inference tasks that users experience in real time through products and services.

During the same conference, Friar elaborated further, characterizing OpenAI’s current relationship with cloud vendors as a double-edged sword. On the one hand, these collaborations enable OpenAI to access the enormous scale of resources essential for training and deploying its models. On the other hand, the process allows providers to observe how the company structures and optimizes its computational workloads, effectively absorbing valuable lessons in the design of AI-specific infrastructure. As Friar put it, OpenAI must remain vigilant to ensure that as knowledge is transferred, its most critical intellectual property—the foundation of its competitive edge—is not inadvertently given away.

This concern is now motivating OpenAI to explore a strategic redirection: the gradual development of its own proprietary data centers, which Friar referred to as “first-party builds.” Such a move would represent a major departure from today’s heavy dependence on outside providers. She acknowledged that OpenAI is already initiating conversations around massive undertakings, including the ambitious “Stargate” projects, which currently involve collaborations with multiple cloud leaders. Over time, however, these projects may evolve into OpenAI-owned and independently operated facilities.

Friar described the approach to this transition as unfolding in three stages. The initial phase involves purchasing computing capacity in standard form directly from outside providers—a straightforward but dependency-heavy model. The subsequent stage is characterized by more collaborative engagements, where OpenAI works closely with its partners and gains nuanced knowledge about designing, maintaining, and scaling highly specialized AI data centers. The anticipated final step, she explained, will see OpenAI increasingly taking control of the process by constructing and managing such facilities itself, effectively shifting toward true infrastructure self-sufficiency.

While this strategy promises greater independence and long-term protection of intellectual property, it also introduces risks for cloud providers who have so far benefited handsomely from the immense demand generated by AI companies. At present, the competitive environment is defined by an acute shortage of computational capacity, which forces AI innovators to rent resources at significant cost. Should OpenAI ultimately succeed in building its own dedicated facilities, it could shift from being one of the cloud sector’s most lucrative clients to one of its most formidable competitors.

Nonetheless, such an outcome is far from imminent. Building state-of-the-art AI data centers demands staggering financial investment and intricate engineering, and it also means contending with supply constraints in critical areas. A global shortage of AI-optimized chips, combined with limited energy capacity and skilled labor, is slowing the pace at which new infrastructure can be deployed. These factors suggest that, despite OpenAI’s stated intent and long-term trajectory, the transformation into a fully independent infrastructure operator will likely unfold gradually, over several years if not longer.

In short, while OpenAI remains deeply reliant on its cloud partners today, the seeds have been planted for a strategic shift that could eventually redefine the balance of power between pioneering AI startups and the corporate giants that dominate global computing infrastructure markets. The remarks from Friar were not simply a critique of the current dynamic, but also a declaration of OpenAI’s ambitions to assert greater control over its technological destiny and safeguard the vital intellectual property at the heart of its success.

Source: https://www.businessinsider.com/openai-cfo-sarah-friar-cloud-giants-learning-on-our-dime-2025-9