This piece is from *Sources*, Alex Heath's newsletter on artificial intelligence and the broader technology industry, distributed weekly to subscribers of *The Verge* and offering a curated mix of insight, analysis, and commentary on the business of AI.
At the recent DealBook Summit, Anthropic co‑founder and CEO Dario Amodei delivered a pointed but carefully measured message, one that sought to distinguish his company's vision and operating philosophy from those of its peers while deliberately refraining from naming any rival. Through much of his conversation with journalist Andrew Ross Sorkin, Amodei chose his words with evident precision, drawing boundaries between Anthropic's idea of responsible growth and the more aggressive tactics he obliquely attributed to others in the industry.
When Sorkin asked whether the current surge in AI activity amounts to an economic bubble, Amodei split the question into two parts: the ‘technological side’ and the ‘economic side.’ On the first, he expressed strong conviction that recent advances in AI models and infrastructure are real and durable. On the second, he was more cautious: even if today's AI capabilities deliver fully on their promise, market participants who misjudge timing by even a small margin could trigger destabilizing consequences. Success in AI, in other words, is not determined solely by breakthroughs in computation or model performance; it depends equally on sober judgment about the pace of capital deployment.
Pressed by Sorkin to say whom he had in mind, Amodei declined to name OpenAI or its CEO Sam Altman directly, but the reference was hardly ambiguous. He described certain industry players as ‘YOLOing’: taking outsized risks driven by ambition or ego rather than measured prudence. Some executives, he suggested, are temperamentally inclined toward maximalist bets, chasing monumental scale or record‑setting metrics without adequately accounting for downside risk. In Amodei's view, that mindset can push the dial too far and turn innovation into overreach.
The conversation also turned to the increasingly common practice of so‑called ‘circular deals’: arrangements in which hardware makers, most notably chipmakers like Nvidia, invest heavily in AI startups that then spend those funds on the investors' own processors. Amodei acknowledged that Anthropic has participated in this kind of reciprocal funding, though at a considerably smaller scale than other firms. He also explained the economics behind these transactions: building a modern gigawatt‑scale data center can cost roughly ten billion dollars over five years. A hardware vendor might front part of that capital, with the AI company paying down its share incrementally as revenue scales. Managed prudently and transparently, Amodei argued, these partnerships can make sense, but they become risky when paired with unrealistic revenue projections or unrestrained expansion.
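The arithmetic behind vendor financing can be made concrete with a small sketch. All figures below are illustrative assumptions (the cost and the five‑year horizon come from Amodei's example; the vendor share, payout ratio, and growth rates are hypothetical, not actual deal terms):

```python
# Sketch of the 'circular deal' arithmetic: a $10B data center built over
# five years, partly vendor-financed, repaid out of growing revenue.
# VENDOR_SHARE and the payout/growth parameters are illustrative guesses.

TOTAL_COST_B = 10.0   # $10B build-out (Amodei's example)
VENDOR_SHARE = 0.5    # assume the chipmaker fronts half (hypothetical)
YEARS = 5

def affordable(revenue_y1_b: float, growth: float, payout_ratio: float) -> bool:
    """Can `payout_ratio` of each year's revenue cover equal annual
    repayments of the vendor-financed portion of the build cost?"""
    annual_payment = TOTAL_COST_B * VENDOR_SHARE / YEARS  # $1B/yr here
    revenue = revenue_y1_b
    for _ in range(YEARS):
        if revenue * payout_ratio < annual_payment:
            return False  # a repayment comes due that revenue can't cover
        revenue *= growth
    return True

# Fast revenue growth absorbs the payments comfortably...
print(affordable(revenue_y1_b=2.0, growth=2.0, payout_ratio=0.6))  # True
# ...but the same deal breaks if growth disappoints.
print(affordable(revenue_y1_b=2.0, growth=1.1, payout_ratio=0.4))  # False
```

The point of the toy model matches Amodei's: the same financing structure is sound or ruinous depending almost entirely on whether the revenue forecast holds.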
Without naming any organization, Amodei alluded to the eye‑catching figures widely discussed in connection with OpenAI's massive compute build‑outs. There is nothing inherently inappropriate, he said, about amassing significant infrastructure investment; problems arise when capital commitments escalate beyond what future income can plausibly justify. He offered a hypothetical: if a company's expansion plan requires it to generate two hundred billion dollars annually by 2027 or 2028, even minor misalignments between projections and market realities could lead to overextension and potentially catastrophic financial strain.
At the core of Amodei's perspective is a framework he has popularized within Anthropic's internal planning: the ‘cone of uncertainty,’ the range of plausible financial outcomes as the company projects its growth. Anthropic's revenue, he said, has multiplied roughly tenfold each year for three consecutive years: from nothing to approximately one hundred million dollars in 2023, to about one billion in 2024, and potentially to between eight and ten billion by the end of the current year. For comparison, Sam Altman has publicly estimated that OpenAI could end 2025 with an annualized revenue run rate exceeding twenty billion dollars. Amodei, however, underscored how tenuous such forecasting is: even he cannot predict whether Anthropic will reach twenty billion or fifty billion next year. That degree of uncertainty, he explained, is intrinsic to the industry's volatile trajectory.
This unpredictability is especially problematic because the physical infrastructure underpinning AI systems, large‑scale data centers, takes years to plan, finance, and construct. With build cycles of one to two years, decisions affecting computational capacity in 2027 must be finalized today. The dilemma is stark: a company that underestimates its future needs risks losing customers to competitors with greater capacity, while one that overcommits to expansion risks severe capital strain or even bankruptcy. According to Amodei, how much safety margin a company has within this ‘cone of uncertainty’ depends largely on its profit margins and its tolerance for risk.
He articulated Anthropic's strategy as maintaining enough purchasing capacity to withstand even pessimistic market scenarios, the so‑called tenth‑percentile outcomes, while still managing tail risks prudently. By emphasizing enterprise customers, whose contracts often carry higher margins and predictable renewal cycles, Anthropic positions itself as structurally safer than firms focused predominantly on volatile consumer markets. As Amodei concluded succinctly, his company operates with enough foresight and stability that it need not resort to emergency measures or ‘code red’ responses to keep pace. His remarks ultimately conveyed a vision of sustainable innovation: one in which deliberate planning, fiscal restraint, and ethical stewardship form the foundation for enduring success in a turbulent industry.
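The planning logic Amodei describes can be sketched as a simple simulation: sample a spread of next‑year revenue outcomes, then size compute commitments against the pessimistic tail rather than the median. The distribution and every parameter below are illustrative assumptions loosely calibrated to the figures in his remarks, not Anthropic data:

```python
# Hedged sketch of the 'cone of uncertainty': simulate next-year revenue
# scenarios and budget compute against the 10th-percentile outcome.
# The lognormal growth distribution and the 50% spending cap are
# illustrative assumptions, not Anthropic's actual model.
import random

random.seed(0)  # reproducible illustration

def revenue_scenarios(current_b: float, n: int = 10_000) -> list[float]:
    """Sample next-year revenue in $B. The growth multiple is lognormal,
    with a median near 3x (a guess consistent with a 10x-slowing curve)."""
    return sorted(current_b * random.lognormvariate(1.2, 0.5)
                  for _ in range(n))

def percentile(sorted_vals: list[float], p: float) -> float:
    return sorted_vals[int(p * (len(sorted_vals) - 1))]

scenarios = revenue_scenarios(current_b=9.0)  # ~$8-10B current run rate
p10 = percentile(scenarios, 0.10)             # pessimistic outcome
p90 = percentile(scenarios, 0.90)             # optimistic outcome

# Commit only what the pessimistic case can fund: e.g. cap compute
# spending at 50% of 10th-percentile revenue (hypothetical ratio).
safe_compute_budget = 0.5 * p10
print(f"10th pct: ${p10:.1f}B, 90th pct: ${p90:.1f}B, "
      f"safe compute budget: ${safe_compute_budget:.1f}B")
```

The wide gap between the 10th and 90th percentiles is the ‘cone’ itself; budgeting to the lower edge is what lets a company survive the downside without a ‘code red’ when the upside fails to materialize.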
Source: https://www.theverge.com/column/837779/anthropic-ai-bubble-warning