Over the past summer, the global technology sector leapt headfirst onto what many have begun calling the artificial intelligence “crazy train.” This surge of enthusiasm, marked by frenetic investment, relentless product announcements, and a pervasive sense of inevitability about AI’s dominance, shows no sign of slowing. The momentum continues to accelerate, feeding on innovation and speculation in equal measure.

The newest and perhaps most potent source of acceleration for this AI boom originates not from Silicon Valley but from the heart of the financial world — Wall Street. Increasingly, sophisticated borrowing schemes and innovative, sometimes opaque financial arrangements are being used to bankroll these ambitious AI ventures. Many of these deals rely on circular funding models, where money is recycled through intricate webs of credit, creating layers of leverage that resemble the structured financing tools once popular before previous economic downturns. This influx of speculative capital adds fuel to an industry already running at full speed.

In moments when financial complexity begins to obscure the underlying fundamentals, I turn to the insights of Dakin Campbell, a seasoned Business Insider journalist who has spent nearly twenty years chronicling the rhythms and crises of Wall Street. Campbell recently authored an in-depth piece exploring this latest AI financing frenzy, dissecting how the same patterns that sparked earlier bubbles may be reappearing within today’s technology ecosystem. I invited him to share his thoughts more directly.

**Alistair Barr:** Structured credit instruments are becoming increasingly visible in the financing of AI infrastructure projects. Does that trend concern you at all?

**Dakin Campbell:** The simple, almost reflexive answer is yes, it prompts concern — we have, in a sense, witnessed this narrative unfold before. Structured credit itself is not inherently perilous; it is a tool, and like any tool, its impact depends on how responsibly it is used. The true risk lies in the way such methods disperse exposure across the financial system. By distributing risk among countless participants, these instruments can render the overall picture opaque, making it substantially more difficult to monitor and assess potential points of failure. This complexity creates challenges not only for investors attempting to gauge value but also for regulators, journalists, and analysts — the individuals and institutions who collectively serve as society’s counterweight against systemic excess. So yes, the worry is real; these mechanisms can unintentionally camouflage danger behind financial sophistication.

**Question:** Do technology founders such as Mark Zuckerberg or Sam Altman genuinely prioritize the eventual returns for their investors, or are they primarily driven by the singular goal of dominating the modern AI race?

**Campbell:** At a fundamental level, I do think that figures like Zuckerberg, Altman, and their contemporaries are convinced there is substantial profit to be made from AI. They believe this industry will mature into an extremely lucrative enterprise, perhaps the defining one of their generation. Yet, intertwined with those profit motives is a powerful personal dimension — a deep-seated belief in their own potential to shape history. Their ambitions extend beyond business success toward the almost mythic aspiration of ushering in artificial general intelligence, or AGI, a transformative milestone often depicted in science fiction. These individuals grew up absorbing imaginative stories about powerful machines and digital minds. That early influence, I suspect, still fuels their determination and cannot be separated from what drives them today.

**Question:** Some compare the current AI infrastructure buildout to the construction of railroads in the nineteenth century: enormous upfront investment, heavy early losses, and lasting value in the end. Is that comparison apt?

**Campbell:** It is an elegant analogy but only partially accurate. Railroad tracks and locomotives are enduring physical assets, built to last generations. By contrast, AI infrastructure depends heavily on graphics processing units, or GPUs, which depreciate rapidly. As tech writer Paul Kedrosky notes, roughly sixty percent of data center costs stem from GPUs alone. Whether their value is amortized over three years or six, the useful life remains short. What remains — the building’s structural shell, its cooling systems, and electrical grid — accounts for less than half of total expenditures. So unlike the railroads, the bulk of today’s AI capital is directed toward assets that age quickly and demand continual replacement. If we stretch for a closer parallel, the fiber-optic overbuild of the early dot-com era is a more instructive precedent, since fiber networks endure far longer than the rapidly replaced GPUs powering present-day AI systems.
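
To make the depreciation point concrete, here is a minimal back-of-the-envelope sketch in Python. The 60/40 cost split reflects the Kedrosky estimate cited above; the total budget, the 25-year life for the building shell, and the dollar figures are purely illustrative assumptions, not figures from the interview.

```python
# Back-of-the-envelope view of why GPU-heavy spending ages faster than
# railroad-style infrastructure. All dollar figures are hypothetical.

TOTAL_BUDGET = 1_000_000_000      # illustrative $1B data center build
GPU_SHARE = 0.60                  # ~60% of cost in GPUs (Kedrosky estimate)
SHELL_SHARE = 1.0 - GPU_SHARE     # building shell, cooling, electrical grid

def annual_depreciation(cost: float, useful_life_years: int) -> float:
    """Straight-line depreciation: equal expense in each year of the asset's life."""
    return cost / useful_life_years

gpu_cost = TOTAL_BUDGET * GPU_SHARE
shell_cost = TOTAL_BUDGET * SHELL_SHARE

for gpu_life in (3, 6):           # the three- vs six-year amortization debate
    gpu_annual = annual_depreciation(gpu_cost, gpu_life)
    shell_annual = annual_depreciation(shell_cost, 25)   # long-lived assets (assumed)
    total_annual = gpu_annual + shell_annual
    print(
        f"GPU life {gpu_life}y: GPUs ${gpu_annual/1e6:.0f}M/yr, "
        f"shell ${shell_annual/1e6:.0f}M/yr, total ${total_annual/1e6:.0f}M/yr "
        f"({gpu_annual/total_annual:.0%} of the annual cost is fast-depreciating)"
    )
```

Even under the more generous six-year schedule, the fast-depreciating hardware dominates the annual cost in this sketch, which is the crux of Campbell's contrast with railroads.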

**Question:** Can the growing demand for inference — the process of running AI models to generate practical outputs — alone sustain this boom, or will long-term viability depend upon the creation of concrete, marketable AI products?

**Campbell:** Inference is the stage where trained AI models are actually put to work, delivering answers, insights, or predictions to end users, so demand for inference is really a proxy for demand for real-world products. The industry cannot thrive indefinitely on experimentation; it must eventually create reliable, repeatable products that businesses and consumers find valuable enough to pay for. Increasingly, voices within the field are advocating a pragmatic pivot — moving away from distant dreams of superintelligence or AGI and focusing instead on applying current AI capabilities to immediate, tangible problems. Leading researchers argue that we remain several breakthroughs away from genuine AGI. Viewed through that lens, it seems inevitable that investors, markets, and perhaps even public sentiment will push corporate leaders to prioritize solvable, real-world challenges over abstract long-term quests.

**Question:** From your personal experience, have you found meaningful, everyday value from generative AI tools?

**Campbell:** Absolutely — at least in certain domains. For example, some of my friends rely enthusiastically on tools like Grammarly to polish their writing and enhance clarity. Personally, I often use generative models as research assistants, prompting them to brainstorm ideas and expand on complex concepts. In these creative or exploratory contexts, I have indeed found them useful. However, when I require consistent, verifiable results — such as when asking a model to analyze a set of documents and produce conclusions strictly bound to the provided data — performance declines considerably. The technology, while promising, still struggles to deliver accuracy and consistency when precision truly matters. So while I recognize its enormous potential, the sentiment I encounter among professionals is remarkably consistent: people want AI systems that can solve problems seamlessly and repeatedly, without demanding that users master the art of perfect prompting. That milestone, to most of us, still feels somewhat distant.

Readers who want to stay up to date can subscribe to Business Insider’s Tech Memo newsletter for ongoing coverage of these technological shifts. I welcome feedback or conversation via email at abarr@businessinsider.com.

Source: https://www.businessinsider.com/wall-street-fueling-ai-bubble-crazy-train-2025-10