During a highly anticipated live Q&A held on Tuesday, OpenAI CEO Sam Altman fielded a series of probing and often challenging questions from the audience, and his answers proved nearly as revealing as the questions themselves. Over the course of the conversation, Altman laid out OpenAI’s evolving view of artificial general intelligence (AGI) and his broader vision for the company’s trajectory. He also acknowledged recent missteps in communication and policy, reassured users nostalgic for earlier model versions, and signaled that OpenAI intends to expand freedoms for adult users within clearly defined ethical and operational limits. The exchange offered rare clarity into Altman’s thinking, punctuated by several statements that capture OpenAI’s current priorities and ambitions.
Among the most striking declarations was Altman’s aspiration for the company to build what he called an “infrastructure factory”: a system capable of bringing online the equivalent of one gigawatt of computational capacity each week. The vision reflects OpenAI’s increasingly industrial approach to AI capacity building. Through a succession of agreements and partnerships, the company has already secured commitments totaling approximately $1.4 trillion in future AI infrastructure spending, which is expected to yield roughly 30 gigawatts of new compute over the next several years. According to Altman, that is only the beginning: if OpenAI’s research continues to progress at its current pace and consumer demand holds, the company would ideally move toward adding a gigawatt of capacity per week. For perspective, Nvidia CEO Jensen Huang has said that ten gigawatts translates to roughly four to five million GPUs, underscoring the scale of Altman’s goal. Still, Altman tempered expectations, clarifying that OpenAI has not yet committed to this plan and is presently in exploratory discussions. He emphasized that a key objective is to drive the cost of such an expansion down dramatically, ideally to about twenty billion dollars per gigawatt over a five-year equipment lifecycle.
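The figures above can be sanity-checked with a quick back-of-envelope calculation. All inputs come from the article itself; the implied cost per gigawatt is a derived estimate for illustration, not a number OpenAI has published:

```python
# Back-of-envelope arithmetic on the capacity and cost figures cited
# by Altman and Huang. Inputs are from the article; the per-gigawatt
# cost comparison is a derived estimate.

committed_spend_usd = 1.4e12   # ~$1.4 trillion in committed infrastructure spending
committed_capacity_gw = 30     # expected to yield ~30 GW of new compute

# Jensen Huang: ten gigawatts corresponds to roughly 4-5 million GPUs.
gpus_per_gw_low = 4_000_000 / 10    # 400,000 GPUs per GW (low end)
gpus_per_gw_high = 5_000_000 / 10   # 500,000 GPUs per GW (high end)

# Altman's aspirational build rate: one gigawatt of new capacity per week.
print(f"1 GW/week implies roughly {gpus_per_gw_low:,.0f} to "
      f"{gpus_per_gw_high:,.0f} GPUs deployed per week")

# Cost per GW implied by today's commitments, versus Altman's stated target.
implied_cost_per_gw = committed_spend_usd / committed_capacity_gw
print(f"Current commitments imply ~${implied_cost_per_gw / 1e9:.0f}B per GW")
print("Altman's target: ~$20B per GW over a five-year equipment lifecycle")
```

On these numbers, today’s commitments work out to roughly $47 billion per gigawatt, so Altman’s twenty-billion-dollar target would require cutting costs by more than half.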
Another moment of candid introspection came when Altman addressed a recent misstep: his reference to erotica in an online post meant to illustrate OpenAI’s approach to user autonomy. Reflecting on the choice of example, Altman admitted he regretted invoking such a sensitive subject. His intention, he explained, had been to underscore the principle of trust between the company and its users, particularly regarding how adults interact with generative models. Figures such as Mark Cuban publicly questioned Altman’s phrasing, fueling further debate about responsible content moderation. During the Q&A, Altman clarified that he had meant to distinguish artistic or literary erotica from more commercialized or exploitative content, but that the nuance was lost. He reiterated that OpenAI’s guiding objective is to treat adults like adults, giving them latitude to use AI systems responsibly and creatively, while still maintaining clear moral and safety boundaries.
As has become customary, one of the audience’s chief concerns was OpenAI’s handling of older models, particularly the well-loved 4o. Users who had formed unexpected attachments to that version voiced frustration when OpenAI initially removed it following the debut of GPT-5; the online backlash was immediate and intense, prompting the company to restore access for paid subscribers. Addressing these concerns directly, Altman reaffirmed that there are no current plans to retire or “sunset” 4o. He nonetheless qualified that reassurance, explaining that while OpenAI recognizes and values the loyalty of its community, it cannot guarantee indefinite support, “not until the death of the universe,” as he put it. The underlying message was one of balance: a company striving to innovate relentlessly without disregarding the preferences and sentiments of its users.
The issue of user freedom surfaced again when participants asked whether adults who underwent age verification could gain expanded access to potentially restricted features if they accepted full liability. Altman’s answer was firm yet nuanced. He emphasized that OpenAI is committed to granting verified adults greater flexibility—more freedom to customize experiences and use models creatively—yet even such autonomy has limits. With characteristic directness, he declared that OpenAI would never engage in the moral equivalent of “selling heroin,” regardless of any waivers or legal protections provided by users. In other words, the company will ensure that age verification enhances choice and authenticity without compromising ethical responsibility.
Perhaps the most ambitious statement came when the discussion turned to OpenAI’s long-standing pursuit of AGI. The same day marked a milestone: OpenAI had finalized its corporate restructuring and reached a definitive agreement with Microsoft, one of its earliest and most influential investors. Both events underscored OpenAI’s ongoing transformation from a research-focused organization into a vast, operational entity shaping the future of intelligence. Altman and OpenAI’s chief scientist, Jakub Pachocki, used the moment to propose a more pragmatic framework for discussing AGI—one rooted not in philosophical definitions but in tangible milestones. They revealed internal goals aiming to produce an automated AI research intern by September 2026, with a longer-term objective of achieving a fully autonomous AI researcher by March 2028. In Altman’s words, it is “much more useful” to articulate a specific intention—such as developing a true automated AI researcher by a predetermined date—than to argue endlessly over abstract interpretations of what constitutes AGI. This shift represents OpenAI’s strategic focus on measurable progress and real-world applications, reflecting an ethos of transparency and technical precision.
Altman’s wide-ranging remarks, spanning infrastructure ambitions, personal accountability, user empowerment, and the philosophical evolution of AGI, collectively painted a portrait of a leader grappling with the immense complexity of guiding one of the most consequential technology organizations of the century. Each statement seemed to reveal both a vision for exponential scale and a deep awareness of the moral responsibilities required to shape the future of artificial intelligence responsibly.
Source: https://www.businessinsider.com/sam-altman-open-ai-livestream-quotes-2025-10