The public reaction to Sam Altman’s announcement on Tuesday regarding forthcoming changes to ChatGPT—particularly the controversial inclusion of adult-oriented or erotic material—took the OpenAI chief executive by surprise. As online discussion intensified, Altman wrote on X (formerly Twitter) that the debate surrounding the erotica aspect had spread far beyond his expectations. He emphasized that the feature was never intended to dominate the conversation; rather, it was a single example of OpenAI’s broader intent to grant mature users greater autonomy and expressive latitude within ethical bounds.

In his initial disclosure, Altman said that by December, ChatGPT would receive a significant update designed to make it competitive with offerings such as Grok, the system built by Elon Musk’s xAI. The update, he explained, would permit responsible adult engagement with more sensual or mature content, acknowledging a growing market for AI tools that accommodate the full spectrum of human interests. Altman was quick to clarify in subsequent remarks, however, that this new direction would not dilute the platform’s existing commitments in sensitive areas such as mental health support. The underlying goal, he underscored, was not to sensationalize the software but to give capable adults the freedom to shape their own experiences responsibly.

Altman elaborated that OpenAI does not aim to act as a universal moral authority. Drawing an analogy to long-established conventions such as film rating systems, which distinguish between age-appropriate categories like R and PG-13, he said the company wants to develop comparable boundaries for artificial intelligence. In doing so, OpenAI hopes to mirror the structure of mature-content regulation that has guided other industries for decades, offering adults freedom while shielding younger users from potential harm.

At the same time, Altman assured observers that ChatGPT will continue to place the protection of minors above all else. The platform’s child-safety protocols, he reiterated, will remain rigorous—prioritizing security and well-being over privacy and unfettered freedom for younger audiences. This approach stems from the belief that adolescents and children deserve significant protective measures when encountering advanced conversational technologies.

Nevertheless, the proposed policy changes drew immediate criticism. Among those voicing concern was entrepreneur and former “Shark Tank” investor Mark Cuban, who argued that any new age restrictions would likely prove limited in practice. Cuban warned on X that such measures might backfire, predicting that skeptical parents would doubt OpenAI’s ability to prevent minors from bypassing age-gating systems. In his view, that skepticism could push families toward competing large language models, inadvertently undercutting the original safety goals.

Altman also said that ChatGPT will treat users differently depending on their mental and emotional state. Specifically, he noted that individuals who appear to be experiencing mental health crises will be handled in a categorically different way from those who are not. The model will continue to refuse content or interactions that could lead to harm—whether self-inflicted or directed at others—though Altman did not specify how the system would identify users in distress or which kinds of requests would be restricted.

When asked for further clarification, OpenAI declined to provide additional details, leaving open questions about the exact mechanisms behind the company’s moderation and detection strategies. Despite the ambiguity, Altman reaffirmed one central principle: OpenAI intends to support users without adopting an overly paternalistic stance. The company’s aspiration, he said, is to assist people in achieving their own long-term objectives with integrity and care, respecting their autonomy while ensuring that engagement with AI technology remains consistent with both ethical responsibility and personal safety.

Source: https://www.businessinsider.com/sam-altman-openai-not-moral-police-after-chatgpt-erotica-announcement-2025-10