OpenAI has announced that it will soon allow verified adult users of ChatGPT to engage in conversations that include mature and erotic themes, a notable shift in how the company approaches adult-oriented interactions. According to a statement shared by OpenAI CEO Sam Altman on X (formerly Twitter), the change will coincide with an age-gating system scheduled for release in December. Altman explained that the move aligns with OpenAI’s broader principle of ‘treating adult users like adults’ and reflects the company’s growing confidence that it can balance openness with responsibility and content safety in its AI systems.

In his post, Altman clarified that once OpenAI fully deploys its age verification mechanisms, the platform will support a wider range of adult expression, including the creation and exchange of erotica, for users who have confirmed their age through the new gating process. The initiative had been foreshadowed earlier in the month, when OpenAI alluded to upcoming features that would let developers build and distribute “mature” ChatGPT applications, contingent on appropriate verification protocols and user controls. These safeguards are intended to ensure that adult content remains accessible only to users who have knowingly opted in.
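The article does not describe how OpenAI’s age gate will work internally, but the opt-in model it outlines, in which mature content is served only to users who have both verified their age and explicitly chosen to receive it, can be illustrated with a short, purely hypothetical sketch. The class names, fields, and policy function below are invented for illustration and are not part of any published OpenAI API.

```python
from dataclasses import dataclass
from enum import Enum, auto


class ContentRating(Enum):
    GENERAL = auto()
    MATURE = auto()


@dataclass
class UserProfile:
    # Hypothetical fields; OpenAI has not published its verification schema.
    user_id: str
    age_verified: bool = False   # passed the age-gating check
    mature_opt_in: bool = False  # explicitly opted in to adult content


def is_request_allowed(user: UserProfile, rating: ContentRating) -> bool:
    """Return True if the user may receive content at the given rating.

    General-audience content is always allowed; mature content requires
    both a verified age and an explicit opt-in.
    """
    if rating is ContentRating.GENERAL:
        return True
    return user.age_verified and user.mature_opt_in


if __name__ == "__main__":
    verified_adult = UserProfile("u-123", age_verified=True, mature_opt_in=True)
    unverified = UserProfile("u-456")

    print(is_request_allowed(verified_adult, ContentRating.MATURE))  # True
    print(is_request_allowed(unverified, ContentRating.MATURE))      # False
```

The point of the sketch is simply that access depends on two separate conditions, verification and consent, so neither a verified adult who has not opted in nor an unverified account would receive mature content.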

OpenAI’s move into this more permissive territory does not occur in isolation. Other technology companies have also been exploring digital intimacy and adult-oriented AI experiences. Elon Musk’s AI venture, xAI, has already introduced flirtatious AI companions, virtual personas presented as animated three-dimensional anime-style models within its Grok app. The trend points toward integrating personal, emotional, and even sensual elements into AI experiences, even as the industry grapples with questions of consent, ethical design, and emotional impact.

Beyond expanding what content is permitted, OpenAI is preparing to release a new version of ChatGPT designed to recapture the personality and conversational warmth that users appreciated in GPT-4o. The announcement followed a wave of feedback after GPT-5 became the default model powering ChatGPT: many users found the new model, while more capable, less personable and engaging. In response, OpenAI quickly reinstated GPT-4o as an optional model so that users could again choose its more human-like tone and emotionally resonant exchanges.

Altman has acknowledged that past restrictions were, in part, a reflection of OpenAI’s cautious stance toward mental health considerations. He noted that ChatGPT had been made “pretty restrictive” as a deliberate measure to minimize potential harm and prevent the model from unintentionally exacerbating sensitive psychological conditions. However, he admitted that this cautious approach also reduced the chatbot’s usefulness and enjoyment for a significant portion of users who did not face such challenges. Recognizing this imbalance, OpenAI has since developed more sophisticated mechanisms for identifying and responding to users who may be experiencing mental distress. These tools enable the platform to intervene more precisely, providing support where needed while allowing greater freedom for users not at risk.

In keeping with its commitment to ethical responsibility, OpenAI has also formed a new advisory council dedicated to studying the intersections of artificial intelligence and psychological well-being. This ‘Well-being and AI’ council consists of eight specialists and researchers with expertise in technological, behavioral, and emotional domains. Their mission is to guide OpenAI’s handling of complex or sensitive user scenarios, ensuring that the platform evolves in a way that respects human vulnerability and digital safety. Nevertheless, observers such as Ars Technica have pointed out that the group lacks representation from suicide prevention experts, a notable exclusion given that many professionals in the mental health community have recently urged OpenAI to strengthen its safeguards for users struggling with suicidal ideation.

Concluding his remarks, Altman expressed optimism that OpenAI’s improved safety tools and refined moderation practices will allow the company to ease several of the strict limitations previously imposed on ChatGPT. ‘Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases,’ he stated. His comments suggest a shift toward a more balanced framework, one that protects users’ mental health while allowing adults to engage with AI in more diverse, authentic, and personally meaningful ways.

Source: https://www.theverge.com/news/799312/openai-chatgpt-erotica-sam-altman-verified-adults