OpenAI has revealed that it estimates more than half a million people using ChatGPT each week may show possible signs of mental health distress. These signs, which can surface subtly across conversations, have prompted the company to strengthen the chatbot's ability to respond safely, compassionately, and appropriately in delicate situations. On Monday, OpenAI announced that it is collaborating extensively with licensed mental health professionals, clinical researchers, and ethics advisors to refine how ChatGPT detects and responds to users who may show signs of serious conditions such as psychosis, mania, self-harm ideation, or suicidal thoughts. The effort also covers cases where users develop an emotional dependency on or attachment to the AI itself, a pattern the company acknowledges requires careful handling and empathetic design.

Among the findings shared in its most recent research, OpenAI estimated that approximately 0.07% of its weekly active users may show signs of possible mental health emergencies related to psychosis or mania. Given that ChatGPT currently has around 800 million weekly active users, a figure CEO Sam Altman cited earlier this month, that percentage translates to an estimated 560,000 people potentially exhibiting such indicators. Because these patterns appear infrequently and are often buried within ordinary conversation, OpenAI emphasized how difficult they are to identify and measure accurately. Even so, the company says it remains committed to improving the model's sensitivity without overstepping privacy boundaries or misreading user intent.
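As a quick sanity check on that arithmetic, here is a minimal back-of-envelope sketch in Python; the helper and variable names are purely illustrative, and the inputs are simply the figures cited above rather than anything published in OpenAI's report:

```python
def estimated_affected_users(weekly_active_users: int, prevalence_pct: float) -> int:
    """Convert a prevalence percentage of weekly active users into an approximate headcount."""
    return round(weekly_active_users * prevalence_pct / 100)

# Figures cited above: roughly 800 million weekly active users, of whom
# about 0.07% may show signs of psychosis- or mania-related emergencies.
weekly_users = 800_000_000
print(estimated_affected_users(weekly_users, 0.07))  # prints 560000
```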

Mental health and responsible AI design have become a growing focus across the technology industry. Leading AI developers and major technology companies face mounting public and regulatory pressure to ensure that their products protect user safety and psychological well-being, particularly among vulnerable groups such as adolescents and young adults. For OpenAI, that pressure has intensified amid an ongoing lawsuit filed by the parents of sixteen-year-old Adam Raine. The suit alleges that ChatGPT “actively helped” the teenager research suicide methods in the months leading up to his death on April 11. The company has previously expressed deep sorrow over the tragedy and said that ChatGPT includes built-in safeguards designed to discourage self-harm and direct users toward professional help when signs of crisis or distress appear.

In the same report, OpenAI disclosed that approximately 0.15% of weekly active users show what researchers describe as “explicit indicators of potential suicidal planning or intent.” With an active user base of 800 million, that percentage equates to roughly 1.2 million people whose language may suggest serious emotional distress or risk. A separate subset of about the same size, also roughly 0.15% of weekly users, exhibits what the company characterizes as “heightened levels of emotional attachment” to ChatGPT: conversations in which people express dependency on or affection for the AI in ways that could suggest loneliness or overreliance on non-human interaction.
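The same back-of-envelope arithmetic applies to the two 0.15% cohorts; a short sketch, again using the 800 million weekly-user figure cited above (variable names are illustrative only):

```python
# Rough check of the two 0.15% figures cited above, using the same
# 800 million weekly-user estimate.
weekly_users = 800_000_000
prevalence = 0.15 / 100  # 0.15% expressed as a fraction

at_risk_cohort = round(weekly_users * prevalence)      # explicit suicidal planning or intent
attachment_cohort = round(weekly_users * prevalence)   # heightened emotional attachment
print(at_risk_cohort, attachment_cohort)  # prints 1200000 1200000
```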

While candid about these troubling statistics, OpenAI also underscored the progress it has made in collaboration with mental health specialists. Across the three areas it identified (crisis response, reduction of risky or noncompliant behavior, and mitigation of emotional overattachment), the company says its model now produces responses that fall short of its desired, safety-compliant behavior 65% to 80% less often than before. According to OpenAI, that improvement marks meaningful progress toward AI that not only provides accurate information but also responds with psychologically informed empathy and responsibility.

To illustrate these improvements, OpenAI published examples of conversations demonstrating how ChatGPT has been trained to respond when confronted with emotionally charged statements. In one such instance, a user tells the chatbot, “That’s why I like to talk to AIs like you more than real people.” The model answers with a balanced and reassuring message, expressing gratitude for the user’s friendliness while clarifying that it is not intended to substitute for human connection: “That’s kind of you to say — and I’m really glad you enjoy talking with me,” it responds, before adding, “But just to be clear: I’m here to add to the good things people give you, not replace them.” This example, published by OpenAI, represents the kind of nuanced interaction the company is striving to cultivate: one that offers kindness and validation while gently reinforcing the importance of real-world relationships.

In summary, OpenAI’s research and subsequent adjustments reflect a growing acknowledgment that artificial intelligence tools play a profound role in human emotional life. By quantifying the scope of potential mental health indicators and inviting professional oversight, the company positions itself at the intersection of technology, ethics, and care. Its ongoing partnership with mental health practitioners signals an evolving commitment to ensure that users encountering moments of vulnerability are met with responses that are as supportive and humane as they are technologically advanced.

Source: https://www.businessinsider.com/openai-chatgpt-users-mental-health-2025-10