OpenAI has clarified that ChatGPT’s behavior and underlying rules have not changed, despite misinformation spreading rapidly across social media. The viral posts inaccurately claimed that the company’s latest policy update introduced sweeping new restrictions preventing the chatbot from sharing legal or medical information. In response, OpenAI said this interpretation is incorrect and that no substantive change has been made to its guidelines.
Karan Singhal, OpenAI’s Head of Health AI, addressed the misunderstanding publicly on X. He stated that the circulating claims were unfounded and emphasized that OpenAI’s stance had not shifted. Singhal reiterated that while ChatGPT has never been intended to substitute for licensed professionals such as doctors or attorneys, it remains a valuable educational resource. Users can rely on it to help interpret, structure, and understand general information in domains like healthcare and law, with the understanding that such guidance is informational rather than diagnostic or advisory in a professional sense.
This clarification from Singhal came as a direct reply to a now-deleted post originally shared by the prediction market platform Kalshi, which had incorrectly asserted that ChatGPT would no longer provide insights or descriptions concerning legal or medical matters. In response, Singhal made clear that OpenAI’s usage policies have always included language delineating the limits of the system’s advisory functions, and that the principles governing those limits are unchanged.
According to Singhal, the specific mention of legal and medical domains in the most recent policy documentation does not represent a new restriction but a continuation and clarification of existing terms. The policy revision issued on October 29th consolidated OpenAI’s previous frameworks into a unified set of usage guidelines. Within this update, users are reminded that the platform should not be employed for “the provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.” This phrasing restates a longstanding commitment: AI systems should be used responsibly and should not supplant professional expertise where accuracy, safety, and ethics are paramount.
In essence, the restructured policy does not introduce new prohibitions but remains consistent with OpenAI’s earlier directive, which instructed users to refrain from activities that could “significantly impair the safety, wellbeing, or rights of others.” That earlier wording specifically prohibited delivering personalized legal, medical, health, or financial advice without review by a qualified professional and without disclosing that AI assisted in preparing the content.
Previously, OpenAI managed three distinct policy frameworks—one covering general use across all products, and two additional ones governing ChatGPT and API interactions respectively. The recent update merged these separate documents into a single, harmonized policy architecture. OpenAI’s changelog explained that this transition achieves alignment under a universal set of rules applicable across all its products and services, thereby simplifying oversight while preserving the substance of the original policies. In short, despite the structural consolidation and refreshed presentation, the underlying rules, ethical boundaries, and operational principles remain precisely as they have been since their inception.
Source: https://www.theverge.com/news/812848/chatgpt-legal-medical-advice-rumor