California has advanced a landmark initiative in the regulation of artificial intelligence, taking significant steps toward placing legal and ethical boundaries around AI-driven social technologies. At the center of this development is Senate Bill 243 (SB 243), legislation designed to establish rules for AI companion chatbots: programs that simulate human conversation in ways that can feel deeply personal and socially responsive. The bill passed both chambers of the state legislature with bipartisan support and now sits on the desk of Governor Gavin Newsom, who must either sign it into law or veto it by the October 12 deadline. If Newsom signs, the provisions take effect on January 1, 2026, making California the first state in the nation to legally require companies operating AI companion chatbots to implement protective safeguards. This framework would obligate companies to adhere to safety standards and expose them to legal liability if their chatbots fail to comply.
The primary aim of SB 243 is to mitigate the risks that arise when AI companion systems interact with minors and other vulnerable users who may be susceptible to harmful conversations. The bill defines companion chatbots as adaptive AI systems that provide human-like responses tailored to a user's social and emotional needs. These systems would be prohibited from engaging with users about suicidal ideation, self-harm, or sexually explicit material. Operators would also be required to issue recurring, timed alerts; for minors, the reminders would appear every three hours, explicitly stating that the user is communicating with an artificial system rather than a real person and urging them to take a break. The legislation goes further by mandating annual transparency reporting beginning July 1, 2027: companies deploying companion AI, including OpenAI, Character.AI, and Replika, would be required to disclose their practices, crisis intervention referrals, and overall safety measures in a structured and verifiable manner.
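To make the reminder requirement concrete, here is a minimal sketch of how an operator might schedule the three-hour disclosure for minors. The class and method names (`ReminderPolicy`, `maybe_remind`) and the notice wording are illustrative assumptions, not language from the bill or any vendor's API; only the three-hour cadence comes from SB 243.

```python
from datetime import datetime, timedelta

# Hypothetical illustration: ReminderPolicy and its method names are
# assumptions for this sketch, not terms from SB 243 or any vendor's API.
REMINDER_INTERVAL = timedelta(hours=3)  # the cadence SB 243 sets for minors

AI_DISCLOSURE = (
    "Reminder: you are talking with an AI chatbot, not a person. "
    "Consider taking a break."
)

class ReminderPolicy:
    """Decides when a minor user should next see the AI-disclosure notice."""

    def __init__(self, is_minor: bool) -> None:
        self.is_minor = is_minor
        self.last_reminder: datetime | None = None

    def maybe_remind(self, now: datetime) -> str | None:
        """Return the disclosure text if it is due, otherwise None."""
        if not self.is_minor:
            return None
        due = (self.last_reminder is None
               or now - self.last_reminder >= REMINDER_INTERVAL)
        if due:
            self.last_reminder = now
            return AI_DISCLOSURE
        return None

# Example: a minor's session surfaces the notice at the start and again
# once three hours of conversation have elapsed.
policy = ReminderPolicy(is_minor=True)
print(policy.maybe_remind(datetime(2026, 1, 1, 9, 0)))   # disclosure shown
print(policy.maybe_remind(datetime(2026, 1, 1, 10, 0)))  # None: too soon
print(policy.maybe_remind(datetime(2026, 1, 1, 12, 0)))  # shown again
```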
In addition to these preventive obligations, the bill empowers individuals harmed by negligent AI practices to pursue legal remedies. Anyone who can demonstrate injury stemming from a company's violation of the law would be entitled to file a civil action, seeking injunctive relief to compel corrective measures, damages of up to $1,000 per violation, and reimbursement of attorney's fees. This liability mechanism is intended both to deter violations and to create a tangible pathway for recourse when technological negligence produces real-world harm.
The urgency fueling SB 243 traces back to tragic real-life incidents and revelations within the technology sector. The bill gained traction after widespread attention to the suicide of a teenager named Adam Raine, whose prolonged conversations with OpenAI’s ChatGPT reportedly included discussions of self-harm and planning his death. That loss underscored the grave risks chatbots pose when oversight mechanisms fail. Additional concern arose from leaked internal documents revealing that Meta’s chatbot prototypes had engaged with minors in conversations characterized as romantic and even explicitly sensual. These disclosures intensified fears that young users could be manipulated or exposed to harmful experiences without realizing the artificial nature of their interaction.
The passage of SB 243 comes at a broader moment of growing national scrutiny of artificial intelligence and its societal impact. Federal authorities, including the Federal Trade Commission, have begun examining how AI chatbots affect children’s mental health, signaling an interest in determining whether deceptive or unsafe practices violate consumer protection laws. Texas Attorney General Ken Paxton has launched investigations into Meta and Character.AI, accusing them of misrepresenting the safety of their platforms relative to mental health support. In Congress, lawmakers from both parties have opened their own probes; Senator Josh Hawley (Republican of Missouri) and Senator Ed Markey (Democrat of Massachusetts) are separately investigating Meta’s chatbot practices and safeguards.
Supporters of the legislation stress the balance between innovation and a duty of care. Senator Steve Padilla, the bill's author, told TechCrunch that modest but meaningful safeguards can reduce serious risks. He emphasized that minors should always be helped to distinguish human conversation from simulation, that distress signals such as suicidal ideation should immediately trigger crisis referrals, and that inappropriate material must be proactively filtered. Padilla also argued for systematic data collection, asking that companies publicly disclose how frequently users are referred to crisis hotlines so that policymakers can gauge the scope of the problem before tragedies occur.
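As a rough illustration of the referral mechanism Padilla describes, the sketch below screens an incoming message and returns a hotline referral when it matches a self-harm pattern. The keyword list and the `screen_message` name are assumptions made for illustration; a production system would rely on trained classifiers, human escalation, and jurisdiction-appropriate hotlines rather than simple keyword matching.

```python
# Illustrative sketch only: CRISIS_KEYWORDS and screen_message are assumed
# names, and keyword matching stands in for the classifiers a real
# moderation pipeline would use.
CRISIS_KEYWORDS = ("suicide", "kill myself", "self-harm", "end my life")

CRISIS_REFERRAL = (
    "It sounds like you may be going through a difficult time. "
    "In the U.S. you can call or text the 988 Suicide & Crisis Lifeline."
)

def screen_message(user_message: str) -> str | None:
    """Return a crisis referral if the message suggests self-harm, else None."""
    text = user_message.lower()
    if any(keyword in text for keyword in CRISIS_KEYWORDS):
        return CRISIS_REFERRAL
    return None

# Example: a flagged message is answered with a referral instead of ordinary
# chatbot output, and the event could be counted toward the disclosure
# statistics the bill's transparency reports would require.
referral = screen_message("Lately I keep thinking about self-harm.")
if referral is not None:
    print(referral)
```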
SB 243 did not reach the governor's desk unchanged. Earlier drafts contained stricter requirements that were softened through amendment in an effort to balance practicality with protection. Originally, the bill would have required operators to eliminate manipulative design strategies known as "variable reward" tactics: features that encourage continual engagement by dangling special responses, unlocking rare "personalities," or offering emotionally charged storylines. Critics contend these mechanics replicate the addictive loops familiar from slot machines and social media apps. Those requirements were ultimately cut, along with provisions that would have forced operators to track and formally report instances where chatbots initiated discussions of suicidal ideation with users. State Senator Josh Becker defended the compromises, arguing that the final version strikes a balance between addressing harms and avoiding compliance burdens that are either technically infeasible or would generate overwhelming administrative paperwork.
The timing of SB 243 is notable given the political environment. Large Silicon Valley corporations are pouring money into political action committees ahead of the midterm elections to back candidates who favor lighter-touch regulation. In parallel, California legislators are considering a second proposal, Senate Bill 53, which would require broader transparency reporting across AI systems generally. That measure has drawn strong resistance from major technology companies including Meta, Google, Amazon, and OpenAI, which argue instead for federal or internationally harmonized frameworks; OpenAI went so far as to send an open letter to Governor Newsom urging him to reject SB 53. By contrast, Anthropic, an AI developer known for prioritizing safety, has explicitly voiced its support for SB 53.
Throughout the debate, advocates like Padilla have rejected the notion that innovation and regulation are at odds. He insists that technological progress and reasonable consumer protections can coexist: regulation should not suffocate innovation but should set essential boundaries that guard the most vulnerable against preventable harm. Some companies have publicly signaled partial alignment; Character.AI told TechCrunch that it already places prominent disclaimers across its platform warning that its chat experiences are fictional and should not be treated as real relationships. Meta declined to comment, while OpenAI, Anthropic, and Replika had not responded to requests for comment at the time of reporting.
In sum, SB 243 represents a pioneering attempt to set guardrails for a nascent but highly influential class of artificial intelligence: companion chatbots. By insisting on heightened safety mechanisms, accountability systems, and transparency obligations, California seeks to establish itself at the forefront of AI oversight. Whether Governor Newsom signs the bill into law will determine if this ambitious state-led regulatory model becomes a precedent for other jurisdictions across the United States.
Source: https://techcrunch.com/2025/09/11/a-california-bill-that-would-regulate-ai-companion-chatbots-is-close-to-becoming-law/