On October 13th, California formally enacted a groundbreaking piece of legislation that seeks to regulate the rapidly expanding and increasingly influential industry of companion artificial intelligence chatbots. Signed into law by Governor Gavin Newsom, Senate Bill 243 represents a pioneering effort to introduce what its sponsor, State Senator Steve Padilla, has described as the nation’s very first comprehensive safeguards for AI chatbots. This measure distinguishes itself not merely as a local policy shift but as an ambitious attempt to establish clear ethical boundaries and consumer protections within an emerging technological frontier that continues to blur the line between human and machine interaction.
Under the new law, developers of AI-driven companion chatbots are required to build and maintain certain key safety provisions aimed at ensuring that users are not deceived about the nature of the entity with which they are conversing. For example, in circumstances where an average, reasonable person could be led to believe they are interacting with another human being rather than with an artificial intelligence program, the law mandates that chatbot creators issue a disclosure that is both clear and immediately noticeable. This notification must explicitly inform the user that the chatbot is a machine—a purely digital construct—and not a human interlocutor. In essence, the statute anchors the principle of transparency as a foundational ethical obligation for creators of conversational AI technology.
Beginning next year, the legislation will impose additional responsibilities on certain operators of these companion chatbots, particularly those catering to potentially vulnerable users. Such operators will be required to submit yearly reports to California’s Office of Suicide Prevention, documenting the technological and procedural safeguards they have implemented to identify, manage, and respond to expressions of suicidal ideation by their users. The Office, in turn, must make these reports publicly available on its website, thereby encouraging transparency not only within the industry but also across public oversight mechanisms. This aspect of the law underscores a broader social commitment: ensuring that technological innovation evolves alongside a heightened sense of human responsibility and care.
Governor Newsom, in a statement accompanying his signature of the bill, elaborated on the motives underlying this initiative. He noted that emerging technologies such as AI chatbots and social media platforms hold tremendous potential to inspire creativity, facilitate education, and foster meaningful human connection. However, he emphasized that without proper regulatory frameworks—without genuine guardrails—these same tools also possess the power to manipulate, misinform, and even endanger especially susceptible populations, including children. The new legislation, therefore, aligns with other related measures signed on the same day, including updated age-verification and age-gating requirements for digital hardware, all aimed at enhancing online safety for minors.
Governor Newsom affirmed that California remains committed to leading the country in both artificial intelligence research and technology development, but he stressed that such progress must occur responsibly. In his words, leadership in AI cannot come at the expense of moral accountability. The ultimate priority, he insisted, must always be the protection of young people and vulnerable users—underscoring that the well-being and safety of children should never be treated as a commodity or compromised for profit.
This legislative action follows closely on the heels of another milestone in California’s AI regulatory landscape: the recent signing of Senate Bill 53, a separate yet complementary law focused on transparency within artificial intelligence systems more broadly. That earlier legislation, which attracted extensive public attention and divided major AI companies for months, set an important precedent by codifying obligations for developers to disclose and explain how AI-generated content is produced and used. Together, these measures symbolize a decisive shift in how the state approaches technological governance—an attempt to balance innovation with ethical stewardship, consumer trust, and the public good.
Source: https://www.theverge.com/news/798875/california-just-passed-a-new-law-requiring-ai-to-tell-you-its-ai