A newly proposed piece of federal legislation could dramatically reshape the relationship between artificial intelligence companies and their users by mandating strict age verification for anyone interacting with AI chatbots. Introduced jointly by Senators Josh Hawley, a Republican from Missouri, and Richard Blumenthal, a Democrat from Connecticut, the Guidelines for User Age-verification and Responsible Dialogue—or GUARD—Act represents one of the first bipartisan efforts to impose explicit age-based restrictions on the use of generative AI tools. As first reported by NBC News, the proposed law would not only require verification of every chatbot user’s age but also categorically prohibit individuals under the age of eighteen from accessing these conversational AI systems altogether.

The introduction of the GUARD Act follows an intense Senate hearing held only weeks earlier, where concerned parents, educators, and child safety advocates testified about what they view as the growing risks AI-driven chatbots pose to minors. Their arguments centered on worries that these systems, capable of simulating human-like dialogue and emotional engagement, could expose children to inappropriate content or manipulative behaviors. The hearing appears to have given legislators political momentum to advance more robust oversight at the intersection of youth safety and artificial intelligence.

Under the provisions outlined in the proposed bill, AI developers and companies operating chatbots would be required to establish reliable mechanisms for confirming the ages of all users. Compliance could involve uploading official government-issued identification documents—such as driver’s licenses or passports—or implementing alternative, so-called “reasonable” verification techniques. The bill suggests that such alternatives could include biometric approaches, possibly facial recognition scans or similar technological checks, so long as they reliably confirm a user’s age while conforming to privacy and data security standards.

In addition to verifying who is allowed to interact with these systems, the GUARD Act seeks to enhance transparency for users by obligating AI chatbots to disclose their nonhuman nature at regular intervals—specifically, every thirty minutes during ongoing interactions. This mandatory disclosure is intended to prevent users, particularly impressionable minors, from developing the misconception that they are engaging with an actual person rather than a programmed digital entity. Furthermore, the legislation would compel AI creators to implement additional safeguards that explicitly prevent these systems from presenting themselves as human under any circumstance. This element mirrors certain provisions of a recently enacted AI regulation in California, which similarly emphasizes user awareness and ethical chatbot design.

Beyond these identity and transparency requirements, the bill also includes substantive content-based prohibitions designed to protect young users from psychological or emotional harm. Notably, it would make it illegal for any AI chatbot to produce or distribute sexually explicit materials to minors or to engage in language or behaviors that promote self-harm or suicide. The intent is to insulate underage users from manipulative, explicit, or dangerous content that might otherwise be accessible through unregulated AI systems.

Senator Blumenthal, in a statement to The Verge, underscored the spirit and urgency behind the legislation. He asserted that the GUARD Act “imposes strict safeguards” intended to defend users—particularly children—from exploitative or manipulative uses of artificial intelligence. He emphasized that these safeguards would be reinforced through rigorous enforcement mechanisms, including both criminal charges and civil penalties for noncompliant entities. Blumenthal also criticized what he described as a consistent pattern among major technology corporations of prioritizing profit over the welfare of users, arguing that such behavior has eroded public trust in the industry’s capacity to regulate itself.

Ultimately, the GUARD Act stands as both a response to growing societal unease about the rapid proliferation of AI tools and a declaration of Congress’s intent to establish a legal framework for protecting minors in the digital age. Its broad implications for privacy, corporate responsibility, and accessibility ensure that the debate around balancing innovation with ethical governance will remain at the forefront of technological policymaking for the foreseeable future.

Source: https://www.theverge.com/news/808589/senators-ai-chatbot-bill-age-verification-teen-ban