Google has announced that it has withdrawn its Gemma artificial intelligence model from its AI Studio platform following serious concerns raised by a sitting U.S. senator. The company’s decision came after Senator Marsha Blackburn, a Republican representing Tennessee, accused the system of producing fabricated and defamatory allegations involving her. Specifically, Blackburn stated that Gemma had falsely asserted that she had been accused of sexual assault—claims that she deemed wholly untrue and unsupported by any factual basis.
In an official letter addressed to Google CEO Sundar Pichai, Senator Blackburn detailed a troubling exchange with the Gemma model. When prompted with the query, “Has Marsha Blackburn been accused of rape?” the AI responded with an entirely fictitious account. According to the senator, the model alleged that during a supposed 1987 campaign for the Tennessee State Senate, a state trooper had accused her of pressuring him to acquire prescription medications and implied that their relationship involved acts that were non-consensual. Blackburn categorically rejected every element of this narrative, clarifying that not only was the incident fabricated, but even the campaign year was inaccurate—her actual campaign took place in 1998, more than a decade later.
Blackburn further explained that while the AI’s generated text included links to what appeared to be legitimate news sources supporting these accusations, the links were deceptive. Upon inspection, they either led to nonfunctional error pages or directed users to unrelated news reports bearing no connection to her or the stated claims. She emphasized in her correspondence that no such allegations had ever been made, no such individual as the “state trooper” described exists, and no corresponding media coverage has ever appeared. In her view, the AI had not merely hallucinated a random answer but had produced a targeted and deeply damaging falsehood.
In her letter, the senator also referenced her remarks during a recent Senate Commerce Committee hearing. There, she had highlighted a lawsuit filed by conservative commentator and activist Robby Starbuck against Google. Starbuck has asserted that Google’s AI technologies, including Gemma, had generated false and defamatory statements portraying him as a “child rapist” and “serial sexual abuser.” These examples, Blackburn argued, illustrate a broader systemic issue within Google’s AI ecosystem, where unverified or wholly fabricated information can appear as though it were factual, with potentially devastating reputational consequences for those involved.
As recounted in Blackburn’s communication, Markham Erickson, Google’s Vice President for Government Affairs and Public Policy, responded during the Senate hearing by acknowledging that so-called “hallucinations”—instances in which AI models invent information—are a recognized problem in generative AI systems. He stated that Google is working diligently to improve the reliability of its models and reduce such occurrences. However, Blackburn strongly disagreed with this assessment, contending that these “hallucinations” were not harmless technical errors but real acts of defamation disseminated by a system owned and operated by Google. In her words, the company’s AI had engaged in what could be seen as the automated generation and publication of false, damaging statements about identifiable people.
The controversy surrounding Gemma also resonates with a wider political and cultural debate in the United States about the ideological neutrality of artificial intelligence systems. Many of President Donald Trump’s allies and supporters within the technology sector have expressed ongoing frustrations about what they describe as “AI censorship,” claiming that major AI platforms systematically reflect a liberal or progressive bias. These grievances culminated in an executive order signed by Trump earlier in the year that sought to prohibit so-called “woke AI” systems. Blackburn echoed these concerns in her correspondence with Google, asserting that the defamatory outputs targeting conservative figures were not isolated incidents but part of an observable pattern of bias allegedly embedded within Google’s suite of AI tools.
Despite her alignment with many of Trump’s positions, Blackburn’s record demonstrates that she has not always endorsed the administration’s technological policies uncritically. For instance, she played a pivotal role in removing a proposed nationwide ban, or moratorium, on state-level regulation of artificial intelligence from Trump’s legislative initiative informally dubbed the “Big Beautiful Bill.” Her involvement in that process underscores a nuanced stance—supportive of conservative perspectives on digital policy while maintaining a belief in states’ rights to oversee AI governance locally.
In a public statement posted on X (formerly Twitter) late Friday, Google did not directly address the specific allegations or statements contained in Blackburn’s letter. Instead, the company explained that it had observed attempts by users outside the intended developer community to employ Gemma within AI Studio to obtain factual information and conduct general queries. According to Google, this usage lay outside the model’s intended purpose. Gemma, the company reiterated, was designed as a family of open, lightweight models aimed at enabling developers to build and integrate their own AI-powered applications. AI Studio, the platform from which Gemma was withdrawn, was described as a web-based development environment, not a consumer-facing information service.
Consequently, the company announced that it would be removing Gemma from use in AI Studio, though the models themselves would remain accessible through Google’s application programming interface (API) for qualified developers. In doing so, Google sought to reaffirm its commitment to responsible deployment, clarifying that while its research and engineering teams continually work to enhance factual accuracy and reduce misinformation, Gemma’s current form is not suited for direct public engagement or for answering potentially sensitive factual questions. The decision reflects the growing tension between innovation and responsibility in the era of generative AI, highlighting how companies must navigate both technological complexity and the ethical obligations inherent in shaping public discourse through algorithmically generated language.
Source: https://techcrunch.com/2025/11/02/google-pulls-gemma-from-ai-studio-after-senator-blackburn-accuses-model-of-defamation/