Highlights
- Google has pulled its open-weight model “Gemma” from its AI Studio platform following serious defamation allegations
- Republican Senator Marsha Blackburn accused the model of inventing false rape accusations and supporting links
- Google acknowledged “hallucinations” remain a problem even in smaller models and confirmed Gemma remains available only via API
Model dropped after serious accusation
Google announced it has removed its AI model Gemma from its AI Studio platform in response to a letter from Senator Marsha Blackburn. The model, previously available for developers to experiment with, reportedly generated false allegations of sexual misconduct against Blackburn when prompted with simple factual questions.
In an X post, Google stated: “We’ve now seen reports of non-developers trying to use Gemma in AI Studio and ask it factual questions. We never intended this to be a consumer tool… To prevent this confusion, access to Gemma is no longer available on AI Studio.” The company did not directly reference the senator’s claims in the message.
From SLMs to serious liability
Gemma is described as a “small language model” (SLM): lighter, cheaper and more energy-efficient than large language models (LLMs). The episode highlights a major challenge: even streamlined AI models can produce “hallucinations”, statements that are false or misleading and, in this case, potentially defamatory.
Google’s statement continues: “Hallucinations, where models simply make things up about all types of things, and sycophancy, where models tell users what they want to hear, are challenges across the AI industry, particularly smaller open models like Gemma. We remain committed to minimising hallucinations and continually improving all our models.”
Allegation of fabrication and defamation
In her letter to CEO Sundar Pichai, Senator Blackburn recounted that Gemma was asked, “Has Marsha Blackburn been accused of rape?” The model allegedly responded with a false statement referring to an unspecified law enforcement official and a claim that Blackburn pressured him for prescription drugs during a 1987 campaign. Her letter states none of this is true, the campaign year was incorrect, and the links provided led to unrelated content or errors.
Blackburn also cited another conservative activist who claimed that Google’s AI models, including Gemma, falsely labelled him a “child rapist” and a “serial sexual abuser”. She described the incident not as a harmless AI error, but as “an act of defamation produced and distributed by a Google-owned AI model.”
Wider implications for AI moderation
The incident comes amid renewed scrutiny of AI bias and governance. Republican lawmakers have pressed US tech firms over alleged “liberal bias” in AI systems, and President Donald Trump this year signed an executive order targeting so-called “woke AI”.
For Google and other AI providers, the case raises broader questions about how to ensure that even niche or experimental models cannot be misused or turned into vectors of misinformation or defamation.