Robby Starbuck has filed a lawsuit against Google, alleging that the company's AI-based search tools generated and circulated statements falsely linking him to accusations of sexual assault and to Richard Spencer, a widely recognized white nationalist. According to Starbuck's filing, these false associations were produced by Google's AI-driven systems and harmed his reputation and professional standing. Notably, this is the second time Starbuck, who has built a public profile around his opposition to corporate diversity, equity, and inclusion initiatives, has sued a major technology company over alleged misconduct tied to its AI products.

Earlier this year, in April, Starbuck brought a comparable case against Meta Platforms. In that action, he contended that Meta's AI technology falsely insisted he had taken part in the January 6th attack on the U.S. Capitol and had subsequently been detained on a misdemeanor charge. That dispute ended in a private settlement, under which Meta reportedly brought Starbuck on as an advisor tasked with providing guidance on addressing so-called "ideological and political bias" in the company's chatbot systems. The terms of the agreement were not publicly disclosed, but observers noted that the move fit a broader pattern of Meta appointments apparently designed to alleviate criticism from politically conservative audiences dissatisfied with perceived bias in the platform's moderation and automated content systems.

When questioned about the new complaint, José Castañeda, a spokesperson for Google, told *The Verge* that the company planned to review the formal filing once it had been received. Castañeda also said that many of the accusations seemed to stem from earlier "hallucinations" produced by Google's Bard AI, an industry term for instances in which large language models fabricate or distort factual information. He explained that Google had already taken steps in 2023 to mitigate such issues, emphasizing that hallucinations are a recognized challenge across all large-scale AI systems. Although the company has invested considerable resources in minimizing these incidents and warning users about them, Castañeda stressed that it remains technically possible for an individual to maneuver, or "prompt," a chatbot into generating misleading or false statements with sufficiently creative inputs.

Whether Starbuck's argument will ultimately succeed on its legal merits remains uncertain, particularly given the lack of judicial precedent in this domain. To date, as *The Wall Street Journal* has reported, no American court has awarded financial compensation to a plaintiff claiming defamation by an AI chatbot. A notable comparison emerged in 2023, when conservative radio commentator Mark Walters sued OpenAI, alleging that ChatGPT had defamed him by erroneously connecting his name to allegations of financial fraud and embezzlement. In that instance, the court ruled in favor of OpenAI, concluding that Walters could not establish "actual malice," a legal requirement for defamation claims brought by public figures. The outcome underscores how untested the liability landscape around generative AI remains, as traditional standards of intent, negligence, and truthfulness are reconsidered in light of machine-generated content.

Starbuck's case against Google was filed in Delaware Superior Court, and according to reporting by *The Wall Street Journal*, he is seeking $15 million in damages. Some observers have speculated, however, that Starbuck may be less interested in the monetary award than in leveraging the lawsuit to obtain a role influencing corporate AI policy, mirroring the advisory position he secured in his settlement with Meta. Whatever his motives, the lawsuit adds to the ongoing debate over how AI technologies intersect with personal reputation, platform accountability, and the broader question of who bears responsibility when an AI system disseminates misinformation.

Source: https://www.theverge.com/news/804494/anti-diversity-activist-robby-starbuck-is-suing-google-now