According to a report published by The Wall Street Journal, Elon Musk's artificial intelligence venture, xAI, allegedly required its own employees to hand over their biometric data to help train "Ani," an experimental AI chatbot. Presented as a female digital persona, Ani is designed to mimic human-like conversation and behavior in an interactive companion system.
Ani, an anime-inspired avatar with blonde pigtails and a toggleable NSFW mode, launched earlier this summer for subscribers to xAI's $30-per-month SuperGrok tier. Testing the app for The Verge, Victoria Song described the experience as a technologically updated phone sex line, noting that Ani's personality and interactivity are explicitly designed to simulate the intimacy and responsiveness of human conversation. And although Ani presents as a purely digital construct, the Journal's reporting suggests there is real human input behind her simulated expressions and responses.
The Journal's investigation revealed that at an internal company meeting in April, xAI lawyer Lily Lim told staff that participation in a new data collection initiative was mandatory. Employees would need to submit their biometric data, including facial imagery and voice samples, to make the AI companion more convincingly human in its exchanges with users. According to a recording of the meeting reviewed by the Journal, the effort was framed as a step toward refining Ani's conversational realism, allowing her to reproduce more nuanced emotional cues and tonal variation drawn from authentic human patterns.
Under a confidential internal effort referred to as "Project Skippy," employees working as AI tutors were instructed to sign a sweeping release granting xAI a perpetual, global, non-exclusive, sublicensable, and royalty-free license to use, reproduce, and distribute their biometric data, including their faces and voices, without further compensation. The stated purpose was to advance Ani's development and to improve the other AI companions tied to Grok, xAI's broader family of generative models. In effect, employees' likenesses would serve as a foundational dataset for simulating personality, speech cadence, and expressive realism across the platform's interactive AI characters.
However, the Journal noted that not all personnel were comfortable complying with the directive. Some employees worried that their facial likenesses, voices, or mannerisms could be repurposed or redistributed in unintended ways, such as being sold to third parties or used in deceptive deepfakes. Beyond the privacy risks, some were reportedly uneasy with Ani's overtly sexualized presentation, which evokes the "waifu" archetype of a virtual romantic partner from anime culture. For certain staff members, blending workplace obligations with the product's eroticized aesthetic crossed an ethical and personal line.
Nevertheless, those objections appeared to have little effect on internal policy. According to the Journal, management framed contributing biometric data not as an optional favor but as a professional obligation: full participation in Project Skippy was described as a condition of employment and as integral to xAI's broader mission of building ever more lifelike artificial companions. The episode captures a growing tension between technological ambition, personal privacy, and ethical responsibility, and raises difficult questions about how far companies should go to make artificial entities seem human.
Source: https://www.theverge.com/news/814168/xai-grok-ani-employee-biometric-data