This publication is The Stepback: a weekly newsletter that digs into one big, timely story from the world of technology. Each week, The Stepback gives readers a close look at a single issue that shows how technology is reshaping industries, society, and law. For deeper insight into the tangled legal landscape around artificial intelligence and the ethical dilemmas it has raised, journalist Adi Robertson offers continuing analysis and coverage. The Stepback lands in subscribers’ inboxes at 8 a.m. ET, and readers can opt in to stay informed about the shifting intersections of innovation, regulation, and digital culture.
The spark for this legal and cultural fight was a song called “Heart on My Sleeve.” To an unsuspecting listener, it was indistinguishable from a new Drake release, capturing his signature cadence and vocal inflection with uncanny precision. To those who knew the truth, the track marked something else entirely: the opening salvo in a sprawling legal and moral struggle over how artificial intelligence can, or should, use the visual and vocal likenesses of real people, and how the platforms hosting this material should respond to its spread.
When the AI-generated imitation of Drake’s voice surfaced in 2023, it landed as both novelty and warning. The song was superficially amusing, but the legal and ethical questions it raised were immediately apparent. Its convincing mimicry of a globally recognized artist unnerved fellow musicians, who suddenly saw how vulnerable their voices and artistic identities had become. Streaming services quickly pulled the track on tenuous copyright grounds, yet its creator had not actually replicated any preexisting composition; the work sat in a gray zone of imitation rather than direct copying. That gray area pushed the conversation beyond copyright and into likeness law, a comparatively obscure field historically invoked when celebrities tried to block unauthorized endorsements, impostor advertisements, or parodies. As deepfake videos and AI-generated audio spread, likeness law emerged as one of the few remaining tools for regulating these uncanny simulations.
Unlike copyright, which is set out in federal law (including the Digital Millennium Copyright Act’s notice-and-takedown regime) and harmonized by international treaties, likeness rights in the United States are governed by a fragmented patchwork of state statutes. Each state has its own rules, most of them conceived long before modern AI was imaginable. In recent years, legislators have scrambled to close the gaps. In 2024, Tennessee Governor Bill Lee and California Governor Gavin Newsom, the leaders of two states whose economies are profoundly shaped by media and entertainment, signed new laws expanding protections against unauthorized digital recreations of performers. Though regionally confined, these measures marked a significant acknowledgment that likeness rights are no longer an abstract theoretical concern but a tangible necessity in the era of generative AI.
Yet, predictably, the law has evolved more slowly than the technology driving these controversies. In 2025, OpenAI launched the Sora app, a video generator built specifically to render vivid, often photorealistic depictions of real people and their movements. The tool unleashed a torrent of remarkably lifelike deepfakes, including recreations of people who had given no consent for their likenesses to be used. In the absence of comprehensive federal regulation, OpenAI and other tech companies rushed to write internal policies governing likeness use: ad hoc frameworks that, for the moment, serve as the de facto law of the internet.
OpenAI and its CEO, Sam Altman, have publicly rejected claims that Sora’s release was reckless. Altman has insisted the company was, if anything, excessively cautious, embedding guardrails he characterized as overly restrictive. Nonetheless, Sora’s debut was immediately mired in controversy. The service initially allowed relatively unrestricted use of historical figures’ likenesses, a policy it reversed only after complaints from the estate of Martin Luther King Jr., which strongly objected to AI-generated videos depicting the late civil rights leader in disrespectful or racist scenarios. And while OpenAI advertised rigorous limits on using the face or voice of living people without their approval, users soon found loopholes that let them fabricate videos of celebrities such as Bryan Cranston appearing alongside figures like Michael Jackson. These incidents drew objections from SAG-AFTRA and led OpenAI to promise additional, though unspecified, policy tightening.
Even people who had explicitly permitted Sora to use their likenesses expressed unease after seeing the tool’s unpredictable outputs. Many women, in particular, reported that their authorized likenesses had been dropped into fetishistic or sexually suggestive clips, a reminder that consenting to participate did not safeguard one’s image from degradation. Altman later admitted he had underestimated how people might experience “in-between feelings,” neither total comfort nor outright objection, when their likeness appeared in contexts they found distasteful or offensive.
Although OpenAI has made incremental changes, such as revising its guidelines for depicting historical figures, Sora is only one example in a rapidly expanding ecosystem of AI video systems. The broader cultural picture has turned distinctly surreal: fabricated images and videos are routinely deployed in political propaganda, online trolling, and influencer conflicts. President Donald Trump’s administration and other politicians have increasingly circulated AI-generated content depicting grotesque or racially charged caricatures of their opponents. Meanwhile, candidates such as New York City mayoral hopeful Andrew Cuomo have briefly disseminated similarly manipulated videos targeting rivals, only to delete them amid backlash. Journalists like Kat Tenbarge have documented how such creations now seep into internet subcultures and social media feuds, turning AI videos into a new form of rhetorical ammunition.
Given this environment, the threat of legal action looms constantly over unauthorized likeness reproductions. Celebrities including Scarlett Johansson have already retained legal counsel to challenge unapproved uses of their faces or voices. Strikingly, though, these likeness disputes have not yet produced the avalanche of lawsuits seen over AI-related copyright infringement. That disparity may stem from the legal landscape’s ongoing instability: the statutes governing likeness rights remain unsettled, leaving enforcement inconsistent and unpredictable.
When SAG-AFTRA commended OpenAI for reinforcing Sora’s guardrails, the union also renewed its endorsement of the proposed Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act, a bill that would enshrine nationwide rights against unauthorized “digital replicas.” The bill defines such replicas as computer-generated representations that realistically reproduce a person’s visual appearance or voice, whether that person is living or dead. It would even hold platforms accountable for knowingly hosting or distributing such material without permission. Major digital players like YouTube have voiced support for the measure, seeing it as a coherent federal answer to a mounting global problem.
Not everyone agrees. Free speech advocacy groups, particularly the Electronic Frontier Foundation, have issued scathing criticisms of the NO FAKES Act. They argue that the proposed framework would, in effect, establish an aggressive censorship apparatus obligating online platforms to filter vast amounts of content. This could, they warn, lead to widespread over-removal of legitimate material and the rise of a “heckler’s veto,” where the mere threat of complaint silences lawful expression. Although the bill formally preserves exceptions for satire, parody, and commentary, digital rights experts caution that these provisions offer little comfort to those lacking the financial means to litigate complex cases.
Opponents might take some reassurance from the current congressional gridlock: legislative productivity is so low that the United States is enduring the second-longest government shutdown in its history. Competing proposals aimed at blocking state-level AI regulation could also undermine emerging likeness laws before they take effect. Despite this stagnation, the overall momentum still favors stronger regulation. In a telling development, YouTube recently unveiled a feature that lets creators in its Partner Program search for and request the removal of unauthorized uploads using their likenesses, complementing an existing system that already lets musicians demand takedowns of content mimicking their distinctive voices.
Even beyond legal frameworks, cultural expectations around digital likeness are still being negotiated. We now inhabit a world in which nearly anyone’s face can be inserted into virtually any scenario with accessible AI tools. The pressing question is not merely whether such things are possible, but when, or whether, they should be socially acceptable. While many recent examples of deepfake generation are relatively trivial or comedic, academic research consistently shows that the vast majority of deepfakes have historically been pornographic, targeting women without consent. Alongside services like Sora, an entire underground industry of “nudify” tools perpetuates this problem, raising legal and ethical issues akin to those surrounding other nonconsensual sexual imagery.
The ramifications extend well beyond consent. A sufficiently convincing deepfake could, in some contexts, cross the threshold into defamation or harassment, especially when used in coordinated campaigns of intimidation or reputational harm. These questions only get thornier as AI systems become more capable and more deeply integrated into mainstream platforms. Social media companies have traditionally relied on the protection of Section 230, which shields them from being treated as the publishers of user-generated content. But as those same companies begin helping users generate images and videos with AI, it is increasingly uncertain how far that immunity will stretch.
Despite recurring fears that artificial intelligence will render reality indistinguishable from illusion, the truth remains that, for now, careful observation often reveals subtle indicators of fabrication. Many AI-generated videos still contain visual artifacts, repetitive editing quirks, or watermarks that betray their origins. The greater concern may be psychological rather than technical: a growing indifference among viewers, many of whom simply do not care to determine whether what they are seeing is real.
The journalist Sarah Jeong once issued a prescient warning about the dangers of seamlessly manipulated imagery, a caution that feels even more urgent today. The New York Times has examined in depth President Trump’s penchant for AI-created visuals, while other analysts, such as Max Read, have considered whether Sora could evolve into a distinct form of social network despite its controversies. As the questions accumulate, The Stepback will keep tracing these intersections of politics, technology, and ethics. By following its authors and related topics, readers can stay connected to the evolving story of what it means to possess, or lose control of, one’s image in the digital age.
Source: https://www.theverge.com/column/805821/the-next-legal-frontier-is-your-face-and-ai