ZDNET’s key takeaways on this subject are stark: the newest generation of AI video tools brings not only groundbreaking creative potential but also pressing legal, ethical, and ownership challenges that cannot be ignored. OpenAI, the creator of the much-discussed Sora, contends that its system is designed to empower human imagination and expand artistic freedom. Critics remain skeptical, arguing that such tools risk undermining artistic integrity and erasing long-established boundaries between originality and imitation. In short, generative video may either usher in an unprecedented democratization of art or hasten its decline.

OpenAI’s Sora 2, a generative AI video creation platform, has been available for only a short while, yet it has already stirred intense debate across creative and legal communities. Within days of its release, users were producing clips both absurd and provocative, defying taste and convention by placing beloved cartoon characters and pop culture icons in bizarre or inappropriate scenarios. These viral creations dramatize a larger question about the human condition: when granted near-unlimited creative power with minimal effort, our collective instinct often veers toward testing boundaries, humor, and even transgression.

This phenomenon encapsulates a recurring element of human behavior. When new technological possibilities emerge, the first to experiment are often those motivated by simple amusement, curiosity, or mischief. Their actions might not be malicious, but they open the floodgates for misuse—content that provokes, offends, or capitalizes on notoriety. Soon afterward, a more calculated group appears: individuals who recognize profit opportunities within this creative chaos. Whether by manufacturing viral content for commercial gain or manipulating the likenesses of well-known figures for deceptive endorsements, these actors push ethical limits. This chain reaction reflects a historical pattern whenever society acquires disruptive tools: early enthusiasm begets experimentation, followed by exploitation.

To illustrate this, one need only consider a striking example: a video of OpenAI’s CEO Sam Altman circulating on Sora 2’s public Explore page. In the original clip, Altman discusses an entirely separate AI initiative. Yet with Sora’s remixing capabilities, a user can easily alter context, expression, or attire—transforming the tone and message almost instantly. Within minutes, the technology can generate an altered scene that appears convincingly real. This capacity demonstrates not just the flexibility of generative AI but its potential to destabilize our shared sense of authenticity.

It is precisely these capabilities that invite both fascination and alarm. Legal experts quickly note that the unprecedented realism of such software introduces genuine questions of ownership, liability, and ethical accountability. When tools allow ordinary individuals to replicate likenesses of public figures, or to generate convincing footage incorporating recognizable brands, what mechanisms exist to delineate fair use from infringement? Who bears responsibility when those creations are distributed and absorbed as truth?

According to reports by major publications like The Wall Street Journal, OpenAI attempted to mitigate early backlash by notifying Hollywood rightsholders about Sora 2’s impending release and offering an opt-out for those who did not wish their intellectual property included in the system’s training data. Yet the entertainment industry’s reaction was predictably fierce. Statements from the Motion Picture Association demanded immediate corrective action and emphasized that the ultimate burden of preventing misuse must lie with OpenAI, not with the creators and studios whose works risk replication.

In response, OpenAI has introduced measures intended to balance creative freedom with responsibility. Its published Sora 2 System Card—a detailed six-page framework—outlines the technology’s operational boundaries. Features include consent-based likeness control, restrictions that block the reproduction of public figures, watermarking protocols that certify provenance through the C2PA authenticity standard, and policies banning users who engage in harassment, fraud, or privacy violations. Despite these safeguards, ambiguity remains regarding ultimate ownership and accountability.
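To make the provenance piece of that framework concrete, here is a minimal sketch of how a downstream platform might decide whether to label an uploaded clip as AI-generated based on a C2PA-style manifest. The function name classify_provenance and the dictionary-shaped manifest are illustrative assumptions, not OpenAI’s or the C2PA SDK’s actual API; real Content Credentials are cryptographically signed and would be read and verified with a C2PA library rather than parsed as a plain dictionary.

from typing import Optional

def classify_provenance(manifest: Optional[dict]) -> str:
    """Return a display label for an uploaded clip based on its provenance manifest."""
    if manifest is None:
        # No Content Credentials attached: provenance is unknown,
        # which is not the same thing as "authentic".
        return "no-provenance"

    for assertion in manifest.get("assertions", []):
        # The IPTC digital source type "trainedAlgorithmicMedia" is the value
        # commonly used in provenance assertions to flag generative-AI output.
        if "trainedAlgorithmicMedia" in str(assertion.get("data", "")):
            return "ai-generated"

    return "provenance-present"

# Hypothetical manifest resembling what a generator might embed.
example = {"assertions": [{"label": "c2pa.actions",
                           "data": {"digitalSourceType": "trainedAlgorithmicMedia"}}]}
print(classify_provenance(example))  # -> "ai-generated"
print(classify_provenance(None))     # -> "no-provenance"

The important design point, and the reason critics remain wary, is the last branch: a clip with no manifest at all cannot be proven authentic or synthetic, so labeling ultimately depends on how widely provenance metadata is adopted and preserved.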

Sean O’Brien of the Yale Privacy Lab clarifies this legal landscape. He notes that, under U.S. law, when an individual generates content using an AI system, that person, and potentially their organization, assumes liability for any resulting misuse or infringement. The machines may be autonomous in function, but the legal burden remains resolutely human. O’Brien further observes a developing four-part doctrine: copyright applies only to human-created works; AI-generated content generally falls into the public domain; human users remain responsible for infringing material; and training a model on copyrighted data without authorization constitutes a direct violation of intellectual property rights.

Beyond legal complexity, Sora 2 reopens philosophical debates about the very essence of creativity. By definition, creation involves bringing something into existence—whether through imagination, skill, or action. Digital tools like Sora blur distinctions between instrument and artist, challenging how society recognizes authorship. Many experienced creators recall similar transitions: photographers who became empowered by Photoshop, illustrators who replaced brushes with styluses, musicians who navigated the leap from analog to digital production. Each innovation democratized expression, yet each also confronted traditional notions of mastery.

Veteran digital artist Bert Monroy recalls that before the computer era, creativity required entire production teams—retouchers, photographers, and designers—to achieve results now producible through a single software prompt. The advent of AI escalates that accessibility exponentially: one can now produce a polished commercial-quality scene in moments, without formal training. As Monroy notes, AI-driven generation threatens not only established creative industries but also the individual identity of the artist as a skilled practitioner.

Maly Ly, an experienced executive across the technology and entertainment sectors, reframes the issue through an economic and philosophical lens. She argues that AI video compels society to revisit fundamental assumptions about authorship. In a world where AI recombines countless past works, the concept of “originality” shifts from scarcity toward abundance, reframing creativity not as theft but as multiplication. Yet she also acknowledges the structural need for new systems of attribution and compensation, proposing transparent, traceable mechanisms that fairly credit every contribution embedded within an AI’s training data. She envisions copyright reform as a living, programmable framework that responds dynamically to collaboration and innovation, rather than as a static legal document.
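Ly’s proposal is conceptual, but a toy data structure helps illustrate what “traceable attribution” could mean in practice. Everything below is a hypothetical sketch: the field names, the influence weights, and the proportional payout rule are invented for illustration and do not describe any existing OpenAI or industry system.

from dataclasses import dataclass, field

@dataclass
class AttributionShare:
    contributor: str   # rights holder identified in the training corpus
    work_id: str       # identifier of the contributing work
    weight: float      # estimated share of influence on the generated output

@dataclass
class GenerationRecord:
    output_id: str
    model_version: str
    shares: list[AttributionShare] = field(default_factory=list)

    def payouts(self, revenue: float) -> dict[str, float]:
        """Split revenue in proportion to each contributor's estimated influence."""
        total = sum(s.weight for s in self.shares) or 1.0
        return {s.contributor: revenue * s.weight / total for s in self.shares}

# Hypothetical example: one generated clip credited to two rights holders.
record = GenerationRecord("clip-001", "sora-2", [
    AttributionShare("Studio A", "film-123", 0.6),
    AttributionShare("Artist B", "artwork-456", 0.4),
])
print(record.payouts(100.0))  # {'Studio A': 60.0, 'Artist B': 40.0}

The hard part, of course, is not the bookkeeping but estimating the influence weights at all; that open question is precisely why such proposals remain speculative.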

Another dimension of concern is reality itself. Technologies capable of replicating perception blur the border between evidence and illusion. Historically, society has struggled with similar disorientations, from Orson Welles’ 1938 radio adaptation of “War of the Worlds,” which caused widespread confusion, to modern-day deepfake phenomena that weaponize synthetic media for manipulation. Although today’s AI companies attempt to embed provenance metadata and enforce upload restrictions, determined users will inevitably find ways to circumvent them. Deepfake imagery already distresses families of public figures, illustrating how technical marvels can inflict emotional harm.

Even before digital compositing, photo forgeries existed: nineteenth- and early-twentieth-century photographs were routinely manipulated for political or propagandist purposes. From airbrushed Stalin-era portraits to doctored royal photographs, the temptation to rewrite visual reality has deep roots. The difference now is scale and accessibility—what once required state apparatus or specialized expertise now sits within ordinary reach through consumer-grade AI tools.

The implication is clear: critical thinking and digital literacy will become as essential as any technological safeguard if society hopes to preserve trust in visual media.

Attorney Richard Santalesa of SmartEdgeLaw Group further contextualizes the unfolding debate. He emphasizes that Sora 2 epitomizes a growing friction between rapid innovation and the inherited frameworks of copyright law. While OpenAI’s user policies theoretically prohibit infringement, the underlying legal responsibility often extends beyond corporate intention. As he succinctly puts it, “the genie is out of the bottle.” Future efforts must therefore focus not on reversing the technology but on cultivating robust mechanisms for control, verification, and ethical alignment.

Finally, an official statement from OpenAI encapsulates the company’s stance: Sora 2 and its related video-generation tools are designed to complement, not supplant, human creativity—to assist individuals in exploring ideas and expressing themselves with newfound breadth. Whether society interprets this as empowerment or displacement will depend largely on how responsibly we define authorship, ethics, and truth in the age of synthetic imagination.

Have you experimented with Sora 2 or similar AI video platforms? Do you consider their outputs legitimate expressions of creativity or contrived simulations of it? Questions of accountability, evolution, and authenticity now sit at the very heart of the artistic process. However one answers, it is clear that the conversation about AI and creativity has only just begun.

For continued updates and expert analyses on artificial intelligence, creativity, and technology law, sign up for ZDNET’s Innovation newsletter. Follow David Gewirtz across social media for in-depth commentary and weekly insights into the changing world of AI and digital culture.

Source: https://www.zdnet.com/article/is-art-dead-what-sora-2-means-for-your-rights-creativity-and-legal-risk/