There was once a time, not very long ago, when the term “fake” on the internet merely referred to an obviously manipulated or poorly edited image, often created with amateur Photoshop skills that left telltale traces of distortion and inconsistency. Those were arguably simpler, more innocent digital days, when the boundaries between what was genuine and what was fabricated were far easier to discern. Today, however, that landscape has shifted dramatically. We now find ourselves submerged in an overwhelming flood of hyperreal content—an ocean of artificial intelligence–generated videos and sophisticated deepfakes that blur the distinction between authentic footage and highly convincing deceit. From fraudulent celebrity appearances and fabricated disaster reports to meticulously orchestrated social media illusions, the internet has become a labyrinth of uncertainty where identifying the truth demands increasing effort and skepticism.

Unfortunately, the situation is only becoming more complex, especially with the emergence of OpenAI’s groundbreaking yet controversial video-generation model, Sora. This technology has already transformed the digital environment by complicating our understanding of authenticity. The situation has grown even more tangled with the viral rise of the Sora app, OpenAI’s invitation-only social platform built on the newer Sora 2 model. It mirrors the aesthetic and addictive scrolling experience of TikTok, with one critical difference: every video featured on the feed is entirely fabricated. Users engage with a constant stream of synthetic content, rendering the distinction between fiction and reality almost meaningless.

A journalist aptly characterized Sora 2 as a “deepfake fever dream,” and indeed, the description captures the surreal essence of this phenomenon. The platform continuously evolves, perfecting the illusion of authenticity with each update. Its unparalleled ability to make fiction appear indistinguishable from fact poses profound ethical and societal risks. Many individuals, even those with advanced technological literacy, struggle to navigate this new terrain without feeling disoriented. If you find it challenging to separate what is real from what is artificially synthesized, you are certainly not alone.

CNET has provided several valuable strategies to help users critically evaluate visual content and sift truth from digital fabrication.

From a purely technical standpoint, Sora’s video outputs are remarkably advanced, holding their own against competing models such as Midjourney’s V1 and Google’s Veo 3. The clips it produces exhibit impressive resolution, natural-looking motion, synchronized sound design, and striking creative versatility. Among its most discussed capabilities is the “cameo” feature, which lets users integrate another person’s likeness, essentially borrowing their face and inserting it into nearly any conceivable AI-generated setting. The results are often indistinguishable from authentic recordings, a fact that many technologists find both awe-inspiring and deeply concerning.

It is precisely because of this heightened realism that experts have voiced alarm over Sora’s growing accessibility. The tool empowers virtually anyone to fabricate convincing deepfakes without specialized technical knowledge, significantly lowering the barrier for spreading misinformation, defamation, or political manipulation. Public figures, entertainers, and influencers stand particularly exposed to such misuse. In recognition of these risks, influential organizations such as SAG-AFTRA have urged OpenAI to implement stronger protective measures and clearer ethical boundaries to prevent the abuse of digital likenesses.

Even so, identifying AI-generated videos remains a dynamic challenge confronting technology companies, social platforms, and users alike. Yet despite the difficulties, practical strategies do exist. One telltale sign involves examining visible watermarks. Each video produced within Sora’s iOS app is automatically stamped with a distinct icon: a white cloud-shaped emblem that bounces around the edges of the frame, visually reminiscent of TikTok’s watermark. While elementary, this kind of marker allows observers to recognize AI-produced content at a glance. Comparable initiatives exist elsewhere; Google’s Gemini “nano banana” image model, for instance, embeds an invisible SynthID watermark in the imagery it generates.

However, watermarking, though beneficial, is far from foolproof. Static watermarks can be cropped out with basic editing, and even moving marks such as Sora’s can be erased by specialized software built for exactly that purpose. OpenAI’s CEO, Sam Altman, has openly acknowledged this limitation, suggesting that society must adapt to a reality in which anyone can manufacture convincing video fabrications featuring any individual. Before Sora, creating such persuasive illusions demanded considerable technical expertise. Now, with tools this powerful and accessible, the conversation has shifted toward layered verification rather than reliance on any single safeguard.

Another method involves delving into a video’s metadata—a collection of invisible data points automatically attached to digital files at creation. Metadata can disclose crucial contextual information, such as the device used to capture imagery, the precise date, time, and even the GPS coordinates associated with the recording. AI-generated content often carries additional fields known as content credentials, which formally document synthetic origins. Because OpenAI is a member of the Coalition for Content Provenance and Authenticity (C2PA), all Sora-derived videos include standardized C2PA metadata. Users can verify this information through the Content Authenticity Initiative’s tool by visiting verify.contentauthenticity.org, uploading the suspected file, and reviewing the material’s characteristics displayed in the right-hand panel. Correctly issued Sora outputs are clearly marked as “issued by OpenAI” and include an explicit declaration of AI generation.
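For readers comfortable with the command line, the same lookup can be scripted. The sketch below is a minimal illustration, not OpenAI’s or the coalition’s official workflow: it assumes the open-source c2patool utility from the Content Authenticity Initiative is installed and on the PATH, and that invoking it with a file path prints the manifest store as JSON, its documented default behavior.

```python
import json
import subprocess
import sys

def read_content_credentials(path: str):
    """Dump a file's C2PA manifest via c2patool; return parsed JSON or None."""
    result = subprocess.run(
        ["c2patool", path],  # by default, prints the manifest store as JSON
        capture_output=True,
        text=True,
    )
    if result.returncode != 0 or not result.stdout.strip():
        return None  # no manifest found, or the tool could not read one
    return json.loads(result.stdout)

if __name__ == "__main__":
    manifest = read_content_credentials(sys.argv[1])
    if manifest is None:
        # Absence of credentials proves nothing: edits routinely strip them.
        print("No Content Credentials found (inconclusive).")
    else:
        # Inspect the issuer and any AI-generation assertion by eye.
        print(json.dumps(manifest, indent=2))
```

On a genuine Sora download, the manifest should name OpenAI as the issuer and carry an assertion that the content was AI-generated, mirroring what the web tool displays in its right-hand panel.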

Nevertheless, as with all detection systems, limitations persist. A third-party edit, such as re-saving, cropping, or watermark removal, can strip away some or all metadata, reducing verifiability. Moreover, AI productions from other platforms like Midjourney may lack the necessary metadata entirely, rendering them invisible to this kind of analysis. During testing, videos created directly in Sora were accurately identified by the Content Authenticity Initiative’s verification tool, including exact timestamps, demonstrating that the system, while imperfect, provides meaningful transparency as long as the metadata remains intact.
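To appreciate how fragile that transparency is, consider what a routine re-save does. The snippet below is a hypothetical demonstration, assuming ffmpeg is installed and using made-up filenames: it remuxes a clip without touching the encoded audio or video, yet deliberately discards the container’s metadata along the way.

```python
import subprocess

# Re-save a clip while dropping container-level metadata. "-c copy" leaves
# the encoded streams untouched; "-map_metadata -1" discards the metadata.
# A remux like this is often enough to shed provenance information.
subprocess.run(
    [
        "ffmpeg", "-i", "downloaded.mp4",  # hypothetical input file
        "-map_metadata", "-1",             # copy no metadata from the input
        "-c", "copy",                      # no re-encoding of audio/video
        "stripped.mp4",
    ],
    check=True,
)

# Running the read_content_credentials() check from the earlier sketch on
# stripped.mp4 will typically come back empty, even though the footage
# itself is unchanged.
```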

In addition to metadata checks, users can look for automated labeling systems built into major social media networks. Meta products, including Facebook and Instagram, employ internal detection algorithms that attach indicators to posts identified as AI-generated, and TikTok and YouTube have adopted comparable labeling measures, offering an additional layer of user-facing disclosure. Yet despite these technical aids, the most reliable guarantee remains creator transparency. If a user voluntarily marks a creation as AI-generated or discloses its nature in a caption or credit, that act of honesty enhances collective digital literacy and reduces confusion. This matters most when Sora-based videos travel beyond the dedicated app: inside it, audiences expect synthetic content; elsewhere, they do not, so clearly acknowledging how material was produced fosters responsibility and trust.

Ultimately, the overarching lesson is the necessity of vigilance. No universal shortcut exists to discern truth from illusion instantaneously. Recognizing digital authenticity demands both attention and a healthy dose of skepticism. When an image or video feels slightly off—perhaps the lighting seems inconsistent, motion appears unnaturally smooth, or small textual elements behave erratically—such irregularities often serve as indicators of synthetic manipulation. Slow down before passing judgment or sharing sensational media. Examine details closely: observe the continuity of backgrounds, the behavior of shadows, and the consistency of physics. Even experts, after all, are occasionally deceived. In this unprecedented era saturated with algorithmic artifice, our best defense is an inquisitive mind and a cautious eye.

(Disclosure: Ziff Davis, the parent company of CNET, filed a lawsuit in April against OpenAI, asserting that the company’s AI systems infringed on Ziff Davis’s copyrighted material during training and operation. This ongoing legal dispute underscores the broader ethical and legal implications surrounding the use of human-created content to train generative AI models.)

Source: https://www.cnet.com/tech/services-and-software/deepfake-videos-are-more-realistic-than-ever-heres-how-to-spot-if-a-video-is-real-or-ai/#ftag=CAD590a51e