Last week, as I scrolled through TikTok, I came across what initially appeared to be a heart-wrenching moment: a woman kneeling in the blinding snow high on Mount Everest, burying her loyal dog after it had perished during their grueling climb. The imagery was cinematic and emotionally charged, and the comments section was aflame with outrage. Users accused the woman of cruelty, of subjecting her animal to reckless conditions. Yet, remarkably, the sorrow they felt was for something that had never truly existed. The woman, her canine companion, and even the mountain’s dazzling backdrop were nothing more than digital illusions—fabrications born from generative artificial intelligence, most likely through OpenAI’s powerful video model, Sora 2. The watermark identifying the AI tool had been scrubbed from the clip, but the user’s handle—Soralice—offered a telling clue. What had stirred thousands to empathy and anger alike had, in reality, likely sprung from only a few lines of textual prompting, conjuring a hyperreal scene of grief that never happened.
Such examples illustrate a rapidly shifting landscape in which researchers and watchdogs have warned that ever more realistic synthetic images and videos could empower malicious actors to craft convincing deepfakes for fraud, propaganda, and manipulation. Yet as these tools have become publicly accessible, their most common application may not be sinister political deception at all but, rather, something considerably more trivial and absurd. Many ordinary users, especially younger ones, have taken to experimenting with AI video purely for mischief—creating content best described as pranks, rage bait, or outright digital hoaxes.
Take, for instance, the viral “homeless man” prank circulating among teenagers. Using AI image generators, pranksters create the likeness of a disheveled, scruffy man in ragged clothing, then pretend he has broken into their home. They send these pictures to their unsuspecting parents and post screenshots of the ensuing frantic text message exchanges online for likes and laughs. The phenomenon has become so widespread that some local law enforcement agencies have issued public service announcements warning against it; one police department bluntly labeled it “stupid and potentially dangerous.” Meanwhile, the misuse of AI imagery has reached even more delicate ethical terrain: the children of deceased public figures—among them Martin Luther King Jr. and Robin Williams—have publicly pleaded with internet users to stop generating videos of their late parents. In response, OpenAI has temporarily suspended AI-generated MLK clips and confirmed that descendants and estates can request exclusions from such digital recreations.
These uncanny artifacts aren’t limited to people. TikTok feeds overflow with AI-generated animals that never lived—adorable rabbits bouncing on trampolines, mischievous dogs dismantling wedding decorations, and canines gleefully smashing cakes. There are also countless videos featuring digitally fabricated “heartthrobs”—for example, women sending partners AI-rendered photos of improbably handsome, shirtless plumbers as a form of teasing humor. Many of these creations look just plausible enough to sow a flicker of uncertainty in the viewer’s mind, leaving them half-convinced that what they’re seeing might be genuine.
Only a short time ago, synthetic video was still awkward and unconvincing, faltering somewhere between curiosity and uncanny spectacle. Back in early 2023, making a believable deepfake of a leader such as Vladimir Putin or Donald Trump demanded specialized tools, considerable time, and technical expertise. Imposters could use those rudimentary videos to spread disinformation or orchestrate scams, yet the process was slow and imperfect. Earlier public experiments—like the strangely viral clip of Will Smith shoveling spaghetti into his mouth—produced images both fascinating and grotesque. The famous actor’s face appeared distorted yet recognizable, his limbs twitching in fragmentary motions, his hands multiplying like a glitching cartoon. Those videos resembled stop‑motion flipbooks more than fluid footage, forcing the human eye to fill gaps in continuity. When I covered AI video generation then, experts emphasized inherent technological constraints: incoherence between frames, limited scalability, and the immense computational demands required for photorealistic motion. Such limitations rendered the medium primitive—but also a safeguard against mass deception.
That buffer has since eroded. Spiraling investment and fierce competition have catalyzed breathtaking improvements. In 2025, anyone with an internet connection can create a high‑quality AI video that feels tangible, persuasive, and—crucially—viral‑ready. The transformation from the era of distorted noodles to today’s polished simulations is astonishing. Algorithms now produce seamless, detailed, time‑consistent scenes in seconds, and social media platforms—whether AI‑centric ones such as OpenAI’s Sora, Meta’s Vibes feed, or X’s integration with xAI’s Grok—amplify this content exponentially. What originates on an experimental app soon migrates to mainstream feeds, where audiences are primed to believe that most footage portrays reality. As Henry Ajder, a deepfake specialist and consultant, notes, generative video is “a perfect engine for a new age of memes.” The technology allows users to craft instantly customizable visuals that retain recognizable personal or stylistic fingerprints, encouraging creativity at unprecedented speed and scale.
Yet the thrill of instant generation comes at a cost. While these synthetic creations can mimic athletic feats or capture the charm of playful pets, they seldom forge authentic emotional resonance. Critics argue that this phenomenon amplifies social media’s oldest flaws—sensationalism, divisive misinformation, and low‑effort outrage. Hany Farid, a professor at UC Berkeley’s School of Information, bluntly frames it as a human failing rather than a purely algorithmic one. Users both produce and consume what he calls digital “slop”—low‑value content engineered for immediate, forgettable stimulation. Algorithms then reward this behavior, rapidly recycling similar videos through endless recommendation loops. The ease of generating these clips means that producing a viral AI prank now takes no more effort than posting an impulsive tweet. As Farid explains, the combination of adaptable algorithms and human curiosity leads to a feedback cycle of manipulation: people feed the system with junk, and the system in turn feeds them more of it, magnifying the reach of trivial or emotionally exploitative media.
This curious glut of unserious content hints at a deeper conceptual void surrounding generative AI itself. Olivia Gambelin, an ethicist and author of *Responsible AI*, observes that companies have unleashed powerful technology without clearly defining its purpose or social value. In her view, developers have effectively handed society a toolset and shrugged, saying, “You figure out what to do with it,” instead of guiding its integration into meaningful, life‑improving applications. Even OpenAI’s CEO, Sam Altman, has remarked that humanity must negotiate the boundary between freedom and oversight: how much should society constrain these technologies versus trusting individuals to find ethical uses for themselves? At an MIT talk last year, Altman conceded that not every person will wield these systems responsibly—but insists that this inherent variability is simply the nature of tools.
Meanwhile, artists and filmmakers have discovered genuinely creative possibilities within generative media, using it to expand imagination, reduce production costs, and visualize dreams beyond practical means. However, the vast majority of users lack either time or creative direction to pursue such high‑minded projects. For them, the appeal lies in simplicity: making quick, performative jokes that capture fleeting attention. As Gambelin points out, the “stupid” applications—visual gags, surreal pranks, exaggerated memes—are easiest for the average mind to grasp, whereas the more profound questions surrounding AI’s potential demand deeper contemplation. She poses a critical inquiry: what genuine problem are we attempting to solve through AI‑generated video? Until that question finds an answer, the technology will remain dominated by novelty rather than necessity.
Today, AI‑driven clips saturate nearly every corner of the social web, from TikTok’s fast‑moving trends to X’s meme‑driven feeds. Yet this presence may not last indefinitely. Farid predicts that we are currently witnessing a “novelty effect”: people are captivated simply because the technology allows scenarios never before seen—a dog on Everest, a celebrity eating spaghetti, a plumber model‑perfect in an impossible way. But, he suggests, the fascination will fade quickly once the shock wears off, leaving audiences muttering, “All right, this is just dumb.” The core reason, Gambelin adds, is that imitation rarely rivals reality. Viral videos of actual humans dancing, laughing, or failing do more than entertain—they connect us to shared experience, validating our impulse to document authentic life moments with a “pics or it didn’t happen” sensibility. AI can counterfeit spectacle, but it cannot easily duplicate the felt texture of real emotion.
In the restless pursuit of viral attention, the flood of generated media may drive the internet toward what Gambelin grimly calls a “race to the bottom of novelty.” Each ephemeral spark of fascination demands another, faster, brighter illusion to replace it. As the spectacle intensifies and authenticity wanes, we may need to reconsider what we truly value in our digital storytelling—and whether, amid the noise, there is still room for the genuine.
Amanda Hoover, a senior correspondent at *Business Insider*, investigates the evolving tech industry and the cultural forces surrounding it. Through *Business Insider’s Discourse* series, she and her colleagues provide rigorous, insight‑driven analyses of the issues shaping our technological and social landscape today.
Source: https://www.businessinsider.com/use-case-ai-videos-dumb-pranks-sora-tiktok-deepfakes-2025-10