Google has broadened Gemini's AI verification feature beyond static images to cover videos produced or modified with the company's own AI tools. Users can now upload a video to Gemini and ask a simple question: "Was this generated using Google AI?" In response, the model scans the video's frames and audio for Google's digital watermark, SynthID.
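For developers, the same question could in principle be posed programmatically. The sketch below uses Google's google-genai Python SDK; the article only describes the feature inside the Gemini app, so whether the verification answer is exposed through the API, along with the model and file names used here, are assumptions.

```python
# Hypothetical sketch: asking Gemini about a video through the google-genai SDK.
# The article covers the Gemini app; API-side support is an assumption.
import time

from google import genai

client = genai.Client()  # reads the GOOGLE_API_KEY environment variable

# Upload the clip via the Files API and wait for server-side processing.
video = client.files.upload(file="clip.mp4")  # placeholder file name
while video.state.name == "PROCESSING":
    time.sleep(2)
    video = client.files.get(name=video.name)

response = client.models.generate_content(
    model="gemini-2.5-flash",  # assumed model; any video-capable Gemini model
    contents=[video, "Was this generated using Google AI?"],
)
print(response.text)
```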

Unlike verification tools that return only a yes-or-no answer, Gemini goes further: its assessment pinpoints the specific frames or timestamps in the video or audio where the SynthID watermark is detectable. According to Google, this more detailed response is meant to foster transparency, letting users trace exactly how and where AI contributed to the creation or alteration of content. The video capability follows a similar verification feature for still images, which debuted in November and was likewise limited to media generated or edited with Google's AI models.
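Because the answer arrives as prose rather than structured data, a caller who wanted the cited timestamps would have to extract them from the text. The reply string below is invented purely to illustrate the idea; the actual wording of Gemini's answers is not specified in the article.

```python
import re

# Invented example reply; the real response format is not documented here.
reply = "SynthID watermark detected between 0:04 and 0:12, and again at 0:37."

# Pull out anything that looks like an m:ss timestamp.
timestamps = re.findall(r"\d+:\d{2}", reply)
print(timestamps)  # ['0:04', '0:12', '0:37']
```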

Watermarking technology is not without its challenges, however. As OpenAI's experience with its Sora app, a platform built around fully AI-generated videos, has demonstrated, some watermarking methods can be removed or altered. Google, in contrast, emphasizes that SynthID was engineered to be "imperceptible": woven into the digital structure of the audiovisual content so that it remains invisible to the human eye while staying machine-detectable. Even so, questions persist about how resistant SynthID really is to alteration or deletion, and about whether external platforms, especially social networks and third-party detection systems, will consistently recognize and label watermarked media as AI-generated.

Within the Gemini ecosystem, Google's Nano Banana image-generation model also embeds C2PA metadata, adding a second layer of authenticity verification and content-provenance tracking. The broader digital environment, however, still lacks unified standards for tagging AI-created material, and the absence of coherent cross-platform metadata policies leaves significant blind spots in online content monitoring, gaps that allow synthetic media such as deepfakes to circulate widely without reliable identification.
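Unlike the invisible watermark, C2PA manifests are ordinary metadata that anyone can inspect with open tooling. The sketch below uses the Content Authenticity Initiative's open-source c2pa-python bindings as documented in that project's README; the file name is a placeholder, and whether a given Nano Banana image retains its manifest after re-saving or re-uploading is not guaranteed.

```python
# Minimal sketch: reading a C2PA manifest with the c2pa-python bindings.
# The file name is a placeholder.
from c2pa import Reader

reader = Reader.from_file("generated_image.jpg")
# The manifest store is returned as JSON describing each signed claim
# about how the asset was created or edited.
print(reader.json())
```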

From a technical perspective, Gemini's verification mechanism currently supports videos of up to 100 megabytes and 90 seconds in length, limits that keep the tool responsive while still handling moderately large files; a simple client-side pre-check against these caps is sketched below. Google has also made the feature available in every language and region where the Gemini app operates, putting the detection tool within reach of users across the globe. Taken together, the update is a meaningful step toward transparency and accountability at a time when AI-generated media is increasingly common.
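Given those caps, a client could cheaply reject oversized or overlong files before uploading. This is an illustrative helper, not part of any Google SDK; it reads the duration with ffprobe from the FFmpeg suite and assumes binary megabytes for the size cap.

```python
import os
import subprocess

MAX_BYTES = 100 * 1024 * 1024  # 100 MB cap from the article (binary MB assumed)
MAX_SECONDS = 90               # 90-second cap from the article

def fits_gemini_limits(path: str) -> bool:
    """Return True if the video is within Gemini's stated upload limits."""
    if os.path.getsize(path) > MAX_BYTES:
        return False
    # ffprobe prints the container duration in seconds as bare text.
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format=duration",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        capture_output=True, text=True, check=True,
    )
    return float(out.stdout.strip()) <= MAX_SECONDS

print(fits_gemini_limits("clip.mp4"))  # placeholder file name
```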

Source: https://www.theverge.com/news/847680/google-gemini-verification-ai-generated-videos