Google is furthering its AI transparency efforts by extending Gemini's verification tools to video content created or edited with its own AI models. The move is aimed at letting users quickly check whether a video was generated using Google AI, as concerns about deepfakes grow.
With this update, users can now upload videos to Gemini and directly ask, “Was this generated using Google AI?” Gemini then analyzes both the visuals and the audio for Google's proprietary watermark, known as SynthID. Unlike simple detection tools, Gemini goes beyond basic confirmation: it highlights the exact moment in the video or its audio track where a watermark appears.
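For developers, the closest public analogue to this workflow is Google's google-genai Python SDK, which can upload a video file and prompt a Gemini model about it. The sketch below is illustrative only: the model name is an assumption, and whether the SynthID check behaves the same over the API as it does in the Gemini app is not confirmed by Google.

```python
# Hypothetical sketch: asking Gemini about a video via the google-genai SDK.
# Assumptions: the SynthID check is exposed over the API at all, and
# "gemini-2.5-flash" is a suitable model; "clip.mp4" is a placeholder file.
import time
from google import genai

client = genai.Client()  # reads the API key from the environment

# Upload the clip (the app-side caps are 100 MB and 90 seconds).
video = client.files.upload(file="clip.mp4")

# Video uploads are processed asynchronously; wait until the file is usable.
while video.state.name == "PROCESSING":
    time.sleep(2)
    video = client.files.get(name=video.name)

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=[video, "Was this generated using Google AI?"],
)
print(response.text)  # in the app, the answer points to where SynthID appears
```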

Google first introduced this verification feature for images in November, and that rollout was likewise limited to content created or edited with Google AI tools. By extending it to video, the company is addressing a format at the center of AI misuse and misinformation.
However, watermarking remains an imperfect solution. Some watermarks can be easily removed, as OpenAI discovered after launching Sora, which produces fully AI-generated videos. Google describes SynthID as “imperceptible”, suggesting it is difficult to scrub, but it remains unclear how resilient the watermark is to deliberate removal, or whether other platforms can reliably detect and label SynthID-tagged content.
The problem is broader than watermark strength. Google's Nano Banana image generation model within Gemini embeds C2PA metadata in its output, but there is still no unified labeling system across social platforms. As a result, AI-generated content can circulate without clear labels, and deepfakes can slip past moderation systems.
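Unlike proprietary watermarks, C2PA manifests can be inspected with open tooling from the Content Authenticity Initiative. A minimal sketch, assuming the open-source c2patool CLI is installed and the file actually carries a manifest (the filename is a placeholder):

```python
# Hedged sketch: dumping a file's C2PA provenance manifest with c2patool
# (https://github.com/contentauth/c2patool). Assumes the tool is on PATH
# and the image was produced by a C2PA-aware generator such as Gemini.
import json
import subprocess

result = subprocess.run(
    ["c2patool", "banana.png"],  # prints the manifest store as JSON
    capture_output=True, text=True, check=True,
)
manifest = json.loads(result.stdout)

# Fields such as claim_generator identify the tool that produced the file.
print(json.dumps(manifest, indent=2))
```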
For now, Gemini's video verification has clear limitations. The tool supports videos up to 100 MB in size and up to 90 seconds long. Google says the feature is available in all languages and regions where the Gemini app already operates.
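Those caps are straightforward to validate client-side before uploading. A minimal sketch, assuming FFmpeg's ffprobe is available; the helper name is illustrative:

```python
# Pre-check a video against Gemini's stated upload limits (100 MB, 90 s).
# Uses ffprobe from FFmpeg to read the duration; requires FFmpeg installed.
import os
import subprocess

MAX_BYTES = 100 * 1024 * 1024   # 100 MB size cap
MAX_SECONDS = 90                # 90-second length cap

def fits_gemini_limits(path: str) -> bool:
    """Illustrative helper: True if the file is within both stated caps."""
    if os.path.getsize(path) > MAX_BYTES:
        return False
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format=duration",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        capture_output=True, text=True, check=True,
    )
    return float(out.stdout.strip()) <= MAX_SECONDS

print(fits_gemini_limits("clip.mp4"))  # "clip.mp4" is a placeholder
```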
