Google is introducing a new Gemini feature that can inspect a video and tell you whether any of it was created or edited with Google's own AI. The feature scans both the visuals and the audio for SynthID, an invisible watermark developed by Google DeepMind to mark AI-generated media. Some technologists call this a major step forward in efforts to combat fake footage, at a moment when increasingly realistic synthetic videos are exploding across the web.
How Gemini uses SynthID watermarks to flag AI videos
The Gemini app lets you upload a video and ask whether the footage contains AI-generated parts. Gemini checks both the frames and the soundtrack for SynthID and returns a structured answer reporting where the watermark was found. It can, for example, flag a watermark in the audio track during an interval where the visuals carry no marker at all.


Detection works in every language the app supports and is currently limited to uploads of up to 100MB and roughly 90 seconds in length. SynthID embeds imperceptible signals in the pixels and the audio that are designed to survive common edits such as compression, trimming, and re-encoding (operations that often strip traditional metadata). That robustness lets Gemini make segment-level calls rather than issuing a single yes-or-no verdict for the whole clip.
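Google has not published a formal schema for these results, but the segment-level idea is easy to picture. Below is a purely illustrative Python sketch; the class and field names are my own assumptions, not Gemini's actual API or output format.

```python
from dataclasses import dataclass

@dataclass
class WatermarkSegment:
    """One detection span in a scanned clip (illustrative, not Gemini's schema)."""
    start_s: float   # where the span begins, in seconds
    end_s: float     # where the span ends, in seconds
    track: str       # "video" or "audio"
    detected: bool   # True if a SynthID watermark was found in this span

# Hypothetical report for a clip whose soundtrack is partly AI-generated
# while the visuals carry no watermark at all:
report = [
    WatermarkSegment(start_s=0.0, end_s=90.0, track="video", detected=False),
    WatermarkSegment(start_s=12.0, end_s=47.5, track="audio", detected=True),
]
```

A per-track, per-interval structure like this is what lets the tool surface cases such as AI-generated audio dubbed under genuine footage.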
What SynthID detection does and doesn't cover
There is one important caveat: Gemini can only detect SynthID. It can confirm material created or edited with Google's AI tools, and with tools from certain partners that support SynthID, but it cannot vouch for media from systems that don't use the watermark. In other words, a video produced with another model may come back as "clean" simply because it carries no compatible watermark.
Google is offering SynthID to other industry players, and companies such as NVIDIA and Hugging Face are already experimenting with integrations. The broader ecosystem remains fragmented, however: some labs embed their own watermarks, others rely on metadata-level approaches, and plenty of media circulates with no provenance signal at all. "No detection" therefore means "no SynthID found" and cannot be taken as evidence of authenticity.
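That asymmetry suggests a three-state reading of any scan. Here is a minimal triage sketch, reusing the illustrative WatermarkSegment objects from above; the wording of the verdicts is mine, not Google's:

```python
def interpret_scan(segments) -> str:
    """Turn a SynthID scan into a cautious verdict. The absence of a
    watermark is evidence of nothing beyond 'no SynthID found'."""
    if any(s.detected for s in segments):
        return "AI involvement confirmed (Google tools or SynthID partners)"
    return "Inconclusive: no SynthID found; other generators may leave no mark"
```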
Why this matters for misinformation and public trust
The ability to label AI-edited segments adds much-needed nuance to verification workflows for journalists, platforms, and civil society groups. It's not just about catching fully synthetic deepfakes: segment-level flags can reveal partial manipulations, such as synthetic audio dubbed over real footage, which are often more convincing than outright fabrications.
The public's anxiety is real: according to the Reuters Institute's Digital News Report, a majority of people say they worry about telling what is real from what is fake online. High-profile incidents, from voice-cloning robocalls to manipulated war footage, have shown how quickly synthetic media spreads and how slow traditional verification methods can be without better digital tools.


Where SynthID fits among emerging standards and policies
Watermarks are just one piece of the provenance puzzle. The Coalition for Content Provenance and Authenticity (C2PA) is pushing signed metadata, called Content Credentials, that attach a tamper-evident record of a file's origin and edit history. The approach is backed by Adobe and a roster of major tech companies and publishers that includes Google. At the policy level, NIST and regulators in several jurisdictions are already calling for greater transparency around synthetic media.
Each method has tradeoffs. Invisible watermarks such as SynthID survive common editing operations but can degrade when content is heavily transcoded or re-filmed off a screen. Cryptographic provenance is strong end to end when present, but its metadata can be stripped by platforms or broken by incompatible workflows. The most robust posture is defense in depth: provenance metadata by default, watermarking as a backup, and platform-level disclosure on top.
What users should know before using Gemini's detection
Treat a Gemini detection as a signal, not a verdict. A positive SynthID hit is strong evidence that Google's AI was involved. A negative result should be treated as inconclusive and corroborated with other methods:
- Run keyframes through reverse image search (a scripted version of this step follows the list)
- Check that shadows and reflections are physically consistent
- Listen for audio artifacts that don't sound right
- Cross-check with trusted outlets or fact-checkers for further confirmation
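The first step is easy to script. Here is a minimal Python sketch using OpenCV that samples one frame every couple of seconds; the interval and file naming are arbitrary choices for illustration, and the saved frames can then be uploaded to any reverse image search engine:

```python
import cv2  # pip install opencv-python

def sample_frames(video_path: str, every_s: float = 2.0) -> list[str]:
    """Save one frame every `every_s` seconds for reverse image search."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS metadata is missing
    step = max(1, int(fps * every_s))
    saved, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            name = f"frame_{index:06d}.png"
            cv2.imwrite(name, frame)
            saved.append(name)
        index += 1
    cap.release()
    return saved

# Example: frames = sample_frames("clip.mp4"), then run each saved PNG
# through a reverse image search to hunt for the original source.
```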
Also consider privacy and context: only upload clips you actually have permission to analyze, and keep the file-size and length limits in mind. As platforms like YouTube roll out disclosure labels for synthetic media and newsrooms build provenance checks into their workflows, detectors like Gemini are best folded into larger verification processes rather than used as standalone solutions.
Bottom line: Gemini's new video detection is another worthwhile step toward clearer media provenance. It won't catch everything, but by flagging where AI was used, and doing so at the segment level, it gives users, creators, and fact-checkers a practical tool for separating signal from noise in feeds that are more synthetic than ever.
