Following the introduction of image verification in Gemini in November, Google is further strengthening content transparency.
For reference, Gemini gained the ability to detect whether an image is real or AI-generated in November. Google is now extending that same functionality to video, letting Gemini scan uploaded clips for the same invisible AI digital fingerprint.
Highlighted in a new blog post, the feature joins Google's content transparency tools, which primarily leverage SynthID watermarks to identify AI-generated content.
Checking whether a video was generated by AI works the same way as checking an image. Just upload it to the Gemini app and ask, “Was this created by Google AI?” or “Is this generated by AI?”
Limitations: Google-only detection
The first prompt above also hints at one of the detector's main limitations: Gemini can only confirm that a video is AI-generated if it was made with one of Google's own tools. The same limitation applies to Gemini's image detection. When I shared a random video of my room, Gemini told me the video “was not generated by Google AI,” but that it could not determine whether it was generated by other AI tools.
Another limitation: uploads are capped at 100 MB in size and 90 seconds in length.
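As a rough illustration, a client-side pre-check against these limits could look like the Python sketch below. The 100 MB and 90 second caps come from the article; the function name and messages are my own, not any real Google API.

```python
# Hypothetical pre-upload check against Gemini's video verification
# limits (100 MB / 90 s). Illustrative only -- not a Google API.

MAX_SIZE_BYTES = 100 * 1024 * 1024  # 100 MB
MAX_DURATION_SECONDS = 90

def check_upload(size_bytes: int, duration_seconds: float) -> list[str]:
    """Return a list of limit violations; an empty list means the clip is OK."""
    problems = []
    if size_bytes > MAX_SIZE_BYTES:
        problems.append(
            f"file is {size_bytes / 1_048_576:.1f} MB, limit is 100 MB"
        )
    if duration_seconds > MAX_DURATION_SECONDS:
        problems.append(
            f"clip is {duration_seconds:.0f} s, limit is 90 s"
        )
    return problems
```

A clip of 50 MB and 60 seconds passes cleanly, while a 200 MB, two-minute clip would trip both checks.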
Gemini scans both the audio and visual tracks for imperceptible SynthID watermarks, uses its own inference to provide context, and returns a response specifying which segments contain elements generated with Google AI.
Gemini might then respond with something like “SynthID detected in 5-10 seconds of visual. No SynthID detected in audio,” or a similar combination, depending on the content in question.
Video origin verification is now available in the Gemini app on web and mobile, in all languages and countries the app supports.
