What you see online may not be true. In the age of AI, your eyes can fool you.
As artificial intelligence evolves rapidly, the internet is flooded with videos that look real but aren't. From fake interviews to synthetic news reports, deepfakes have become difficult to detect: they no longer give themselves away with awkward smiles and stiff gestures, and spotting them now requires more than noticing twitchy eyes and odd expressions.
Tools like DALL-E and Veo 3 can generate stunning visuals and cinematic videos from just a prompt. This technology opens up new creative possibilities, but it also poses significant risks.
The challenge? These videos often look indistinguishably realistic.
Also Read: 5 tools for detecting audio deepfakes
For example, close observation of a video posted by an X user in June showed that a female journalist reporting on floods in Abeokuta displayed subtle but telling signs of AI generation. Despite the human-like movement and natural-sounding speech, several inconsistencies gave it away. For instance, during a quick turn by the presenter, an umbrella suddenly appears in her hand that was not there a moment before.

Furthermore, her indifferent reaction to someone falling nearby, an event that would normally prompt concern, suggests the video depicts unnatural human behavior. Several such videos have gone viral.
Another example is a video showing a collapsed elevated bridge. The scenery and structure look realistic, but a white car driving toward the collapsed section is implausible: not the behavior you would expect in the face of danger.

As an audience member, journalist, or active digital citizen, spotting AI-generated content is more than just a useful skill. It's essential.
This tutorial will show you how to recognize videos created with tools such as DALL-E (which generates images that can be edited into video sequences) and Veo (which generates entire video scenes from a prompt).
Look for visual abnormalities
AI-generated videos are increasingly sophisticated, but they often reveal their artificial nature through subtle glitches and inconsistencies. Watch for strange lighting and mismatched shadows, distorted hands and fingers (still common in image-to-video conversions from tools like DALL-E), unnatural eye movements and blinking, and warped or melted objects in the background.
With tools like Veo 3, robotic body movements, excessively smooth transitions, garbled or inconsistent subtitles, and audio that is too perfect or lacks ambient noise can all signal synthetic content. Even complex scenes may contain inconsistently placed or lit objects, so careful observation is essential.
Too perfect, still fake
Veo can produce hyperrealistic cinematic scenes, especially in its latest version (Veo 3), but the results often feature subtle giveaways. Look for overly polished environments such as spotless skies, mirror-smooth water, and unnaturally fluid movements.
Transitions between scenes or camera angles can also be jarring or robotic. Subjects may fail to blink, breathe, or shift naturally, giving off an eerie quality. Veo 3 can also produce scenarios that defy logic and physics, or conjure backgrounds with impossible events, flawless characters, or strange repeating patterns that don't belong in real-world settings.
Context is equally important. For example, Veo 3 is a paid tool, so a sudden flood of short, high-quality videos about breaking news could indicate AI involvement.
A recent example is the overpass bridge accident in Nasarawa, where AI-generated images circulated to support the claims, but FactCheckHub found these visuals to be misleading.
Similarly, if a video claims to show a major event that no reputable outlet has reported, that is a red flag.
Examine the metadata
Tools like InVID help verify a video's authenticity by extracting metadata such as timestamps, GPS coordinates, and device information. AI-generated videos often lack this data or contain vague, generic tags. If the original video file is accessible, inspect its metadata. Telltale signs include tags such as "Veo", "Gemini", or "Google AI".
Note that metadata can be stripped or modified, so it should not be your only line of verification.
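As a quick first pass before reaching for a full tool like InVID, you can scan a file's raw bytes for the tag strings mentioned above, since container metadata is often stored as plain ASCII. The sketch below is a minimal illustration: the marker list is illustrative, not exhaustive, and an empty result proves nothing, because metadata is easily stripped.

```python
# Crude byte-level scan for AI-tool markers sometimes left in video metadata.
# Illustrative marker list only; real files may carry other markers or none.
AI_MARKERS = [b"Veo", b"Gemini", b"Google AI", b"DALL-E", b"AI-generated"]

def scan_for_ai_markers(data: bytes) -> list:
    """Return any known AI-tool marker strings found in raw file bytes.

    Many MP4/MOV metadata atoms store tags as plain ASCII, so a byte
    scan can surface them without parsing the container. Absence of
    markers is NOT evidence of authenticity.
    """
    return [m.decode() for m in AI_MARKERS if m in data]

# Usage: scan_for_ai_markers(open("clip.mp4", "rb").read())
```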
Read: AI Voice Cloning Scam, How to Protect Yourself from Audio Deepfakes – Experts
Use the detection tool
Several tools can help you detect AI-generated or manipulated media. Deepware Scanner is designed to find facial manipulation in deepfakes, while Hive Moderation uses AI detection to flag synthetic images or video frames. InVID is particularly useful for video verification, allowing frame-by-frame analysis and metadata extraction. Another useful tool is AI or Not, which can help you determine whether an image or video was created using artificial intelligence.
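The spirit of frame-by-frame analysis can be sketched with a simple frame-difference check: uniformly tiny differences between consecutive frames can indicate unnaturally smooth motion, while sharp spikes flag objects popping into existence (like the umbrella in the Abeokuta example). This is a toy illustration, not how InVID works internally; frames are represented as flat lists of grayscale values and the threshold is a made-up assumption.

```python
def frame_diff(a, b):
    """Mean absolute difference between two equal-size grayscale frames
    (each frame a flat list of 0-255 pixel values)."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def flag_abrupt_cuts(frames, threshold=40.0):
    """Return indices of frames that differ sharply from their predecessor.

    threshold is an illustrative value; real footage needs tuning.
    Spikes can mark hard edits or objects appearing between frames.
    """
    return [i for i in range(1, len(frames))
            if frame_diff(frames[i - 1], frames[i]) > threshold]
```

In practice you would decode real frames first (e.g. with FFmpeg or OpenCV) and inspect the flagged ones manually.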
FactCheckHub has used one or more of these tools to verify images and videos in previous fact-checks, as you can see here and here.
Look for disclosure
Some creators voluntarily label or watermark their AI-generated content, so it's worth checking. Look for phrases such as "AI-generated", "Created using DALL-E", or "Veo 3" in the caption, file name, or project description, especially if they include prompt-style language.
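A disclosure check like this is easy to automate over captions and file names. The sketch below uses the phrases named above plus a hypothetical prompt-style cue; the list is an assumption for illustration, and a negative result only means no disclosure was found, not that the content is authentic.

```python
# Illustrative disclosure phrases; "prompt:" stands in for prompt-style language.
DISCLOSURE_PHRASES = ["ai-generated", "created using dall-e", "veo 3", "prompt:"]

def has_ai_disclosure(text: str) -> bool:
    """True if a caption, file name, or description contains a known
    AI-disclosure phrase (case-insensitive substring match)."""
    t = text.lower()
    return any(phrase in t for phrase in DISCLOSURE_PHRASES)
```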
Google embeds both visible and invisible watermarks in every Veo 3 video. You may find small logos or "AI-generated" text within the frame. More subtly, Veo 3 videos carry invisible digital signatures that only specialized tools can detect. The public cannot access SynthID scanners, but certain fact-checkers and organizations can use them to trace a video's origin.

Veteran fact-checker and researcher Fatima Quadri has written many fact-checks, explainers, and media-literacy pieces for FactCheckHub to combat information disorder. She can be contacted on X (sunmibola_q) or at fquadri@icirnigeria.org.
