How to spot AI-generated videos

Generative AI is rapidly infiltrating the web, and even as we weigh its impact on education, energy, and human creativity, it's becoming increasingly difficult to trust what we see online: is it real, or is it AI-generated?

While there's currently no surefire way to detect AI content every time, there are some telltale signs in computer-generated text, audio, images, and video that are worth watching for. With a little human intelligence thrown in, it's usually possible to spot what is likely AI-generated.

Here, we'll focus on AI-generated videos created by tools like OpenAI's Sora, with some examples. Next time you come across a video you're not sure about, check how well it performs against these criteria.

Garbled text

You'll notice that text is missing from many AI videos (and images). Generative AI doesn't understand characters or language the way humans do, so it's not very good at rendering text. AI-generated signs often look like they're written in an alien language, so be wary of garbled text or no text at all.

That's not to say that good text never appears in AI videos, but if it does, it was probably added later. In this Monster Camp trailer (embedded below) generated by Luma AI, some of the signs look fine (and were likely added manually), but look for the gibberish on the buses and fair stalls; the strange text doesn't show for long, so you have to watch carefully.
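If you want to check on-screen text more rigorously, one option is to run a frame through an OCR tool: legible human writing usually comes back as recognizable words, while AI gibberish comes back as noise or nothing at all. Here's a minimal Python sketch using the open-source opencv-python and pytesseract libraries (neither is mentioned above; the filename and the 12-second timestamp are placeholders, not references to any specific clip):

```python
import cv2  # pip install opencv-python
import pytesseract  # pip install pytesseract (also needs the Tesseract binary installed)

# Grab a single frame from a suspect clip and run OCR on it.
# "suspect_video.mp4" and the 12-second mark are placeholders.
cap = cv2.VideoCapture("suspect_video.mp4")
cap.set(cv2.CAP_PROP_POS_MSEC, 12_000)  # seek to 12 seconds in
ok, frame = cap.read()
cap.release()

if ok:
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV loads BGR; Tesseract expects RGB
    text = pytesseract.image_to_string(rgb)
    # Clearly visible signage that OCRs to noise (or nothing at all) is a red flag.
    print(repr(text))
```

This isn't proof either way, of course; it's just a quick way to confirm what your eyes suspect about a sign that won't resolve into real words.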

Quick cuts (or slow cuts)

Another characteristic of AI-generated videos: the cuts are often very short and the action moves very quickly. This, too, helps hide inconsistencies and inaccuracies in the video you're shown – giving the impression of something real, without being the real thing itself.

On the other hand, an AI video can make the action extremely slow, which sounds like a contradiction of the above, but the end goal is the same: to hide the seams as well as the AI possibly can.

In this AI-generated music video for Washed Out (embedded below), it's the former: everything is quick and gone before you can get a good look at it. Try pausing the video at various points to see how many oddities you can spot (we noticed at least one person blending into the wall).
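Pausing a streaming player at exactly the right moment can be fiddly. If you have the clip as a file, a short script can dump a frame every couple of seconds so you can study them at your leisure. A sketch using opencv-python again (the filename and the two-second interval are arbitrary assumptions):

```python
import cv2  # pip install opencv-python

# Save one frame every two seconds for unhurried inspection.
# "suspect_video.mp4" and the interval are placeholders.
cap = cv2.VideoCapture("suspect_video.mp4")
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back to 30 if the metadata is missing
step = int(fps * 2)

index = saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if index % step == 0:
        cv2.imwrite(f"frame_{saved:03d}.png", frame)
        saved += 1
    index += 1

cap.release()
print(f"Saved {saved} frames to the current folder")
```

Flipping through the saved stills makes it much easier to catch the melting faces and merging bodies that quick cuts are designed to hide.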

Bad physics

Generative AI knows how to mash up moving pixels into something that resembles a cat or a city or a castle, but it doesn't understand the world, 3D space, or the laws of physics: people will vanish behind buildings, they'll look different from scene to scene, buildings will take on odd shapes, furniture won't line up properly, and so on.

Consider this drone shot (embedded below) created by OpenAI's Sora. Notice the crowd of people walking in the bottom-left corner of the scene. They merge into one another and eventually blend into the railing, because the AI sees them as pixels rather than people.

Uncanny valley

AI videos often have an unnatural sheen to them, and the term “uncanny valley” is widely used to describe the unease caused by computer-generated graphics that almost, but not quite, replicate reality. When watching AI videos, it's common to experience the uncanny valley, even if only for a moment.

If you watch the branding film that Toys R Us created with the help of Sora, you'll notice that the young boy's smile and hair movement are suspiciously unnatural. He also looks like a different child in each scene, but that's because he's not an actor; he's a 2D representation generated from what the AI thinks a boy should probably look like.

Perfect (or imperfect) elements

This is another paradox: AI videos can give themselves away through elements that are either too perfect or not perfect enough. These clips are computer-generated, after all, so the designs of buildings, vehicles, or materials may repeat over and over in patterns too regular to exist in reality.

Meanwhile, AI continues to struggle with anything natural: hands, chins, leaves moving in the wind. In this video of a running astronaut from Runway AI, you'll notice that the hands are all messed up (much of the physics in the background is wrong too, and the text smears).

Check the context

Finally, you can fall back on the tools we already have for identifying misinformation. Photoshop existed before generative AI, so some of the rules for spotting fakes remain the same.

Context is important. If a video comes from an official social media account, like The New York Times, it's more likely to be trustworthy. If it's reshared by a faceless account on X with a string of numbers in its username, it's less so.

See whether the action in the video has been captured from other angles or widely reported elsewhere, and ask whether it actually makes sense. It's also worth checking with anyone featured in the video – if Morgan Freeman is apparently narrating in the background, look for him (or his representatives) confirming it somewhere.


