Gone are the days when it was easy to spot “fake,” often badly Photoshopped, photos on the internet. We're now swimming in a sea of AI-generated videos and deepfakes, from fake celebrity endorsements to fake disaster broadcasts. Modern AI tools have blurred the line between reality and fiction so cleverly that it's become almost impossible to discern what's real.
And the situation is rapidly escalating. OpenAI's Sora was already causing confusion, but its viral social media app, powered by the new Sora 2 model, has become the internet's hottest and most deceptive ticket. It's a TikTok-style feed where everything is 100% fake. I call it a deepfake fever dream, and for good reason. The platform keeps getting better at making fiction look real, with significant real-world risks.
If you have trouble distinguishing reality from AI, you're not alone. Here are some tips to help you cut through the noise and get to the truth of every suspected AI video you come across.
What sets Sora videos apart
From a technical perspective, Sora's videos are a cut above competitors such as Midjourney's V1 and Google's Veo 3, with high resolution, synchronized audio and impressive creative range. Sora's most popular feature, called cameo, lets you insert a person's likeness into almost any AI-generated scene, which makes it a powerful tool for creating terrifyingly realistic videos.
That's why many experts are concerned about Sora. The app makes it easy for anyone to create dangerous deepfakes, spread misinformation and blur the line between what's real and what's not. Public figures and celebrities are especially vulnerable to these deepfakes, and unions like SAG-AFTRA are pushing OpenAI to strengthen its guardrails.
Identifying AI content is an ongoing challenge for tech companies, social media platforms and everyone else, but it's not completely hopeless. There are a few things you can look for to determine whether a video was made with Sora.
Look for the Sora watermark
Every video made with the Sora iOS app includes a watermark when it's downloaded: a white Sora logo (a cloud icon) that bounces around the edges of the video, much like the moving watermark on TikTok videos. Watermarking is one of the most important ways AI companies can help people visually identify AI-generated content; Google's Gemini “nano banana” model, for example, automatically watermarks its images. Watermarks are great because they clearly signal that content was made with AI.
However, watermarks aren't perfect. For one, a stationary watermark can easily be cropped out. Even a moving watermark like Sora's can't be fully trusted, because there are apps designed specifically to remove them. When asked about this, OpenAI CEO Sam Altman said society will need to adapt to a world where anyone can create fake videos. Of course, before Sora, there was no popular, easily accessible, skill-free way to make those videos. Still, his argument highlights a valid point: you'll need to rely on other methods to verify whether a video is trustworthy.
Check metadata
You might be thinking there's no way you're going to check a video's metadata to figure out whether it's real, and I get where you're coming from. It's an extra step, and you may not know where to start. But it's a great way to tell whether a video was made with Sora, and it's easier than you might think.
Metadata is a collection of information that's automatically attached to a piece of content when it's created, and it can tell you a lot about a photo's or video's origins. It includes things like the type of camera used, the location, the date and time the content was captured and the file name. Every photo and video has metadata, whether it was made by a human or by AI, and much AI-generated content also carries content credentials that flag its AI origins.
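If you'd like to poke at metadata yourself, a free utility like ExifTool will dump every field it can find. Here's a minimal Python sketch that shells out to ExifTool; it assumes ExifTool is installed on your system, and “video.mp4” is just a placeholder for the file you want to inspect.

```python
# Minimal sketch: dump a file's metadata with ExifTool.
# Assumes the free ExifTool utility is installed and on your PATH;
# "video.mp4" is a placeholder for whatever file you want to inspect.
import json
import subprocess

def dump_metadata(path: str) -> dict:
    """Return the file's metadata as a dictionary, via `exiftool -json`."""
    result = subprocess.run(
        ["exiftool", "-json", path],
        capture_output=True, text=True, check=True,
    )
    # exiftool -json prints a JSON array with one object per file
    return json.loads(result.stdout)[0]

if __name__ == "__main__":
    for key, value in dump_metadata("video.mp4").items():
        print(f"{key}: {value}")
```

Fields such as the creation date or the encoding software can hint at where a video came from, though ordinary metadata is easy to strip or edit, which is exactly why content credentials exist.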
OpenAI is a member of the Coalition for Content Provenance and Authenticity, which means Sora videos include C2PA metadata. You can use the Content Authenticity Initiative's verification tool to check the metadata of videos, images and documents. (The Content Authenticity Initiative is a part of the C2PA.) Here's how:
How to check the metadata of photos, videos, and documents
1. Go to this URL: https://verify.contentauthenticity.org/
2. Upload the file you want to check.
3. Click Open.
4. Review the information in the right-hand panel. If the file was generated by AI, that should be noted in the content summary section.
When you run a Sora video through this tool, it displays the video as “published by OpenAI” and notes that it was AI-generated. Every Sora video carries these content credentials, which confirm that it was made with Sora.
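If you're curious whether a file even has embedded content credentials before you upload it to the verify tool, you can do a quick local check. The sketch below is my own rough heuristic, not an official method: it simply scans a file's raw bytes for the “c2pa” label that C2PA manifests typically embed. It can't validate anything (that requires the cryptographic checks the verify tool performs), and a missing marker doesn't prove a video is authentic.

```python
# Rough heuristic: look for C2PA manifest markers in a file's raw bytes.
# This only hints that content credentials may be embedded; it does NOT
# validate them. Use the CAI verify tool for real verification.
from pathlib import Path

# Byte strings commonly present in embedded C2PA manifest stores.
C2PA_MARKERS = (b"c2pa", b"contentauth")

def has_c2pa_marker(path: str) -> bool:
    """Return True if any C2PA marker string appears in the file."""
    data = Path(path).read_bytes()
    return any(marker in data for marker in C2PA_MARKERS)

if __name__ == "__main__":
    # "video.mp4" is a placeholder for the file you want to check.
    if has_c2pa_marker("video.mp4"):
        print("Found a possible C2PA manifest; verify it with the CAI tool.")
    else:
        print("No C2PA marker found (credentials may be absent or stripped).")
```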
Like all AI detectors, this tool isn't perfect. There are plenty of ways AI videos can evade detection. If a video wasn't made with Sora, its metadata may not contain the signals the tool needs to determine whether it's AI; in my testing, for example, AI videos made with Midjourney weren't flagged. And if a Sora video was run through a third-party app (such as a watermark-removal app) and re-downloaded, the tool is less likely to flag it as AI.
The Content Authenticity Initiative's verification tool correctly flagged videos created with Sora as AI-generated, along with the date and time of creation.
Look for other AI labels, and add your own
If you use Meta's social media platforms, such as Instagram or Facebook, you may get a little extra help determining whether something is AI. Meta has internal systems that flag AI content and label it as such. Those systems aren't perfect, but flagged posts carry a clearly visible label. TikTok and YouTube have similar policies for labeling AI content.
The only truly reliable way to know whether something is AI-generated is for the creator to disclose it. Many social media platforms now offer settings that let users label their posts as AI-generated. Even a simple credit or disclosure in a caption goes a long way toward helping everyone understand how a piece of content was made.
When you're scrolling through Sora, you know nothing is real. But once an AI-generated video leaves the app and gets shared elsewhere, it becomes our collective responsibility to disclose how it was made. As models like Sora keep blurring the line between reality and AI, it's up to all of us to be as clear as possible about whether something is real or AI.
The most important thing is to remain vigilant
There's no foolproof way to tell at a glance whether a video is real or AI. The best thing you can do to avoid being fooled is to stop automatically, unquestioningly believing everything you see online. Trust your gut: if something feels unreal, it probably is. In this unprecedented age of AI content, your best defense is to look more closely at the videos you watch. Don't just glance and scroll past without thinking. Check for garbled text, objects that vanish and movement that defies physics. And don't beat yourself up if you're fooled now and then; even the experts get it wrong.
(Disclosure: CNET's parent company, Ziff Davis, filed a lawsuit against OpenAI in April, alleging that it infringed on Ziff Davis' copyrights in training and operating AI systems.)
