CHICAGO – AI video is no longer new. It’s everywhere, nestled in feeds between news stories, cat videos and political rants. And it’s realistic enough that people stop mid-scroll and wonder, “Wait, is this real?”
This question popped up in our own comments section after FOX Chicago posted on TikTok about the recent cold snap.
Some viewers joked that the speakers looked AI-generated. Others genuinely weren’t sure. The whole exchange was a strange reminder that the line between real and synthetic video has become uncomfortably blurred.
This matters more than you might think. AI video tools are advancing faster than the systems designed to flag and regulate them.
With election season in full swing and misinformation spreading like wildfire across social platforms, spotting synthetic video is becoming less a specialist skill and more a basic online survival tactic.
What to look for:
According to detection researchers, including work from the MIT Media Lab, hands are the first clue. AI still can’t render them reliably. Look for extra fingers, odd bends, and hands that flicker or disappear between frames. If someone’s hands or gestures seem off, trust your instincts.
The eyes and mouth are giveaways, too. MIT researchers studying deepfake detection note that in AI videos, lips often aren’t perfectly synchronized with the audio.
Blinking also gets weird: too often, too slow, or oddly mechanical. Humans blink without thinking. AI thinks too hard about it, and it shows.
Don’t ignore the background. AI-generated scenes look smooth at first but distort when you zoom in. Door frames bend. Objects blur for no reason. Details shift slightly between frames, something that doesn’t happen in real footage.
And then there’s intuition. Many AI videos aim for perfection, and that’s exactly the problem. Skin looks airbrushed. The lighting seems too clean, too studio-perfect, for a supposedly casual scene. Cybersecurity experts say our instincts usually flag something when it feels robotic or unnaturally polished.
Why this happens:
AI tools are now cheap, fast, and available to anyone with a phone and five minutes. No technical skill is needed to create a convincing video. Social platforms, meanwhile, reward engagement rather than accuracy, which lets the confusion spread.
Digital literacy researchers have found that comment sections have become an early warning system. When viewers start debating whether something is real or not, it’s often a sign that the AI is working as designed – blending in seamlessly.
Where the guardrails stand:
Social media companies are testing labels and detection tools, but media forensics experts say these systems are inconsistent and easily circumvented. Until stronger standards are established, most of the burden falls on us, the viewers.
Government initiatives are beginning to take shape. President Donald Trump signed the Take It Down Act in May 2025, the first federal law targeting AI-generated content. It criminalizes non-consensual intimate images and deepfakes and requires platforms to establish removal processes by May 2026.
As of early 2026, 46 states have passed their own deepfake laws, most of which focus on election integrity, intimate images, and disclosure requirements for political content. But the legal landscape remains a patchwork, as Congress has not addressed broader regulation of deepfakes in news and misinformation.
The best defense right now, according to the researchers studying this, is simple: slow down. Look closely. Pay attention to the details that seem a little off.
Feeds are built to keep you scrolling at top speed. Seeing through a fake depends on taking an extra second to really look at what you’re seeing.
source: Information for this article was reported by Terrence Lee of FOX Chicago.
