Hiding secret codes in light protects against fake videos

Fact-checkers may have a new tool in the fight against misinformation.

A team of Cornell researchers has developed a way to “watermark” light in video, which can be used to detect whether footage is fake or has been manipulated.

The idea is to hide information in nearly imperceptible fluctuations of lighting at important events and locations, such as interviews and press conferences, or even entire buildings, like the United Nations headquarters. These fluctuations are designed to go unnoticed by humans, but are recorded as a hidden watermark in any video captured under the special lighting, which can be programmed into computer screens, photography lamps and built-in lighting. Each watermarked light source has a secret code that can be used to check for the corresponding watermark in the video and reveal malicious edits.

Peter Michael, a doctoral student in the field of computer science who led the work, will present the research, “Noise-Coded Illumination for Forensic and Photometric Video Analysis,” on Aug. 10 at SIGGRAPH 2025 in Vancouver, British Columbia.

Editing video footage in misleading ways is nothing new. But with generative AI and social media, spreading misinformation is faster and easier than ever before.

Michael works in Davis' lab in Gates Hall.

“Video used to be treated as a source of truth, but that's no longer an assumption we can make,” said Abe Davis, assistant professor of computer science in the Cornell Ann S. Bowers College of Computing and Information Science. “Now you can pretty much create video of whatever you want. That can be fun, but also problematic.”

To address these concerns, researchers have previously designed techniques that watermark digital video files directly, using small changes to specific pixels to flag unmanipulated footage or to identify video generated by AI. However, these approaches rely on the video's creator using a specific camera or AI model, an unrealistic level of compliance to expect from potential bad actors.

By embedding the code in the lighting itself, the new method ensures that any real video of the subject will contain the secret watermark, regardless of who captures it. The team showed that programmable light sources, such as computer screens and certain types of room lighting, can be coded with a small piece of software, while older lights, like many off-the-shelf lamps, can be coded by attaching a small computer chip about the size of a postage stamp. A program on the chip varies the brightness of the light according to the secret code.
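
In rough terms, the scheme amounts to deriving a repeatable, noise-like sequence from a secret seed and using it to nudge a light's brightness by a small amount. The sketch below is only an illustration of that idea, not the team's implementation; the seed, sample count, 2% amplitude and function names are all assumptions invented for the example.

```python
# Minimal sketch, assuming a simple zero-mean pseudorandom code:
# a secret seed yields a repeatable sequence of small brightness
# offsets that a chip (or software) applies to a light source.
import numpy as np

def generate_code(seed: int, n_samples: int, amplitude: float = 0.02) -> np.ndarray:
    """Derive a zero-mean, noise-like brightness code from a secret seed."""
    rng = np.random.default_rng(seed)
    code = rng.standard_normal(n_samples)
    code -= code.mean()               # zero mean: average brightness is unchanged
    code /= np.max(np.abs(code))      # normalize to [-1, 1]
    return amplitude * code           # keep fluctuations to a few percent

def modulate(base_brightness: float, code: np.ndarray) -> np.ndarray:
    """Brightness the light would emit at each time step."""
    return np.clip(base_brightness * (1.0 + code), 0.0, 1.0)

levels = modulate(0.8, generate_code(seed=42, n_samples=240))
```

Anyone who knows the seed can regenerate the same sequence later and check whether a video's lighting actually carries it.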

So what information is hidden in these watermarks? And how does it reveal when a video is fake? “Each watermark carries a low-fidelity version of the unmanipulated video, captured under slightly different lighting. We call these code videos,” Davis said. “When someone manipulates a video, the manipulated parts begin to contradict what we see in these code videos, which tells us where changes were made.”
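
Conceptually, verification can be thought of as checking how each pixel's brightness over time co-varies with the secret code. The following toy sketch is a simplified stand-in for that step, not the paper's method: it produces a rough correlation map rather than full code videos, and it ignores real-world complications such as frame alignment, exposure and camera response.

```python
# Simplified stand-in for verification: correlate each pixel's
# brightness over time with the secret code. Pixels lit by the coded
# source co-vary with it; pasted-in or AI-generated regions should not.
import numpy as np

def recover_code_map(frames: np.ndarray, code: np.ndarray) -> np.ndarray:
    """frames: (T, H, W) grayscale video; code: (T,) secret code sequence."""
    centered = frames - frames.mean(axis=0)          # remove static appearance
    code_c = code - code.mean()
    corr = np.tensordot(code_c, centered, axes=([0], [0]))
    return corr / (len(code) * centered.std(axis=0) * code_c.std() + 1e-8)

def flag_tampering(corr_map: np.ndarray, threshold: float = 0.2) -> np.ndarray:
    """Regions whose correlation with the code is suspiciously weak."""
    return np.abs(corr_map) < threshold
```

In this toy version, regions that fail to correlate with the code stand out, which matches the article's description of altered areas contradicting the recovered code videos.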

Part of the challenge in this work was to make the code nearly imperceptible to humans. “We used research from the human perception literature to inform our design of the coded light,” Michael said. “The code is designed to look like the random variations that already occur in light, known as noise. This makes it hard to detect unless you know the secret code.”
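
To give a sense of what “noise-like” could mean in practice, here is a purely hypothetical sketch that filters white noise toward lower frequencies, where natural lighting variation tends to live. The roll-off shape and cutoff are invented for this example; the paper's actual perceptually informed design is more principled than this.

```python
# Hypothetical illustration of the "looks like noise" goal: shape white
# noise so its energy falls off at higher frequencies, loosely mimicking
# natural light fluctuations. Not the perceptual model from the paper.
import numpy as np

def noise_shaped_code(seed: int, n_samples: int, cutoff_bins: int = 8) -> np.ndarray:
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n_samples)
    spectrum = np.fft.rfft(white)
    rolloff = 1.0 / (1.0 + np.arange(spectrum.size) / cutoff_bins)  # approx. 1/f shaping
    shaped = np.fft.irfft(spectrum * rolloff, n=n_samples)
    return shaped / np.max(np.abs(shaped))  # normalize; scale down before driving a light
```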

Credit: Cornell University

A video explaining noise-coded illumination.

If an adversary cuts out footage from an interview or a political speech, a forensic analyst with the secret code can see the gap. And when adversaries add or replace objects, the altered parts generally appear black in the recovered code videos.

The team successfully used up to three separate codes for different lights in the same scene. With each additional code, the pattern becomes more complex and harder to fake.

“Even if an adversary knows the technique is being used and somehow figures out the codes, their job is still much harder,” Davis said. “Instead of faking the light for just one video, they have to fake each code video separately, and all of those fakes have to agree with each other.”

They also confirmed that the approach works in some outdoor settings and on people with a range of skin tones.

However, Davis and Michael warn that fighting misinformation is an arms race and that adversaries will continue to devise new ways to deceive.

“This is an ongoing and important issue,” Davis said. “It won't go away. In fact, it's just getting harder.”

Zekun Hao, Ph.D. '23, and Serge Belongie of the University of Copenhagen are co-authors of the study.

The study received partial support from a National Defense Science and Engineering Graduate Fellowship and the Pioneer Centre for AI.

Patricia Waldron is a writer for the Cornell Ann S. Bowers College of Computing and Information Science.


