
Deepfake video of Australian Prime Minister Anthony Albanese on smartphone
Australia's Associated Press/Aramie
A universal deepfake detector has achieved state-of-the-art accuracy at spotting multiple types of videos that have been manipulated or fully generated by artificial intelligence. The technology could help flag non-consensual pornography, deepfake scams, and election misinformation videos created with AI.
The widespread availability of inexpensive AI deepfake-creation tools has fuelled an uncontrolled spread of synthetic videos online. Many depict women, including celebrities and female students, in non-consensual pornography. Deepfakes have also been used to influence elections and to carry out financial fraud targeting both ordinary consumers and company executives.
However, most AI models trained to detect synthetic videos focus on the face. That makes them effective mainly against one specific type of deepfake, in which a real person's face is swapped into an existing video. "You need one model that can detect face manipulation and another that can detect background manipulation or fully generated video," says Rohit Kundu at the University of California, Riverside. "Our model addresses exactly that concern, on the assumption that the entire video could be synthetically produced."
Kundu and his colleagues trained an AI-powered universal detector that monitors multiple background elements in a video as well as people's faces. It can pick up subtle signs of the spatial and temporal inconsistencies that are typical of deepfakes. As a result, it can detect people artificially inserted into face-swap videos, inconsistent lighting conditions, inconsistent background details in AI-generated video, and even signs of AI manipulation in synthetic videos that contain no human faces. The detector also flags realistic video-game scenes, such as those from Grand Theft Auto V, which are not necessarily AI-generated.
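The article does not describe how the detector works internally, but detectors of this kind typically score a clip for spatial and temporal inconsistency. As a loose illustration only, and not the researchers' method, the sketch below flags a clip whose frame-to-frame changes are statistically abnormal; the function name, the toy data, and the thresholds are all invented for the example.

```python
import numpy as np

def temporal_inconsistency_score(frames: np.ndarray) -> float:
    """Score a clip by how abruptly consecutive frames change.

    frames: array of shape (T, H, W), grayscale pixel values in [0, 1].
    Returns the largest frame-to-frame mean absolute difference,
    normalised by the clip's median difference. A smoothly varying
    clip scores near 1; a clip with a spliced or regenerated frame
    scores much higher.
    """
    # Mean absolute change between each pair of consecutive frames.
    diffs = np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))
    # Normalise the worst jump by the typical jump (epsilon avoids /0).
    return float(diffs.max() / (np.median(diffs) + 1e-8))

# Toy demonstration with synthetic data, not real video.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 32)
smooth = t[:, None, None] + 0.01 * rng.standard_normal((32, 8, 8))  # gradual drift
spliced = smooth.copy()
spliced[16] += 0.5  # one frame replaced, as a crude stand-in for manipulation

print(temporal_inconsistency_score(smooth))   # near 1: consistent motion
print(temporal_inconsistency_score(spliced))  # far larger: abrupt discontinuity
```

A real system would learn these cues from data rather than use a hand-set ratio, and would combine them with spatial signals such as lighting and background consistency, but the basic idea of measuring deviations from expected frame-to-frame behaviour is the same.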
"Most existing methods focus on AI-generated face videos, such as face swaps, lip-sync videos, and facial reenactments that animate a face from a single image," says Siwei Lyu at the University at Buffalo in New York. "This method has a wider range of applicability."
The universal detector achieved 95 to 99 per cent accuracy in identifying four test sets of videos containing face-manipulated deepfakes, better than any other published method for detecting this type of deepfake. On fully synthetic videos, it also outperformed every other detector evaluated so far. The researchers presented the work at the 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition in Nashville, Tennessee, on 15 June.
Several Google researchers were also involved in developing the new detector. Google did not respond to questions about whether the detection method could help find deepfakes on platforms such as YouTube. However, the company is among those backing watermarking tools that make it easier to identify content generated by AI systems.
The universal detector could also be improved in the future. For example, it would be useful if it could detect deepfakes deployed during live video-conference calls, a trick that some scammers have already begun to use.
"How do you know that the person on the other side is real, or a deepfake-generated video, even when the video travels across a network and is affected by network characteristics such as the available bandwidth?" says Amit Roy-Chowdhury at the University of California, Riverside. "That's a different direction we're looking at in our lab."
