Breakthrough new tool DIVID can detect fake AI-generated videos

A new tool is gaining attention for its groundbreaking ability to detect videos created using generative AI.

This news is making waves because scams and misrepresentation are common in AI-generated content, and even tech experts have a hard time determining what's real and what's fake, especially when it comes to video.

Earlier this year, an employee of a multinational corporation mistakenly transferred millions of dollars to fraudsters, acting on what they believed was an instruction from the company's CFO. In reality, the bogus request came from a threat actor posing as a key member of the organization.

This elaborate scheme resulted in losses of $25 million because employees, relying on the detection measures then in place, were unable to identify the requests as fake.

With this incident in mind, researchers at Columbia University have unveiled a new, attention-grabbing tool called “DIVID” that could prevent such scams by detecting fake AI-generated content.

The Diffusion-generated Video Detector works by analyzing the video itself, without needing access to the internals of the generative model that produced it.

The tool improves on previous approaches to generative AI video detection, offering a level of identification that was not possible before.

Earlier detection methods targeted videos created with older models such as GANs, which pair two neural networks: one that generates fake content and another that evaluates it, trying to distinguish real from fake.

Given the right feedback, both of these networks are designed to improve, ultimately producing highly realistic video. Currently, most AI detection tools look for the telltale signs this process leaves behind, such as strange pixels, unnatural motion, and frame mismatches that don't appear in real footage.
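To make that two-network setup concrete, here is a minimal, hypothetical GAN sketch in PyTorch. The layer sizes, optimizer settings, and single training step are illustrative assumptions, not the architecture of any real video generator, but the generator-versus-discriminator feedback loop is the one described above.

```python
# Minimal GAN sketch: a generator that fakes samples and a discriminator
# that judges them. All sizes and hyperparameters are illustrative.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps random noise to a fake sample (a flat vector standing in for an image)."""
    def __init__(self, noise_dim=64, out_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 256), nn.ReLU(),
            nn.Linear(256, out_dim), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Scores a sample: estimated probability that it is real, not generated."""
    def __init__(self, in_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

gen, disc = Generator(), Discriminator()
opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

real = torch.rand(32, 784)   # stand-in for a batch of real samples
z = torch.randn(32, 64)      # random noise fed to the generator

# Discriminator step: learn to label real samples 1 and fakes 0.
fake = gen(z).detach()
d_loss = loss_fn(disc(real), torch.ones(32, 1)) + \
         loss_fn(disc(fake), torch.zeros(32, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: learn to make the discriminator label fakes as real.
g_loss = loss_fn(disc(gen(z)), torch.ones(32, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Each network's loss is the other's training signal, which is exactly the feedback loop that pushes GAN output toward realism.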

Newer AI video tools, such as OpenAI's Sora, use diffusion models instead. A diffusion model creates images by gradually refining random noise into crisp pictures that look highly realistic.

For video, the model denoises all frames together, which produces smoother transitions, better quality, and more realistic results. The output is more refined, making it even harder for experts to tell the real thing from a fake.
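As a rough illustration of how diffusion turns noise into an image, here is a toy sketch of the reverse (denoising) process. The step count and noise schedule are assumed values, and the denoising network is a placeholder; a real system would use a trained model, and a video model would denoise a stack of frames jointly.

```python
# Toy sketch of the diffusion idea: start from pure noise and repeatedly
# remove predicted noise until a clean image remains (DDPM-style update).
import torch

def predict_noise(x_t, t):
    # Placeholder for a trained denoising network eps_theta(x_t, t).
    # A real model would be, e.g., a U-Net conditioned on the timestep t.
    return torch.zeros_like(x_t)

T = 50                                 # number of reverse steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)  # linear noise schedule (a common choice)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

x = torch.randn(1, 3, 64, 64)          # start from pure Gaussian noise
for t in reversed(range(T)):
    eps = predict_noise(x, t)
    # Remove the noise component the model predicts for this step.
    coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
    x = (x - coef * eps) / torch.sqrt(alphas[t])
    if t > 0:
        # Re-inject a small amount of noise, as the sampler prescribes.
        x = x + torch.sqrt(betas[t]) * torch.randn_like(x)

# For video, x would instead hold a stack of frames, e.g. shape
# (1, num_frames, 3, 64, 64), denoised together for temporal consistency.
```

Denoising the whole frame stack at once is what gives diffusion video its smooth, consistent motion compared with generating frames independently.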

We've already heard about the release of Raidar from the same team of researchers, a tool that detects AI-generated text by directly analyzing the content, without needing access to LLMs like OpenAI's GPT-4.

This simply involves prompting an LLM to rewrite the text in question and counting how many edits it makes. If the LLM makes a large number of modifications, the text was likely written by a human; if it makes only a few, the text was more likely machine-generated.
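Based on that description, a minimal sketch of the rewrite-and-count idea might look like the following. The rewrite step is stubbed out (in practice it would be a single call to any chat model), and the threshold is an illustrative assumption, not a value from the researchers' work.

```python
# Hedged sketch of the rewrite-and-count approach described above.
import difflib

def rewrite_with_llm(text: str) -> str:
    # Placeholder: ask an LLM to "rewrite this text, keeping the meaning".
    # Stubbed to return the input unchanged so the sketch stays runnable.
    return text

def edit_ratio(original: str, rewritten: str) -> float:
    """Fraction of words the LLM changed when asked to rewrite the text."""
    sm = difflib.SequenceMatcher(a=original.split(), b=rewritten.split())
    return 1.0 - sm.ratio()

def classify(text: str, threshold: float = 0.15) -> str:
    # Few edits: the LLM already "agrees" with the wording, hinting an LLM
    # wrote it. Many edits: more likely human-written.
    ratio = edit_ratio(text, rewrite_with_llm(text))
    return "likely AI-generated" if ratio < threshold else "likely human-written"

print(classify("The quick brown fox jumps over the lazy dog."))
```

Note that with the stub in place every input classifies as AI-generated, since the "rewrite" changes nothing; swapping in a real LLM call is what makes the edit ratio meaningful.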

Its expansion into the world of video is big news, and what we know so far suggests it will help us distinguish between real videos and fakes produced by today's viral models.

It's a concept that has caught the attention of many in the tech industry, as the researchers have released open-source datasets and code alongside their latest approach. The work was presented at this year's Computer Vision and Pattern Recognition (CVPR) conference in Seattle a few days ago, and since then it's been gaining attention for good reason.
