Google's SynthID is the latest tool for catching AI-made content. What is AI 'watermarking'? Does it work?



Last month, Google announced SynthID Detector, a new tool for identifying AI-generated content. Google claims it can detect AI-generated material in text, images, video or audio.

However, there are caveats. One is that the tool is currently only available to “early testers” via a waitlist.

The main catch is that SynthID primarily works with content generated using a Google AI service: Gemini for text, Veo for video, Imagen for images, or Lyria for audio.

If you try to use Google's detector on something generated with, say, ChatGPT, it won't work.

That's because, strictly speaking, the tool doesn't detect AI-generated content as such, or distinguish it from other kinds of content. Instead, it detects the presence of a “watermark” that Google's AI products (and some others) embed in their output using SynthID.

A watermark is a machine-readable element embedded in an image, video, audio clip or piece of text. Digital watermarks are used to ensure that information about a piece of content's origin and authorship travels with it. They have been used to assert authorship of creative works and to address the challenge of misinformation in the media.

https://www.youtube.com/watch?v=9btdaocfimy

SynthID embeds a watermark in the output of an AI model. The watermark is imperceptible to readers and audiences, but other tools can use it to identify content that was created or edited by an AI model with SynthID built in.
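How can a watermark be invisible to a reader but visible to a detector? For text, published generation-time watermarking schemes work by pseudorandomly biasing which words the model picks, so that a detector which knows the seeding scheme can later measure the bias. The sketch below illustrates that idea in Python. It is a simplified toy, not Google's actual SynthID algorithm, and the function names and half-the-vocabulary split are my own choices for illustration:

```python
import hashlib
import random

# Toy generation-time text watermark -- a simplified illustration of the
# published "green list" idea, NOT Google's actual SynthID algorithm.

def green_list(prev_word: str, vocab: list[str]) -> set[str]:
    """Pseudorandomly mark half the vocabulary as 'green', seeded by the
    previous word, so generator and detector agree without communicating."""
    seed = int(hashlib.sha256(prev_word.encode()).hexdigest(), 16)
    return set(random.Random(seed).sample(vocab, k=len(vocab) // 2))

def green_fraction(words: list[str], vocab: list[str]) -> float:
    """Detection statistic: the share of words that fall in the green list
    chosen by their predecessor. Unwatermarked text hovers near 0.5; text
    from a generator that nudges its choices toward green scores well above."""
    hits = sum(w in green_list(prev, vocab) for prev, w in zip(words, words[1:]))
    return hits / max(len(words) - 1, 1)
```

A watermarking generator softly prefers “green” words at each step, which barely changes how the text reads, while pushing the detection statistic far enough above 0.5 for a statistical test to flag it.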

SynthID is the latest of many such efforts. But how effective are they?

There is no unified AI detection system

Several AI companies, including Meta, have developed their own watermarking tools and detectors alongside SynthID. However, these are “model-specific” solutions rather than universal ones.

This means users have to juggle multiple tools to check content. Despite researchers pursuing unified systems, and major players such as Google pushing for their tools to be adopted by others, the landscape remains fragmented.

A parallel effort focuses on metadata: encoded information about a media file's origin, authorship and editing history. For example, the Content Credentials inspect tool lets users check the edit history attached to a piece of media.

However, metadata is easily stripped away when content is uploaded to social media or converted to another file format. This is especially problematic when someone is deliberately trying to obscure a piece of content's origin and authorship.
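You can see how fragile metadata is with a few lines of Python. This sketch uses the Pillow imaging library; the file names are hypothetical, and it mirrors what happens implicitly whenever a platform re-encodes an upload:

```python
from PIL import Image  # pip install Pillow

# Open a photo carrying EXIF metadata (camera model, timestamp, GPS...).
original = Image.open("claim_photo.jpg")            # hypothetical file
print(len(original.getexif()))                      # e.g. 20+ metadata tags

# Re-saving just the pixels writes a new file with no EXIF attached,
# unless the metadata is explicitly passed along -- provenance gone.
original.save("reencoded.jpg")
print(len(Image.open("reencoded.jpg").getexif()))   # 0
```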

Other detectors rely on forensic cues: visual inconsistencies or lighting anomalies, for example. Some of these tools are automated, but many depend on human judgement and common-sense checks, such as counting the number of fingers in an AI-generated image. These methods may become obsolete as AI models improve.

An AI-generated image of a woman waving, with six fingers on one hand.
Logical inconsistencies such as extra fingers are among the visual “tells” of today's AI-generated images.
TJ Thomson, CC BY-NC

How effective are AI detection tools?

Overall, the effectiveness of AI detection tools varies dramatically. Some work quite well when content is entirely AI-generated, such as an essay written from scratch by a chatbot.

When AI is used to edit or transform human-created content, the picture becomes much murkier. In such cases, AI detectors can get it badly wrong: they may fail to detect AI involvement, or they may flag human-made content as AI-generated.

AI detection tools also aren't good at explaining how they reach their decisions. When used to detect plagiarism in university assessment, they have been called an “ethical minefield” and are known to discriminate against non-native English speakers.
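The scale of the problem is easy to underestimate. As a back-of-the-envelope illustration (every number below is an assumption chosen for the arithmetic, not a measured rate for any real detector), even a low false-positive rate produces many false accusations when most work is genuinely human-written:

```python
# Illustrative base-rate arithmetic; all rates here are assumptions,
# not measured figures for any real AI detector.
essays = 1000            # submissions in a large course
ai_share = 0.05          # suppose 5% were actually AI-generated
fpr, fnr = 0.02, 0.15    # assumed false-positive / false-negative rates

false_accusations = essays * (1 - ai_share) * fpr   # 19.0 honest essays flagged
true_catches = essays * ai_share * (1 - fnr)        # 42.5 AI essays caught
precision = true_catches / (true_catches + false_accusations)
print(f"{precision:.0%} of flags are correct")      # ~69%: nearly 1 in 3 flags is wrong
```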



Read more: Can you spot an AI impostor? We found AI faces can look more realistic than real people's


When are AI detection tools useful?

There is a wide variety of use cases for AI detection tools. Take an insurance claim, for example. Knowing whether the image a client shares actually depicts what they say it does can help an insurer decide how to respond.

Journalists and fact-checkers can draw on AI detectors, alongside other approaches, when trying to determine whether potentially newsworthy content should be shared.

Employers and job seekers alike need to assess whether the person on the other side of the recruitment process is real or an AI fake.

Dating app users need to know whether the profile of someone they meet online represents a real romantic prospect or an AI avatar fronting a romance scam.

For emergency responders deciding whether to send help to a call, knowing with confidence whether the caller is a human or an AI can save resources and lives.

Where to from here?

As these examples show, verification challenges now unfold in real time, and static tools such as watermarks are unlikely to be enough. AI detectors that work in real time on audio and video are a pressing area of development.

Whatever the scenario, judgements about authenticity are unlikely ever to be fully delegated to a single tool.

Understanding how these tools work, including their limitations, is an important first step. Triangulating their output with other information and your own contextual knowledge remains essential.


