How tech companies are approaching AI-generated image detection




New York (CNN)

Last month, an image purporting to show an explosion near the Pentagon briefly went viral on social media, sparking panic and a brief dip in the stock market. The image bore all the hallmarks of being AI-generated and was later debunked by authorities.

But according to Truepic CEO Jeffrey McGregor, the incident is “just the tip of the iceberg of what’s to come.” As he put it: “More and more AI-generated content is going to start appearing on social media, and we’re not ready for it.”

McGregor’s company is working on a solution to this problem. Truepic offers technology that it says authenticates media at the point of creation through its Truepic Lens. The camera application captures data such as the date, time, location, and device used to make the image, and applies a digital signature to verify whether the image is organic or whether it has been manipulated or generated by AI.
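To illustrate the general idea, here is a minimal sketch of point-of-capture signing, assuming a simple JSON metadata bundle and an Ed25519 key; the field names and key handling are illustrative assumptions, not Truepic’s actual implementation:

```python
# Minimal sketch of point-of-capture signing (illustrative, not Truepic's design).
import json
import hashlib
from datetime import datetime, timezone
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_capture(image_bytes: bytes, device_id: str, lat: float, lon: float,
                 private_key: Ed25519PrivateKey) -> dict:
    """Bundle capture metadata with a hash of the pixels and sign both."""
    metadata = {
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "device": device_id,
        "location": {"lat": lat, "lon": lon},
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(metadata, sort_keys=True).encode()
    return {"metadata": metadata, "signature": private_key.sign(payload).hex()}

# Any later edit to the pixels changes image_sha256, so the signature no
# longer verifies against the recorded metadata.
key = Ed25519PrivateKey.generate()
record = sign_capture(b"...raw image bytes...", "camera-001", 40.71, -74.01, key)
key.public_key().verify(bytes.fromhex(record["signature"]),
                        json.dumps(record["metadata"], sort_keys=True).encode())
```

Because only the pixels and metadata recorded at capture time are signed, any subsequent manipulation breaks verification; that is the property a point-of-creation scheme relies on.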

Backed by Microsoft, Truepic was founded in 2015, several years before the launch of AI-powered image generation tools such as Dall-E and Midjourney. McGregor said the company now sees interest from “people making decisions based on photos,” from NGOs to media companies to insurance firms that want to confirm claims are legitimate.

“When anything can be faked, everything can be faked,” McGregor said. “Given that generative AI has reached a tipping point in quality and accessibility, we no longer know what reality is online.”

While technology companies like Truepic have been working to combat online misinformation for years, the emergence of new AI tools that can quickly generate compelling images and text in response to user prompts has added new urgency to those efforts. In recent months, an AI-generated image of Pope Francis in a puffer jacket went viral, and AI-generated images of former President Donald Trump being arrested were widely shared shortly before he was indicted.

Some lawmakers are now calling on tech companies to address the issue. European Commission Vice President Věra Jourová said Monday that signatories to the EU Code of Practice on Disinformation (a list that includes Google, Meta, Microsoft and TikTok) should “put in place technology to recognize such content and clearly label it to users.”

A growing number of startups and big tech companies, including some that are deploying generative AI in their own products, are trying to roll out standards and solutions to help people determine whether an image or video was made with AI. Some of these companies bear names like Reality Defender, which speaks to the stakes of the effort: protecting our very sense of what is real and what is not.

But AI technology is advancing faster than humans can keep up, and it is unclear whether these technical solutions can fully address the problem. Even OpenAI, the company behind Dall-E and ChatGPT, acknowledged earlier this year that its own effort to detect AI-generated writing, rather than images, is “imperfect” and warned that it “should be taken with a grain of salt.”

“This is mitigation, not eradication,” Hany Farid, a digital forensics expert and professor at the University of California, Berkeley, told CNN. “I don’t think it’s a lost cause, but I think there’s a lot of work to be done.”

Farid said the hope is to get to a point where “a teenager in their parents’ basement can’t create an image that sways an election or moves the market by $5 trillion.”

Companies are taking two broad approaches to addressing this issue.

One tactic relies on developing programs that identify images as AI-generated after they have been produced and shared online. The other focuses on marking an image as real or AI-generated at its inception, using a kind of digital signature.

Reality Defender and Hive Moderation are working on the former. Their platforms let users upload existing images to be scanned and receive an instant breakdown, including a percentage likelihood that the image is real or AI-generated, based on models trained on massive amounts of data.
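For illustration, a client for a detection service of this kind might look like the sketch below; the endpoint URL, field names, and response shape are hypothetical stand-ins and do not reflect Reality Defender’s or Hive’s actual APIs:

```python
# Hypothetical client for an AI-image-detection service (illustrative only).
import requests

def check_image(path: str, api_key: str) -> float:
    """Upload an image and return the service's estimated probability
    (0.0 to 1.0) that it was AI-generated."""
    with open(path, "rb") as f:
        resp = requests.post(
            "https://api.example-detector.com/v1/scan",  # placeholder URL
            headers={"Authorization": f"Bearer {api_key}"},
            files={"image": f},
            timeout=30,
        )
    resp.raise_for_status()
    return resp.json()["ai_generated_probability"]  # assumed response field

score = check_image("suspect_photo.jpg", "YOUR_API_KEY")
print(f"Likelihood AI-generated: {score:.0%}")
```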

Reality Defender, which launched out of the competitive Silicon Valley accelerator Y Combinator before “generative AI” became a buzzword, says it uses “proprietary deepfake and generative content fingerprinting technology” to identify AI-generated video, audio, and images.

In one example provided by the company, Reality Defender flagged a deepfake image of Tom Cruise as 53% “suspicious,” telling the user it had found evidence of face distortion, “a common artifact of image manipulation.”

An example provided by Reality Defender of AI-generated content labeled by its platform. (Reality Defender)

If these issues become a routine concern for businesses and individuals, defending reality could prove a lucrative business. Both services offer limited free demos along with paid tiers. Hive Moderation says it charges $1.50 per 1,000 images, with discounted “annual contract deals.” Reality Defender said its pricing can vary based on a number of factors, including whether a client requires “bespoke elements that require the expertise and assistance of our team.”

“The risk is doubling every month,” Ben Coleman, CEO of Reality Defender, told CNN. “Anyone can do this. You don’t need a PhD in computer science. You don’t need to spin up servers on Amazon. You don’t need to know how to write ransomware. Anyone can do this just by googling ‘fake face generator.’”

Hive Moderation CEO Kevin Guo described this as an “arms race.”

“We have to keep looking at all the new ways people are creating this content, understand them, and add them to our dataset so we can classify what comes next,” Guo told CNN. “AI-generated content is only a small percentage of what’s out there right now, but I think that will change over the next few years.”

On the preventative side, some big tech companies are working to incorporate a kind of watermark into images to certify, at the moment of creation, whether media is real or AI-generated. So far, the effort has largely been driven by the Coalition for Content Provenance and Authenticity (C2PA).

C2PA was founded in 2021 to create technical standards for authenticating sources and histories of digital media. It combines the work of the Content Authenticity Initiative (CAI), led by Adobe, and Project Origin, an initiative led by Microsoft and the BBC focused on combating misinformation in digital news. Other companies involved in C2PA include Truepic, Intel and Sony.

Drawing on the C2PA guidelines, the CAI is building open-source tools that let companies attach Content Credentials, metadata containing information about an image, to their files. According to the CAI’s website, this “allows creators to transparently share the details of how they created an image,” so that “an end user can access context around who, what, and how the photo was changed, and decide for themselves how authentic that image is.”
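To make the idea concrete, here is a toy sketch of a provenance manifest bound to an image’s pixels; the field names are assumptions chosen for illustration, and real C2PA Content Credentials use a standardized, cryptographically signed format embedded in the file itself:

```python
# Toy provenance manifest in the spirit of Content Credentials (illustrative only).
import json
import hashlib

def build_manifest(image_bytes: bytes, creator: str, tool: str,
                   actions: list[str]) -> dict:
    """Record who made the image, with what tool, and which edits were
    applied, bound to a hash of the pixels so the record cannot silently
    describe a different image."""
    return {
        "claim_generator": tool,   # e.g. the capture or editing application
        "author": creator,
        "actions": actions,        # e.g. ["created", "cropped", "color-adjusted"]
        "content_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }

manifest = build_manifest(b"...image bytes...",
                          creator="Jane Photographer",
                          tool="ExampleEditor 1.0",
                          actions=["created", "color-adjusted"])
print(json.dumps(manifest, indent=2))
```

In the real standard, each editing step appends to this history and the manifest as a whole is cryptographically signed, so viewers can trace an image’s chain of custody rather than trusting a single label.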

“There is no money-making revenue center for Adobe in this,” said Andy Parsons, senior director of the Content Authenticity Initiative at Adobe. “We believe this is a very important and fundamental countermeasure against misinformation and disinformation.”

Many companies have already integrated the C2PA standard and CAI tools into their applications. Adobe’s Firefly, an AI image generation tool recently added to Photoshop, follows the standard through its Content Credentials feature. Microsoft also announced that AI art created by Bing Image Creator and Microsoft Designer will be cryptographically signed in the coming months.

Other tech companies like Google seem to be pursuing strategies that draw a little from both approaches.

Google unveiled a tool called “About this image” in May, which lets users see when an image was first indexed by Google, where it may have first appeared, and where else it can be found online. The company also announced that every AI-generated image created with Google’s tools will carry markup in the original file to “give context” if the image turns up on another website or platform.

While technology companies are trying to address concerns about AI-generated images and the integrity of digital media, experts in the field stress that these businesses will ultimately need to work with one another, and with governments, to tackle the problem.

“We’re going to need the Twitters and Facebooks of the world to start taking these issues more seriously, and to stop promoting the fake stuff and start promoting the real stuff,” Farid said. “There’s a regulatory part that we’re not talking about. There’s an education part that we’re not talking about.”

Parsons agreed. “It is not a single company, a single government, or a single individual in academia that will make this possible,” he said. “We need everyone’s participation.”

But for now, tech companies continue to push more AI tools into the world.



