OpenAI joins C2PA to label Sora videos as AI-generated

AI Video & Visuals




One of several big announcements made by generative AI unicorn OpenAI today was that the company will join the steering committee of the Coalition for Content Provenance and Authenticity (C2PA), an industry association founded in February 2021 by Microsoft and Adobe (with Arm, the BBC, Intel, and Truepic as initial members).

Why? OpenAI says the goal is "to enable people to examine the tools used to create and edit different types of digital content," and in particular to help build "new technologies that can help" people identify content created by OpenAI's own tools.

In other words, OpenAI wants to work with other companies in the space, including rivals, to develop tools and technology for labeling images, videos, and other content generated by AI, so viewers can trace content back to its source and avoid confusing it with real-world footage and photos.

What is C2PA and what does it do?

The C2PA organization operates under the nonprofit Joint Development Foundation and is dedicated to "develop[ing] technical specifications for establishing content provenance and authenticity."


In the three and a half years since its launch, other big tech and AI companies have joined the C2PA steering committee, including Google, and the group has released a number of open-source technical standards that developers and companies can implement in their products to clarify where content generated by AI models and other tools comes from.

Among these standards is the "C2PA architecture," a model for "storing and accessing cryptographically verifiable information whose trustworthiness can be assessed based on a defined trust model."

The C2PA architecture has already been adopted by members of the organization to create "Content Credentials," a web-friendly watermark indicated by a small "CR" icon in the top-right corner of some images when the user hovers over them. Clicking or tapping the icon reveals detailed information about who created the content, what tools they used, and when.

Sample image of C2PA Content Credentials implemented on the web. Credit: Content Credentials

The C2PA architecture can also bake metadata (non-visual data that accompanies images, videos, and other multimedia files) into files as they are saved, making it visible to anyone who accesses them offline or on other devices. This is what OpenAI says it has been doing with images generated by its DALL-E 3 image-generation AI model since at least February of this year. (Meta has also begun labeling AI-generated images using the C2PA standard.)
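To illustrate the core idea of content-bound metadata, here is a minimal, hypothetical Python sketch (not the actual C2PA format, which uses cryptographically signed manifests embedded in the file): a provenance record that hashes the file's exact bytes, so any edit to the content breaks the match.

```python
import hashlib
import json


def build_provenance_record(content: bytes, generator: str, created_at: str) -> dict:
    """Simplified, illustrative provenance record. A SHA-256 hash binds the
    metadata to the exact bytes of the file; the real C2PA spec additionally
    signs the manifest and defines a trust model for verifying the signer."""
    return {
        "claim_generator": generator,  # e.g. the AI tool that produced the file
        "created_at": created_at,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }


def verify(content: bytes, record: dict) -> bool:
    # The record only matches if the content bytes are unchanged.
    return hashlib.sha256(content).hexdigest() == record["content_sha256"]


image_bytes = b"\x89PNG...fake image data..."  # stand-in for a real file
record = build_provenance_record(image_bytes, "example-image-generator/1.0", "2024-05-07")
print(json.dumps(record, indent=2))
print(verify(image_bytes, record))            # True: untouched file
print(verify(image_bytes + b"edit", record))  # False: modified file
```

This also shows the limitation the article notes later: the record travels with the file as metadata, so a screenshot or re-encode that discards the metadata discards the provenance too.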

In a C2PA press release today, Anna Makanju, OpenAI's vice president of global affairs, emphasized the importance of these efforts: "We look forward to contributing to this effort and believe it is an important part of building trust in what people see and hear online."

C2PA metadata is also being integrated into Sora, OpenAI's impressively realistic video-generating AI model, which is not yet publicly available but is being used by select trusted partners (including to create its first music video last week). The metadata will label generated video clips as products of AI when Sora is finally released to the public (no date was given).

Additionally, OpenAI has launched something called the DALL-E Detection Classifier Researcher Access Program.

The effort features a binary classifier designed to predict whether an image comes from OpenAI's DALL-E 3 model, and the company is seeking help from outside groups to test it.
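A binary classifier in this context simply maps an image to a probability that it came from a given model. OpenAI's actual classifier is not public, but the interface can be sketched with a toy logistic model over hypothetical image features (all names and numbers below are illustrative, not OpenAI's):

```python
import math


def sigmoid(x: float) -> float:
    """Squash a real-valued score into a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))


def generator_probability(features: list[float], weights: list[float], bias: float) -> float:
    """Toy binary classifier: a weighted sum of image features, squashed to a
    probability that the image came from the target generator."""
    z = bias + sum(w * f for w, f in zip(weights, features))
    return sigmoid(z)


# Hypothetical features (e.g. artifact statistics extracted from an image)
# and hypothetical learned weights.
features = [0.8, 0.1, 0.6]
weights = [2.0, -1.0, 1.5]
p = generator_probability(features, weights, bias=-0.5)
label = "likely generated" if p >= 0.5 else "likely not generated"
print(round(p, 2), label)
```

Outside testing matters here because the interesting failure modes of such a classifier (false positives on real photos, false negatives on edited generations) only show up on data the builder did not anticipate.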

Researchers interested in the program, which OpenAI says is open to "research institutions and research-oriented journalism nonprofit organizations," can submit applications until July 31. Decisions will be announced by August 31.

Societal Resilience Fund

Additionally, OpenAI, along with investor Microsoft, has launched a $2 million Societal Resilience Fund that will partner with external organizations including AARP, International IDEA, and the Partnership on AI to support AI education and understanding among older adults and less tech-savvy people.

The news comes amid reports of people apparently being fooled by posts featuring AI-generated images meant to resemble real photos, especially on social networks such as Meta's Facebook, even though many of the images, such as "Shrimp Jesus," are decidedly artistic and surreal.

AI-generated Shrimp Jesus found on Meta's Facebook.

The big question is: will these efforts meaningfully help stem the tide of AI disinformation? OpenAI is clearly tackling both the production and education sides, so that AI-generated content is not only labeled but people also learn how to recognize and look for those labels.

But at a time when open-source AI models are proliferating and it is still easy to screenshot or edit images to strip out metadata (something C2PA is trying to make difficult or impossible), the challenge of reliably labeling and identifying AI content is likely to remain formidable for the foreseeable future. Still, the company wants to be seen as a good, socially responsible actor, and taking these steps makes sense both from a public-relations perspective and, ideally, from a good-corporate-citizenship perspective.



