Several AI leaders have teamed up to create what they call the first transparent deepfake. It looks convincingly real, but is artificially created and labeled as such using new standards.
Why it matters: Many in the field say that concerns about synthetic media are no longer hypothetical, and that now is the time to start clearly labeling how content was created or modified.
Driving the news: A new synthetic video from Revel.ai features a convincing mock-up of AI expert Nina Schick, created with her permission.
- The video is labeled using Truepic's technology to identify it as synthetically produced by Revel.ai.
- The labeling follows the latest version of standards from the Adobe-led Content Authenticity Initiative, which are designed to show how an image or video was created. Adobe uses a similar approach to identify content created with its new Firefly generative AI tools.
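To make the labeling idea concrete, here is a minimal sketch of what a provenance manifest in the style of these standards can look like, and how a viewer might check it. The field names loosely follow the C2PA specification's assertion model (the `c2pa.actions` assertion and the IPTC "trainedAlgorithmicMedia" source type are real vocabulary); the overall payload and the `is_synthetic` helper are illustrative, not an actual Truepic or Adobe format.

```python
import json

# Simplified, illustrative C2PA-style provenance manifest for an
# AI-generated video. The claim_generator value is a placeholder.
manifest = {
    "claim_generator": "revel.ai/1.0",
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        "action": "c2pa.created",
                        # IPTC term for fully AI-generated media
                        "digitalSourceType":
                            "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
                    }
                ]
            },
        }
    ],
}

def is_synthetic(manifest: dict) -> bool:
    """Return True if any creation action declares an algorithmic source."""
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") != "c2pa.actions":
            continue
        for action in assertion["data"].get("actions", []):
            if "trainedAlgorithmicMedia" in action.get("digitalSourceType", ""):
                return True
    return False

print(is_synthetic(manifest))  # True: the manifest declares AI generation
```

In a real pipeline the manifest is cryptographically signed and embedded in the media file, so a tampered or stripped label is detectable rather than silently trusted.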
What they're saying: Truepic CEO Jeffrey McGregor told Axios that fear of regulation is one motivation, but that momentum is building around the need to label synthetic content.
- Last week’s letter calling for a moratorium on some AI development also highlighted the urgent need for content verification systems given the widespread adoption of existing AI tools.
- “I think even in the last two weeks, the conversation has changed,” McGregor said. “People are starting to see this as best practice for implementing responsible AI.”
Yes, but: Labeling synthetic content does only so much good if most legitimately captured video goes unlabeled.
- Ideally, consumers would be able to confirm that genuine content is real, not just that some fakes are fake.
McGregor said labeling synthetic content can help educate consumers and incentivize other good actors, but that minimizing the impact of deepfakes will require the majority of content to be transparently labeled.
- Truepic is already in the business of helping media outlets and other organizations build secure pipelines that can prove images and videos were legitimately captured and document any edits made along the way.
- McGregor said the company works with more than 170 organizations, including a pilot with Microsoft to authenticate images from Ukraine.
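The capture-authentication idea described above can be sketched in a few lines: sign a hash of the image bytes at capture time, then verify later that the bytes are unchanged. Real pipelines such as Truepic's rely on hardware-backed public-key signatures; the HMAC with a shared secret below is a simplified stand-in, and the key and image bytes are hypothetical.

```python
import hashlib
import hmac

SECRET = b"capture-device-key"  # hypothetical device key for illustration

def sign_capture(image_bytes: bytes) -> str:
    """Produce a signature over the image's SHA-256 digest at capture time."""
    digest = hashlib.sha256(image_bytes).digest()
    return hmac.new(SECRET, digest, hashlib.sha256).hexdigest()

def verify_capture(image_bytes: bytes, signature: str) -> bool:
    """Check that the bytes still match the signature made at capture time."""
    return hmac.compare_digest(sign_capture(image_bytes), signature)

original = b"\x89PNG...raw image bytes..."  # placeholder image data
sig = sign_capture(original)
print(verify_capture(original, sig))            # True: untouched
print(verify_capture(original + b"edit", sig))  # False: modified after capture
```

The design point is that authenticity attaches to the pixels at the moment of capture, so any later edit invalidates the signature unless it is re-recorded through the same trusted pipeline.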
The bottom line: The age of deepfakes has arrived, and video whose origin cannot be proven should be viewed with suspicion. "You shouldn't trust digital video at this point," McGregor said. "You really shouldn't."