- AI Studio Creates World’s First Cryptographically Signed Deepfake
- Its tamper-proof seal declares that the video contains AI-generated content
- It is hoped this will clear up confusion about the origin of online videos.
For the past 30 years or so, kids have been told not to believe everything they find online, but now we may need to extend this lesson to adults.
This is because we are in the midst of the so-called “deepfake” phenomenon, in which artificial intelligence (AI) technology is used to manipulate video and audio in a way that mimics real life.
To demonstrate transparency, AI studio Revel.ai has released the world’s first ‘verified’ deepfake video.
It appears to show professional AI adviser Nina Schick warning that the lines between reality and fiction are blurring.
Of course, it is not actually her: the video has been cryptographically signed by digital certification firm Truepic, declaring that it contains AI-generated content.
“Some say the truth is a reflection of our reality,” the avatar says slowly and clearly. “We are used to defining it through our own senses.
“But what if our reality is changing? What if we can no longer rely on our senses to judge the authenticity of what we see and hear?
“We are in the early days of artificial intelligence, and the lines between reality and fiction are already blurring.
“In a world where shadows can pass for the real thing, seeing things for what they are sometimes requires a radical shift in perspective.”
The video ends with the message “This deepfake was created by Revel.ai with Nina Schick’s consent and is cryptographically signed by Truepic.”
A deepfake is a form of AI that uses “deep learning” to manipulate audio, images, or video, creating realistic but fake media content.
The term was coined in 2017 when a Reddit user posted a manipulated pornographic video on a forum.
The videos superimposed the faces of celebrities such as Gal Gadot, Taylor Swift, and Scarlett Johansson onto the bodies of adult film performers without their consent.
Another notorious example of a deepfake, or “cheapfake”, was a video that went viral on Russian social media last year, crudely impersonating Volodymyr Zelensky and appearing to show him surrendering to Russia.
The clip showed the Ukrainian president speaking from a podium, calling on his military to lay down their weapons and acquiesce to Putin’s invading forces.
Savvy internet users were quick to flag the discrepancies: a mismatch between the colour of Zelensky’s neck and face, an odd accent, and pixelation around his head.
Despite the entertainment value of deepfakes, some experts warn of the dangers they pose.
Concerns have been raised in the past about how the technology has been used to generate child sexual abuse videos, revenge porn, and political hoaxes.
In November, the UK government’s Online Safety Bill was amended to make it illegal to use deepfake technology to create pornographic images or videos of people without their consent.
Dr. Tim Stevens, director of the Cyber Security Research Group at King’s College London, said deepfake AI could undermine democratic institutions and national security.
He argued that the widespread availability of these tools could be misused by states such as Russia to “troll” target populations in pursuit of foreign policy objectives and to undermine those countries’ national security.
Earlier this month, an AI news presenter was unveiled by a Chinese state-run newspaper.
The avatar could only answer preset questions, and the responses she gave heavily promoted the lines of the Central Committee of the Chinese Communist Party (CCP).
“AI and deepfakes have the potential to impact national security,” Stevens said.
“It’s not about high-level national defense or interstate warfare, it’s about the general undermining of trust in democratic institutions and the media.
“They can be exploited by authoritarian regimes like Russia to reduce the level of trust in those institutions and organizations.”
With the rise of freely available text-to-image and text-to-video AI tools such as DALL-E and Meta’s Make-A-Video, manipulated media is becoming more and more prevalent.
In fact, it has been predicted that as much as 90 per cent of online content will be generated or created using AI by 2025.
For example, late last month, a deepfake photo of Pope Francis wearing a giant white puffer jacket went viral, leading thousands to believe it was real.
Social media users also flagged images, believed to be AI-generated, of a cat with reptilian black-and-yellow markings that was falsely declared a newly discovered species.
Schick’s new video carries a tamper-proof signature that declares it AI-generated, identifies its creator, and timestamps when it was made.
She told MailOnline:
“It’s not about telling people whether this is right or wrong; it’s about showing how it was made – whether by AI or not – so that you can make your own choices.
“By releasing this, we want to make sure that people who may be completely overwhelmed, concerned or even scared by the pace of change and the acceleration of AI-generated content are aware of some of the risks to information integrity, and to show a solution that can mitigate them.
“Open standards for content authenticity already exist, and our hope is that by signing content we can encourage the platforms and generative AI companies that have not yet adopted them to do so.
“I think AI will become a core part of the production process of almost all digital information. Without a way to authenticate that information, whether it is AI-generated or not, it will be very difficult to navigate the digital information ecosystem.
“Consumers are often unaware that they have a right to understand where the information they consume comes from. Hopefully this campaign shows that such transparency is possible, and that it is something consumers are entitled to demand.”
Signature generation technology follows a new standard developed by the Coalition for Content Provenance and Authenticity (C2PA).
It’s an industry association with members such as Adobe, Microsoft, and the BBC, working to address the misleading information that is prevalent online.
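The idea behind this kind of provenance seal can be illustrated with a simplified sketch. This is not the actual C2PA format – real C2PA manifests are embedded in the media file and signed with asymmetric keys backed by X.509 certificates – but it shows the core mechanism: a manifest recording the creator, a timestamp, and an AI-generated declaration is bound to a hash of the exact video bytes and then signed, so any edit to either the video or the claims breaks verification. Here a keyed HMAC stands in for the real digital signature, and all names and keys are illustrative.

```python
import hashlib
import hmac
import json
import time

# Stand-in for the certifier's private signing key (illustrative only)
SIGNING_KEY = b"demo-secret-key"

def sign_manifest(video_bytes: bytes, creator: str) -> dict:
    """Build a provenance manifest for the content and attach a tamper-evident signature."""
    manifest = {
        "content_hash": hashlib.sha256(video_bytes).hexdigest(),  # binds the seal to these exact bytes
        "creator": creator,                                       # who made the content
        "ai_generated": True,                                     # the explicit AI-content declaration
        "timestamp": int(time.time()),                            # when the seal was applied
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(video_bytes: bytes, manifest: dict) -> bool:
    """Check that the claims were not altered and that the video still matches the recorded hash."""
    claims = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claims["content_hash"] == hashlib.sha256(video_bytes).hexdigest())

video = b"...fake video bytes..."
manifest = sign_manifest(video, "Revel.ai")
print(verify_manifest(video, manifest))         # untouched content: passes
print(verify_manifest(video + b"x", manifest))  # any edit to the bytes: fails
```

Because the signature covers the content hash together with the claims, neither the video nor the “AI-generated” label can be changed after the fact without the verification failing – which is what makes the seal “tamper-proof” in the sense the article describes.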
Schick, Truepic and Revel.ai say their video shows that digital signatures can increase transparency regarding AI-generated content.
They hope it will help eliminate confusion about where a video comes from and make the internet a safer place.
Bob de Jong, Creative Director of Revel.ai, said:
“The power of AI and its speed of development is unlike anything the world has ever seen.
“All of us, content creators included, need to be able to design a world of content creation that is ethical, certified and transparent, so that society can accept and enjoy it, and so that AI can continue to be used without harm.”