World’s first ‘certified’ deepfake warns viewers not to trust everything they see online

AI Video & Visuals


  • AI Studio Creates World’s First Cryptographically Signed Deepfake
  • Its tamper-proof seal declares that the video contains AI-generated content
  • It is hoped this will clear up confusion about the origin of online video



For the past 30 years or so, kids have been told not to believe everything they find online, but now we may need to extend this lesson to adults.

This is because we are in the midst of the so-called “deepfake” phenomenon, in which artificial intelligence (AI) technology is used to manipulate video and audio in ways that mimic real life.

To demonstrate transparency, AI studio Revel.ai has released the world’s first ‘verified’ deepfake video.

It appears to show professional AI adviser Nina Schick warning that the lines between reality and fiction are blurring.

Of course, it is not actually her. The video has been cryptographically signed by digital certification firm Truepic, declaring that it contains AI-generated content.


“Some say the truth reflects our reality,” the avatar says slowly and clearly. “We are used to perceiving it with our own senses.

What is a deepfake?

The technology behind deepfakes was developed in 2014 by Ian Goodfellow, director of machine learning in Apple’s Special Projects Group and a leader in the field.

The term is a combination of “deep learning” and “fake”, and refers to a form of artificial intelligence.

The system studies a subject through photographs and videos, capturing them from multiple angles so that their behavior and speech patterns can be mimicked.

The technology gained prominence during election seasons, amid fears that it could be used to damage the reputations of political candidates.

“But what if our reality is changing? What if we can no longer rely on our senses to judge the authenticity of what we see?

“We are in the early days of artificial intelligence, and the lines between reality and fiction are already blurring.

“A world where shadows are real. Seeing things for what they are sometimes requires radical shifts in perspective.”

The video ends with the message “This deepfake was created by Revel.ai with Nina Schick’s consent and is cryptographically signed by Truepic.”

A deepfake is a type of AI that uses “deep learning” to manipulate audio, images, or video, creating hyper-realistic but fake media content.

The term was coined in 2017 when a Reddit user posted a manipulated pornographic video on a forum.

The videos superimposed the faces of celebrities such as Gal Gadot, Taylor Swift, and Scarlett Johansson onto the bodies of porn performers without their consent.

Another notorious example of a deepfake, or “cheapfake”, was a crude impersonation of Volodymyr Zelensky that went viral on Russian social media last year, in which the Ukrainian president appeared to surrender to Russia.

The clip showed the Ukrainian president speaking from behind a podium, calling on his military to lay down their weapons and acquiesce to Putin’s invading army.

Savvy internet users were quick to flag the discrepancies: a mismatch between the color of Zelensky’s neck and face, an odd accent, and pixelation around his head.

Despite the entertainment value of deepfakes, some experts warn of the dangers they pose.

Concerns have been raised in the past about how the technology has been used to generate child sexual abuse material, revenge porn, and political hoaxes.

In November, the UK government’s Online Safety Bill was amended to make it illegal to use deepfake technology to create pornographic images or videos of people without their consent.


Dr. Tim Stevens, director of the Cyber Security Research Group at King’s College London, said deepfake AI could undermine democratic institutions and national security.

He argued that the widespread availability of these tools could be misused by states such as Russia to “troll” target populations in pursuit of foreign policy objectives and to “undermine” the national security of target countries.

Earlier this month, an AI reporter was developed for a Chinese state-run newspaper.

The avatar could only answer preset questions, and the responses she gave heavily promoted the lines of the Central Committee of the Chinese Communist Party (CCP).

“AI and deepfakes have the potential to impact national security,” Stevens said.

“It’s not about high-level national defense or interstate warfare, it’s about the general undermining of trust in democratic institutions and the media.

“They can be exploited by authoritarian regimes like Russia to reduce the level of trust in those institutions and organizations.”

With the rise of freely available text-to-image and text-to-video AI tools such as DALL-E and Meta’s Make-A-Video, manipulated media is becoming more and more prevalent.

In fact, it has been predicted that 90 per cent of online content will be generated or created using AI by 2025.

For example, late last month, a deepfake photo of Pope Francis wearing a giant white puffer jacket went viral, leading thousands to believe it was real.

Social media users also circulated apparently AI-generated images of a cat with reptilian black and yellow spots on its body, which was falsely declared to be a newly discovered species.


Schick’s new video carries a tamper-proof signature, is declared to be AI-generated, identifies its creator, and is timestamped with when it was created.

She told MailOnline:

“It’s not about telling people what is true or false; it’s about showing how a piece of content was made – whether it was made by AI or not – so you can make your own choices.

“By releasing this, we want to show people – who may be completely overwhelmed, concerned or even scared by the pace of change and the acceleration of AI-generated content – that there are solutions to mitigate some of the risks to information integrity.

“Our hope is that by signing content, people will know that open standards for content authenticity exist, and that platforms and generative AI companies that have not yet adopted them will be encouraged to do so.

“I think AI will become a core part of the production process for almost all digital information. Without a way to authenticate that information – whether or not it was generated by AI – it will be very difficult to navigate the digital information ecosystem.

“Consumers are often unaware that they are entitled to understand where the information they digest comes from. Hopefully this campaign shows that such transparency is possible – and that it is something consumers should demand.”


The signing technology follows a new standard developed by the Coalition for Content Provenance and Authenticity (C2PA).

C2PA is an industry body whose members include Adobe, Microsoft, and the BBC, and which works to tackle the misleading content prevalent online.

Schick, Truepic and Revel.ai say their video shows that digital signatures can increase transparency regarding AI-generated content.

They hope it will help eliminate confusion about where a video comes from and make the internet a safer place.
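The core idea behind such a provenance seal can be sketched in miniature. The code below is a hypothetical illustration, not Truepic’s or C2PA’s actual implementation: real C2PA manifests are signed with X.509 certificates and asymmetric cryptography, whereas this stand-in uses a keyed HMAC with a placeholder key. What it does show is the essential mechanism – a manifest that binds a content hash, an AI-generated flag, a creator, and a timestamp to the file, so that any edit to the video or its claims breaks verification.

```python
import hashlib
import hmac
import json

# Placeholder key for illustration only; real systems use certificate-backed
# asymmetric keys, not a shared secret.
SIGNING_KEY = b"demo-secret-key"

def sign_content(video_bytes: bytes, creator: str, timestamp: str) -> dict:
    """Build a provenance manifest and bind it to the content with a signature."""
    manifest = {
        "content_hash": hashlib.sha256(video_bytes).hexdigest(),
        "ai_generated": True,
        "creator": creator,
        "timestamp": timestamp,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_content(video_bytes: bytes, manifest: dict) -> bool:
    """Recompute the hash and signature; tampering with either fails the check."""
    claims = {k: v for k, v in manifest.items() if k != "signature"}
    if hashlib.sha256(video_bytes).hexdigest() != claims["content_hash"]:
        return False  # the video bytes were altered after signing
    payload = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

video = b"...stand-in for the video file bytes..."
manifest = sign_content(video, "Revel.ai", "2023-04-01T00:00:00Z")
assert verify_content(video, manifest)             # untouched content verifies
assert not verify_content(video + b"x", manifest)  # any edit breaks the seal
```

Because the signature covers the manifest rather than just the raw bytes, changing the declared creator or timestamp is detected just as readily as re-encoding the video itself.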

Bob de Jong, Creative Director of Revel.ai, said:

“The power of AI and its speed of development is unlike anything the world has ever seen.

“All of us, including content creators, need to be able to design a world of content creation that is ethical, certified and transparent, so that society can accept and enjoy it, and so that AI can continue to be used without causing harm.”

How to spot a deepfake

1. Unnatural eye movements. Eye movements that do not look natural – or an absence of eye movement, such as no blinking – are major red flags. It is difficult to replicate blinking in a way that looks natural, and difficult to replicate a real person’s eye movements, because people’s eyes usually follow the person they are talking to.

2. Unnatural facial expressions. If something about a face looks wrong, it could indicate facial morphing – where one image has been stitched over another.

3. Awkward positioning of facial features. If someone’s face is pointing one way and their nose another, be skeptical about the video’s authenticity.

4. Lack of emotion. You can also spot facial morphing or image stitching when a face shows none of the emotion that should go along with what the person is supposedly saying.

5. Awkward body or posture. Another sign is a body that looks unnatural, or a head and body positioned awkwardly or inconsistently. This can be one of the easier discrepancies to spot, as deepfake technology usually focuses on facial features rather than the whole body.

6. Unnatural body movement. If someone looks distorted when they turn or move their head, if their movements are jerky, or if they fall apart from one frame to the next, you should suspect the video is fake.

7. Unnatural coloring. Abnormal skin tones, discoloration, odd lighting, and misplaced shadows are all signs that what you are seeing is likely fake.

8. Hair that doesn’t look real. You won’t see frizzy or flyaway hair, because fake images cannot generate these individual characteristics.

9. Teeth that don’t look real. Algorithms may not be able to generate individual teeth, so an absence of individual tooth outlines can be a clue.

10. Blurring or misalignment. If the edges of the image are blurry, or visuals are misaligned – for example, where someone’s face and neck meet their body – you know something is amiss.

11. Inconsistent noise or audio. Deepfake creators typically spend more time on the video images than on the audio. The result can be poor lip-syncing, robotic-sounding voices, strange pronunciation, digital background noise, or even an absence of audio.

12. Images that look unnatural when slowed down. If you watch a video on a screen larger than a smartphone’s, or use editing software that can slow down playback, you can zoom in and examine the images more closely. Zooming in on the lips, for example, will help you see whether the person is really talking or has been badly lip-synced.

13. Hash mismatch. Cryptographic algorithms can help video creators prove that their videos are authentic by embedding hashes at specific points throughout a video. If the hashes have changed, you should suspect the video has been manipulated.

14. Digital fingerprints. Blockchain technology can also create a digital fingerprint for a video. While not available to everyone, this blockchain-based verification helps establish a video’s authenticity: when a video is created, its content is registered to an immutable ledger, which later helps prove that it has not been altered.

15. Reverse image search. Searching for the original image, or running a reverse image search on a computer, can unearth similar videos online and help determine whether an image, sound, or video has been altered in any way. Reverse video search technology isn’t publicly available yet, but investing in such a tool could help.
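The “immutable ledger” idea behind the digital-fingerprint check above can be sketched as a toy hash chain. This is an illustrative assumption, not any real product’s design: each ledger entry’s hash covers the previous entry’s hash, so rewriting any past record invalidates every entry after it – which is what makes tampering detectable.

```python
import hashlib

def entry_hash(prev_hash: str, content_hash: str) -> str:
    """Chain each entry to its predecessor by hashing both together."""
    return hashlib.sha256((prev_hash + content_hash).encode()).hexdigest()

# Register three hypothetical video clips in order.
ledger = []
prev = "0" * 64  # genesis value before any entries exist
for video in [b"clip-1", b"clip-2", b"clip-3"]:
    content = hashlib.sha256(video).hexdigest()
    h = entry_hash(prev, content)
    ledger.append({"content_hash": content, "prev": prev, "hash": h})
    prev = h

def ledger_valid(ledger) -> bool:
    """Re-walk the chain; any rewritten entry breaks the links that follow."""
    prev = "0" * 64
    for entry in ledger:
        if entry["prev"] != prev:
            return False
        if entry_hash(prev, entry["content_hash"]) != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

assert ledger_valid(ledger)
ledger[1]["content_hash"] = hashlib.sha256(b"tampered").hexdigest()
assert not ledger_valid(ledger)  # altering one record breaks the chain
```

Real blockchain-based provenance systems add distributed consensus and signatures on top of this chaining, but the tamper-evidence comes from the same linked-hash structure.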


