As AI image generators become more sophisticated, spotting deepfakes is more difficult than ever. Law enforcement and world leaders continue to sound the alarm about the dangers deepfakes create, whether on social media or in conflict zones.
“We’re entering an era where you can no longer believe what you’re seeing,” Marko Jak, co-founder and CEO of Secta Labs, told Decrypt in an interview. “At the moment, it’s easier because deepfakes aren’t that good yet, and sometimes you can spot the obvious ones.”
According to Jak, we are probably no more than a year away from the point where forged images can no longer be identified at a glance. And he should know: Jak is the CEO of an AI image-generation company.
Jak co-founded Secta Labs in 2022. The Austin-based generative AI startup focuses on creating high-quality AI-generated images: users can upload photos of themselves and have them transformed into AI-generated headshots and avatars.
Jak explained that Secta Labs sees its users as the owners of the AI models generated from their data, with the company acting merely as a custodian that helps create images from those models.
World leaders are calling for immediate action as increasingly sophisticated AI models become available to exploit. Concern over the potential impact has led some companies to decide not to open their most advanced tools to the public.
After announcing Voicebox, its AI-generated voice platform, last week, Meta said it would not release the model to the public.
“We are open with the AI community and believe it is important to share our research to advance the state of the art in AI,” a Meta spokesperson told Decrypt in an email. “It also requires striking the right balance between openness and responsibility.”
Earlier this month, the U.S. Federal Bureau of Investigation warned of AI deepfake extortion scams and of criminals using photos and videos taken from social media to create fake content.
Jak said the answer to fighting deepfakes may lie not in whether they can be spotted, but in whether they can be exposed.
“AI is the first way you can spot [a deepfake],” Jak said. “There are people developing artificial intelligence that can examine an image, or a video, so the AI can tell whether it was generated by an AI.”
Generative AI and the potential use of AI-generated imagery in film and television are hot topics in the entertainment industry. Before entering contract negotiations, SAG-AFTRA members voted to authorize a strike, with artificial intelligence a serious concern.
Jack added that as technology advances further, there will be an AI arms race, and the challenge will be for bad guys to create more sophisticated deepfakes to counter the technology designed to detect them. rice field.
While acknowledging that blockchain has been overused (some might even say overhyped) as a solution to real-world problems, Jak said the technology and cryptography could help solve the deepfake problem.
But Jak said that while many deepfake problems can be solved with technology, a more low-tech solution — the wisdom of the crowd — may be the key.
“One of the things Twitter has done is Community Notes, where people can add notes to give context to someone’s tweet,” Jak said. “A tweet can be misinformation, just like a deepfake can.” Jak added that social media companies would benefit from thinking about ways to leverage their communities to verify the authenticity of the content they distribute.
“Blockchain can address certain issues, and cryptography can help authenticate the origin of an image,” he said. “Regardless of how sophisticated deepfakes become, this could be a practical solution because it deals with source verification rather than image content.”
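The source-verification idea Jak describes can be sketched in a few lines. The snippet below is a hypothetical illustration, not Secta Labs' implementation: a publisher derives a tag from the SHA-256 digest of an image's raw bytes at publication time, and anyone holding the key can later confirm the file is unchanged. Real provenance standards (such as C2PA) use public-key signatures and embedded metadata; an HMAC with a placeholder key is used here only to keep the example self-contained with Python's standard library.

```python
import hashlib
import hmac

# Placeholder key for illustration; a real scheme would use a
# public/private key pair so verifiers never hold a secret.
SIGNING_KEY = b"publisher-secret-key"

def sign_image(image_bytes: bytes) -> str:
    """Derive a provenance tag from the image's SHA-256 digest."""
    digest = hashlib.sha256(image_bytes).digest()
    return hmac.new(SIGNING_KEY, digest, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, tag: str) -> bool:
    """Re-derive the tag; any change to the bytes changes the digest."""
    return hmac.compare_digest(sign_image(image_bytes), tag)

original = b"\x89PNG...raw image bytes..."  # stand-in for a real file
tag = sign_image(original)

assert verify_image(original, tag)             # untouched image verifies
assert not verify_image(original + b"x", tag)  # any edit fails verification
```

The key point, matching Jak's argument, is that verification here says nothing about what the image depicts — only whether it is the exact file the publisher signed, which is why the approach survives arbitrarily convincing fakes.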
