In both Tehran and Tel Aviv, residents have faced growing anxiety in recent days as the threat of missile strikes looms over their communities. Alongside very real concerns about physical safety, there is growing alarm about the role of misinformation, particularly AI-generated content, in shaping public perception.
Online verification platform GeoConfirmed has reported a surge in AI-generated misinformation, including fabricated videos of air strikes in both Iran and Israel that never occurred.
It follows a wave of similarly manipulated footage that circulated during the recent protests in Los Angeles, which were sparked by immigration raids in the United States' second-most populous city.
The development is part of a broader trend of politically charged events being exploited to spread false or misleading narratives.
The launch of a new AI product by one of the world's largest tech companies has added to concerns that it will become even harder to tell fact from fiction.
Late last month, DeepMind, Google's AI research division, released Veo 3, a tool that can generate eight-second videos from a text prompt. One of the most capable systems publicly available today, it produces highly realistic visuals and sound that the average viewer would struggle to distinguish from real footage.
To test exactly what the tool could do, Al Jazeera created a fake video in minutes using a simple prompt: fast-paced footage appearing to show violent unrest, featuring a protester who claimed to have been paid to attend, a trope long used by right-wing commentators to discredit demonstrations. The final product was barely distinguishable from real footage.
Al Jazeera also produced a video depicting fake missile strikes on both Tehran and Tel Aviv using similar prompts. Google says on its website that Veo 3 blocks "harmful requests and results", but Al Jazeera encountered no obstacles in creating these fake videos.
"I recently created a fully synthetic video of myself speaking at a web summit using just one photo and a few dollars. It fooled my own team, trusted colleagues and security experts," said Ben Colman, CEO of deepfake detection firm Reality Defender.
"If I can do this in minutes, imagine what motivated bad actors are already doing with unlimited time and resources."
"We are not prepared for what comes next," he said. "We are already behind in a race that started the moment Veo 3 launched. Robust solutions exist and work. They are just not the ones the model makers offer."
Google says it takes this issue seriously.
"We are committed to developing AI responsibly, and we have clear policies to protect users from harm and govern the use of our AI tools. Content generated with Google AI carries an invisible SynthID watermark, and we also add a visible watermark to Veo videos," a spokesperson said.
However, experts say the tool was released before these features were fully implemented.
Joshua McKenty, CEO of deepfake detection company Polyguard, said Google rushed its product to market because it was lagging behind competitors such as OpenAI and Microsoft, which had released more user-friendly and better-publicised tools. Google did not respond to these claims.
"Google is trying to win an argument that its AI matters when it's losing dramatically," McKenty said. "They're like the third horse in a two-horse race. They don't care about their customers. They care about their shiny capabilities."
That sentiment was echoed by Sukrit Venkatagiri, an assistant professor of computer science at Swarthmore College.
"Companies are in a strange bind. If you don't develop generative AI, you're seen as falling behind and your stock takes a hit," he said. "But they also have a responsibility to make these products safe once they're deployed in the real world. I don't think anyone cares about that right now. All of these companies are putting profit, or the promise of profit, over safety."
Google's own research, published last year, acknowledged the threat that generative AI poses.
"The explosion of generative AI-based methods has inflamed these concerns [about misinformation], as they can synthesise highly realistic audio and visual content, as well as natural, fluent text, at a scale previously impossible without an enormous amount of manual labour," the study read.
Demis Hassabis, CEO of Google DeepMind, has long warned his AI industry colleagues against putting speed ahead of safety. "I would advocate not moving fast and breaking things," he said in 2023.
He declined Al Jazeera's request for an interview.
But despite such warnings, Google released Veo 3 before its safeguards were fully implemented, leading to incidents such as one in which the National Guard had to debunk a fake "day in the life" TikTok video of a soldier in Los Angeles who said he was preparing for "today's gassing" of protesters.
Imitating real events
The implications of Veo 3 go far beyond protest footage. In the days after its release, several fabricated videos mimicking real news broadcasts circulated on social media, including one bearing CNN graphics that falsely reported a home invasion.
Another clip falsely claimed that JK Rowling's yacht had sunk off the coast of Turkiye after an orca attack, prompting a warning from Alejandra Caraballo of Harvard Law School's Cyberlaw Clinic.
In a post, Caraballo warned that such technology could mislead older news consumers in particular.
"What's concerning is how easy it is to iterate. I made multiple versions within 10 minutes, which makes them harder to detect and easier to spread," she wrote. "The lack of a chyron [the banner on a news broadcast] makes it easy to add one after the fact, so a clip can be made to look like it came from a specific news channel."
In its own experiment, Al Jazeera used prompts to create fake news videos bearing ABC and NBC logos, as well as clips mimicking the voices of CNN anchors Jake Tapper, Erin Burnett, John Berman and Anderson Cooper.
"It's becoming increasingly difficult to tell fact from fiction," Caraballo told Al Jazeera. "As someone who has studied AI systems for years, even I'm starting to struggle."
The challenge is widespread. A survey by Pennsylvania State University found that 48 percent of consumers had been fooled by fake videos shared via messaging apps and social media.
Contrary to popular belief, young adults are more susceptible to misinformation than older adults, largely because younger generations rely on social media, which lacks the editorial standards and legal oversight of traditional news organisations.
A UNESCO survey in December showed that 62 percent of news influencers do not fact-check information before sharing it.
Google is not alone in developing tools that could fuel the spread of synthetic media. Companies such as DeepBrain let users create AI-generated avatar videos, though with limitations: they cannot render full scenes the way Veo 3 can. Other tools, such as Synthesia and Dubverse, focus primarily on video dubbing for translation.
This growing toolkit gives malicious actors more options. Recent incidents include a fabricated news segment made to appear as though a CBS reporter in Dallas was making racist remarks. The software used remains unconfirmed.
CBS News Texas did not respond to requests for comment.
As synthetic media becomes more common, Colman said, it poses a unique risk: bad actors can push manipulated content that spreads faster than it can be debunked.
"When fake content spreads on platforms that don't check these markers [which is most of them], the damage is done through channels that strip them out, or by bad actors who have learned to fake them," Colman said.
