AI technology is being used to spread misinformation and disinformation in the conflict between Israel, Iran and the US.
Most recently, after US military forces struck three Iranian nuclear sites on Saturday, images appeared on social media purporting to show the wreckage of US B-2 bombers on Iranian territory. Social media users claimed that a US jet had been shot down by Iranian forces and crashed inside Iran. The images turned out to be AI-generated.
The fake bomber images are one of many instances in which photos shared and promoted on social media have turned out to be AI-generated.
Another case occurred almost two weeks ago, after Iran launched ballistic missiles at Israeli cities in response to the Israeli strike on June 13. Following Iran's response, AI-generated clips circulated that claimed to show destruction in Tel Aviv, Israel. It was later reported that one clip had been created with Google AI tools and posted before the Iranian missile attack. The spread of AI-generated misinformation and disinformation is not new. AI-generated deepfakes were a key concern during the 2024 US election and the one before it, and easy access to AI tools and technology has only grown since.
“This problem isn't going away,” said Chirag Shah, professor of information and computer science at the University of Washington. “The problem is here to stay.”
Difficult to detect
Shah added that part of the problem is that it is becoming harder to detect whether something is real, and that detection tools and techniques are starting to fail.
“It's becoming increasingly difficult to even identify things as fake later,” he said.
Part of what makes detection so difficult is that generative AI image tools have grown more sophisticated.
“Part of this technique is to trick the detector,” Shah said.
Those who spread fake images are also using more sophisticated methods and targeting audiences that are already biased.
However, Emmanuelle Saliba, chief research officer at GetReal, a company specializing in detecting and mitigating threats from malicious generative AI content, said part of the issue is that such detectors themselves rely on AI technology to spot AI-generated content. GetReal offers technology to validate and authenticate digital content files.
“If you're using only AI, this is an arms race,” Saliba said. She added that GetReal also applies forensic analysis to determine what is AI-generated and what is not.
Recently, GetReal worked to verify a six-second video purporting to show an Israeli strike on Iran's Evin prison.
According to a LinkedIn post by GetReal co-founder Hany Farid, the video appears to be AI-generated. It adds to a growing and troubling trend of fake content circulating online as major world events unfold, Farid wrote.
That even experts struggle to determine whether such videos are AI-generated illustrates the complexity of detection.
“It's complicated, and it changes every day,” Saliba said. “The solution has to be a mix of what these tools can do and awareness of the technology.”
Another issue with AI-generated deepfake images and videos is that the narrative attached to them persists even after forensic analysis has been run, said Joshua McKenty, founder and CEO of a cybersecurity company that works to stop AI-driven fraud.
“Whatever narrative comes with that video is already out there,” McKenty said. “If it's posted by Israel and it says, 'This is what we saw in Tel Aviv last night,' then that's what the media says happened last night, and it's the same in Tehran.”
He added that both Israel and Iran have a history of using deepfakes and of amplifying messages with bots. Israel ranks among the top countries for AI technology and cyber capabilities, while Iran aims to become a top-10 AI country by 2032.
“We know these are also used to some extent as diplomatic tools, in the sense that they build an illusion of consensus, an illusion of rebellion, anything mass-scale that people want to make appear to be happening,” McKenty added.
Greater accessibility requires greater awareness
The spread of AI-generated content in the Iran-Israel conflict is no different from past AI-generated deceptions, but the tools have become far more accessible, and many of them are free.
“Everyone can access these tools and make hyperrealistic fakes of anything you can imagine. It's created a flood of these images,” Saliba said. The latest releases of AI tools and technology have also made these images more refined.
In the case of Iran and Israel, Google's Veo 3 model was released about a week before the conflict began.
Google did not immediately respond to requests for comment.
“A large portion of the content we've seen was created using Google's Veo 3,” Saliba said.
The possible use of Veo 3 says less about the technology itself and more about how publicly available it has become, McKenty said.
“We already had good video and good audio, but there was no single tool to sync the audio and the video,” he said. In that sense, everyone in the deepfake industry could already do what Veo 3 does. But a new tool like this increases accessibility.
Unfortunately for deepfake makers, Veo 3's watermarks made it easy to detect that the content was AI-generated. Without such watermarks, detection is not so easy.
But Nemertes CEO Johnna Till Johnson said the spread of AI misinformation and disinformation can be fought if content consumers view material with a critical eye.
“There's no such thing as a truth meter,” Johnson said. “The real solution is to teach people skepticism and the actual ability to source information… Just because someone said it on Facebook doesn't mean it's true.”
Esther Shittu is an Informa TechTarget news writer and podcast host covering artificial intelligence software and systems.