AI-generated videos and fabricated satellite images related to the US-Israel war against Iran have gone viral on social media, amassing hundreds of millions of views and generating revenue for some creators.
BBC Verify analysis tracked hundreds of posts featuring synthetic combat footage. Many clips show dramatic missile attacks, buildings on fire, or massive explosions. None of these depict actual events.
Researchers say the surge reflects how quickly generative AI tools can now produce convincing visuals of war.
Conflict has sparked a surge in synthetic war footage
The current conflict began on February 28, when the United States and Israel launched attacks on Iran. The Iranian government responded with drone and missile attacks targeting Israel, Gulf states, and U.S. military facilities across the region.
As people searched online for quick updates, social media platforms were soon flooded with dramatic but fabricated footage.
One widely shared video appeared to show a missile striking Tel Aviv, with the explosion echoing across the city. BBC Verify tracked more than 300 posts sharing the clip, which together accumulated tens of thousands of reposts across platforms.
Another viral video claimed to show Dubai’s Burj Khalifa skyscraper engulfed in flames as crowds fled through nearby streets. The video drew tens of millions of views, even though no such attack took place.
Fact-checkers say such videos undermine public understanding in a rapidly evolving crisis.
Mahsa Alimardani of the Oxford Internet Institute said misleading visuals undermine trust in legitimate reporting. “Fake videos like this negatively impact people’s trust in verified information they see online and make it extremely difficult to document genuine evidence,” she said.
X moves to block revenue from unlabeled AI war videos
The proliferation of fake videos is forcing social media platforms to take action.
X has announced that it will stop users from earning money through its creator revenue sharing program if they repeatedly post AI-generated videos of armed conflict without clear disclosure.
Under the policy, creators who publish unlabeled synthetic war footage will receive a 90-day suspension from the monetization scheme. A second violation will result in permanent expulsion from the program.
Nikita Bier, head of product at X, said the move was necessary because AI tools can easily generate persuasive but misleading content.
“During wartime, it’s important that people have access to authentic information on the ground,” he said.
The financial incentives behind viral posts are significant: users with large audiences can earn hundreds of dollars a month if their posts generate high engagement.
Researchers say these monetization models can encourage sensational content.
Fake satellite images add new layer of misinformation
BBC Verify also identified fake satellite images that were circulated online during the conflict.
One widely shared image appeared to show extensive damage to the US Navy’s 5th Fleet headquarters in Bahrain. Analysis revealed that this photo was reused from a satellite image released in February 2025.
Although the posts claimed the two photos were taken a year apart, a vehicle visible in the authentic image appears in exactly the same position in the modified version.
Google’s SynthID watermark detection tool showed that the altered images were generated or edited using Google AI systems.
Experts say the rapid expansion of generation tools is changing the way misinformation is spread online.
Generative AI expert Henry Ajder said the number of platforms that can create convincing synthetic media is rapidly increasing.
“I’ve never seen a tool so available, so easy, so cheap to use,” he said.