Elon Musk’s X plans to ban users from making money on the platform if they repeatedly post unlabeled AI-generated war videos after social media feeds were flooded with fake battle scenes from the Iran conflict.
The social media platform, which has around 500 million monthly active users, will suspend a user's revenue for 90 days if they post AI-generated videos of armed conflict without disclosing that the footage was created by AI. It announced on Tuesday night that a second violation would result in a permanent ban, after the first days of the Iran conflict were marked by a torrent of fake online footage.
Pages on Meta's Instagram and Facebook have posted dozens of fake battle scenes, including one in which an Iranian rocket pursues and shoots down a U.S. military plane (watched 70 million times, according to BBC Verify), and another in which AI was used to replace the smoke from a real missile strike site with a fake fireball several times the size.
Users who amass a substantial following, close to 100,000, can earn hundreds of dollars a month through X's advertising revenue-sharing model, which encourages them to create shocking viral posts.
“During times of war, it’s important that people have access to authentic information on the ground. With today’s AI technology, it’s easy to create content that misleads people,” said Nikita Bier, head of product at X.
Other fake videos about the war have also had a major impact. A video circulating on Instagram purporting to show a huge fire after “Iran destroys a US air base in Riyadh” turned out to be 18-month-old footage of the aftermath of an Israeli attack on a refinery in Hodeidah, Yemen.
Full Fact, a British fact-checking organization, said: “Increasingly, AI is fueling the spread of misinformation on social media.”
“Over the past few days, we have seen numerous examples of AI images being shared on various social media platforms as if they were real, including fake photos of fires on aircraft carriers and at the Burj Khalifa, as well as images purporting to show Ayatollah Khamenei’s body,” said Steve Nowotny, editor of Full Fact.
“Even when AI images appear to be of low quality or have visible watermarks, we often see them being shared at scale. And the sheer volume of this fake content and the ease with which it can be generated and spread is a major concern.”
Meta has been contacted for comment.
