An AI-generated image of fighter jets shot down in Iran, posted by a parody account on X. Users repeatedly asked the platform's AI chatbot, Grok, whether the images were real.
@hehe_samir/annotation by npr
In the first days after Israel's surprise airstrikes on Iran, a video began circulating on X. The newscast, narrated in Azeri, shows purported drone footage of a bombed-out airport. The video has racked up some 7 million views on X.
Hundreds of users tagged X's built-in AI chatbot, Grok, to ask: Is this real?
It is not: the video was created with generative AI. But Grok's answers varied wildly, sometimes from one minute to the next. "The video likely shows real damage," said one response. "Likely not authentic," said another.
In a new report, researchers at the Digital Forensic Research Lab collected more than 300 of Grok's responses to posts about the video.
"What we're seeing is AI mediating the experience of warfare," said Emerson Brooking, director of strategy at the DFRLab, part of the nonpartisan Atlantic Council think tank. He co-wrote a book about how social media shapes perceptions of war.
"There's a difference between experiencing conflict on a social media platform alone and experiencing it with an endlessly patient conversational companion," he said. "This marks another milestone in how the public processes and understands armed conflict and war. And we're just at the beginning of it."
AI-generated images and videos have grown rapidly more realistic, which disinformation researchers say makes it easier for motivated actors to spread false claims and harder for anyone to make sense of conflicts based on what they see online. Brooking has watched this escalate since Hamas' attack on Israel on Oct. 7, 2023.

"Much of the early AI-generated material was in support of Israel's public diplomacy efforts to justify escalating strikes against Gaza," Brooking said. "But over the past year, starting with the first exchanges of fire between Iran and Israel, Iran also began saturating the space with AI-generated conflict material."
Among the AI-generated images and videos now spreading are scenes of destroyed buildings and downed aircraft. Some are obviously AI-made; others carry subtler telltale signs.

"This is the worst I have seen the information environment in the past two years," said Isabelle Frances-Wright, director of technology and society at the Institute for Strategic Dialogue, a nonprofit think tank. "I can only imagine what it's like [for] the average social media user coming across these feeds."
The AI bot has entered the chat
Social media companies and AI chatbot makers don't share data on how often people use chatbots to find information about current events, but a Reuters Institute report released in June found that about 7% of people surveyed across dozens of countries use AI to get news. X, OpenAI, Google and Anthropic did not respond to requests for comment.
Since March, X users have been able to pose questions to Grok by tagging it in replies. The DFRLab report analyzed posts from more than 100,000 users who tagged Grok with questions about the Israel-Iran war in its first three days.
The report found that when asked to fact-check something, Grok referenced Community Notes, X's crowdsourced fact-checking effort. That made the chatbot's answers more consistent, but they still contradicted one another.
Smoke rises from a targeted location in Tehran on June 15, amid the third day of Israel's waves of strikes against Iran. This image is real, but the surge of AI-generated imagery has allowed state-backed influence campaigns to flourish.
Zara/AFP via Getty Images
NPR sent similar queries to other chatbots about the authenticity of photos and videos that appear to depict the Israel-Iran war. OpenAI's ChatGPT and Google's Gemini correctly replied that one image was not from the current conflict but from other military operations. Anthropic's Claude said it could not verify the content either way.
Even asking chatbots questions more sophisticated than "Is this real?" carries its own pitfalls, says Mike Caulfield, who researches digital literacy and disinformation. "[People] will take a photo and say, 'Analyze this like you're a defense analyst.'" Chatbots can respond in rather impressive ways and can be a useful tool for experts, he said, but "it doesn't always help beginners."
AI and the "liar's dividend"
"I don't know why I have to keep telling people this, but you don't get reliable information from social media or AI bots," said Hany Farid, a media forensics specialist at the University of California, Berkeley.
Farid, who pioneered techniques for detecting digitally manipulated media, cautioned against using chatbots to check the authenticity of images and videos. "If you don't know when it's good and when it's not good, and how to counterbalance it with more classical forensic techniques, you're just asking to be lied to."
He does use some of these chatbots in his work. "They're actually pretty good at object recognition and pattern recognition," Farid said. The chatbots can analyze a building's architectural style or the types of cars common in a given location, he said.
The rise of people turning to AI chatbots as a news source coincides with AI-generated videos becoming more realistic. Together, these technologies present a growing list of concerns for researchers.
"A year ago, we mostly saw images, and people were getting a little fatigued by them, or wise to them. But now, full-on video with sound effects? It's a completely different ball game," he said.

The new technology is impressive, Farid said, but he and other researchers have long warned about AI's potential to amplify what's known as the "liar's dividend." That's when someone seeking to dodge accountability claims that incriminating or compromising visual evidence against them was fabricated, and other people are more inclined to believe the denial.

Another of Farid's concerns is AI's ability to dramatically muddy the waters around current events. He points to an example from the recent protests against President Trump's immigration crackdown. California Gov. Gavin Newsom shared an image of activated National Guard members sleeping on the floor in Los Angeles. Newsom's post criticized Trump's leadership: "You sent your troops here without fuel, food, water or a place to sleep." Internet users soon began questioning the photo's authenticity, Farid said, with some claiming it was AI-generated. Others submitted it to ChatGPT and were told the image was fake.
"And then, all of a sudden, the internet went crazy: 'Gov. Newsom caught sharing a fake image,'" said Farid, who was able to authenticate the photo. "So not only are people getting unreliable information from ChatGPT, they're taking images that don't fit the story they want to tell, putting them into ChatGPT, and ChatGPT says, 'Ah, it's fake.' And now we're off to the races."
Farid frequently warns that these added layers of uncertainty can unfold in dangerous ways. "When a real video comes out of a human rights violation, or a bombing, or somebody saying something inappropriate, who's going to believe it anymore?" he said. "If I say, 'One plus one is two,' and you say, 'No, it's not, it's applesauce,' because that's the tenor of conversations these days, then I don't know where we are."
How AI accelerates influence campaigns
While generative AI can conjure convincing new realities, Brooking of the DFRLab said that in conflicts, one of its more pervasive uses is the quick and easy creation of political cartoons and other obvious propaganda messages.

People don't need to believe visual content is authentic to enjoy sharing it, Brooking said. Humor, for example, drives a lot of user engagement. He sees AI-generated content following a pattern researchers previously observed when political satire from The Onion, the satirical newspaper, went viral: internet users often share such material for the joke or the political point rather than as fact, Brooking said.
Generative AI's creative abilities are ripe for use in all kinds of propaganda, according to Darren Linvill, a professor at Clemson University who studies how states such as China, Iran and Russia use digital tools for propaganda.
"There's a very famous campaign where the Russians planted stories in Indian newspapers back in the '80s," Linvill said. The KGB sought to spread the false narrative that the Pentagon was responsible for creating the AIDS virus. "[The KGB] planted the story in a newspaper they had founded, and then they used that initial story to layer on more stories over time through purposeful laundering campaigns in other outlets. But it took years for that story to get out."
As technology improves, influence campaigns have picked up speed. "They still go through that same process today," Linvill said.
Linvill's research found that a Russian propaganda site more than doubled its output after it began using ChatGPT to help write articles, while remaining just as persuasive.
Linvill said AI can support foreign influence campaigns in many ways, but the most effective way for these messages to actually land is through well-known people or influencers, whom state actors sometimes pay.
"They spread a lot of bread on the water, and some of it gets picked up and becomes a prominent part of the conversation," he said.
Whether people are seeking information in uncertain moments or encountering propaganda, Linvill and other researchers say the most potent ideas AI can help spread are those that confirm what people already want to believe.
