A new frontier in information warfare

AI Video & Visuals


While AI technology is new, information warfare is as old as conflict itself. For thousands of years, humans have used propaganda, deception, and psychological operations to influence the decisions and morale of their adversaries. In the 13th century, for example, the Mongols destroyed entire cities so that word of their brutality would spread, breaking the morale of the next city and pressuring it to surrender before Mongol troops even arrived.

With the advancement of technology, new frontiers of information warfare have opened up. From World War II to the 1991 Gulf War, planes dropped leaflets to spread rumors and propaganda. During the Vietnam War, an English-language radio program hosted by Hanoi Hannah (real name Trịnh Thị Ngọ) mocked the U.S. military by broadcasting troop locations and casualty lists in order to lower morale. Radio propaganda also demonstrated its devastating effectiveness when it was used to incite the Rwandan genocide in 1994.

Then came cable television. The 1991 Gulf War was the first major conflict to be broadcast on a 24-hour news cycle rather than on the evening news. Instead of daily updates from journals and newspapers, people at home received a continuous stream of information and images, one inevitably filtered through national interests. This change in technology shaped the public’s perception of the war, leading historians to refer to it as the “CNN War.”

What we are witnessing today is the next step in this evolution, from print, radio and television to social media. If the first Gulf War was the CNN war, the conflict between the United States, Israel, and Iran in 2025 and 2026 can be thought of as the first TikTok war and the first large-scale AI war.

AI has ushered in new forms of information warfare that target perception, the information environment, and trust itself. AI-generated video in particular has fundamentally changed the way states and non-state actors conduct information warfare, manipulate populations, and compete not only in the Gulf region but on the global stage.

This “synthetic media” is frequently deployed and disseminated to fabricate footage of real-world events, from devastating military attacks that never occurred to fake videos of officials pleading for a cease-fire.

But this technology also makes it easy to create convincing propaganda material that is obviously fictional. The most notable example is Iran’s viral Lego video, which has repeatedly, and very successfully, mocked Israel and the United States during the war.



Read more: Slopeganda war: How (and why) the US and Iran are flooding the region with AI-generated viral noise


Digital weapons

To fully understand the disruptive potential of AI video, we can look back to the futuristic musings of dystopian science fiction novels. Science fiction author William Gibson popularized the term “cyberspace” in his 1984 novel Neuromancer, describing it as a “consensual hallucination,” that is, not reality, but “a graphic representation of data abstracted from the banks of every computer in the human system.”

However, when digital tools such as AI video and social media are used as weapons, the barrier between cyberspace and physical reality becomes permeable. They no longer create virtual reality but what French theorist Jean Baudrillard called “hyperreality”: a state in which the distinction between reality and a simulation of reality collapses, and the simulation feels “more real than the real.”

Baudrillard’s work is underpinned by the concept of the “simulacrum,” a copy or representation of something that actually exists. He classified simulacra into three orders. The first order covers pre-industrial counterfeits, faithful copies or replicas of the real thing; the second covers mechanically mass-produced items.

A third-order simulacrum is a simulation, a symbol with no physical referent at all. Consider the Lego video from Iran. It depicts President Trump, Prime Minister Benjamin Netanyahu, and others using the Iran war as an excuse to distract from the Epstein files while worshiping the pagan Canaanite god Baal. These images have nothing to do with the intentions of the Danish company that makes the ubiquitous plastic brick toys, but they have nonetheless garnered significant attention in the West and around the world as viral meme propaganda.

AI is the message

Media theorist Marshall McLuhan’s oft-quoted phrase, “The medium is the message,” asserts that whatever content a medium conveys, the medium itself, whether newspaper, radio, or television, tells us something.

The content of AI videos from Iran, the United States, and Israel is, of course, quite different, with each side seeking to undermine the others’ narratives. But the medium of AI video shared on social media also sends a message. These videos cross enemy borders in ways that were not possible with previous media.

Unlike leaflets, radio broadcasts, and television networks, the production and consumption of AI video is not geographically limited. Anyone, anywhere, can create and view it, whether in Tehran, Tel Aviv, Washington, or elsewhere in the world. The result is a new era of borderless, decentralized, and viral digital public diplomacy.



Read more: Iranian AI memes are reaching even people who don’t follow the news – and winning the propaganda war


Deepfakes, propaganda, and the “collapse of truth”

Unlike Iran’s Lego video, AI deepfakes are realistic but completely fabricated content, making it difficult for viewers to distinguish between truth and falsehood. While early iterations were crude and easily identified, modern deepfakes have reached a level of photorealism and audio authenticity that can fool even experienced observers and automated detection systems.

During the so-called “12 Day War” between Israel and Iran in 2025, AI deepfakes and video game footage were circulated to simulate real battle scenes. The fake footage included a destroyed Israeli plane, collapsing buildings in Tel Aviv and at its airport, and an Israeli strike on Tehran that left a crater at an intersection and sent cars flying.

But credibility isn’t always a top priority. One widely shared image of a downed Israeli F-35 fighter jet was taken from a flight simulator game. The plane was clearly too large compared to the bystanders on the ground, but that didn’t stop the image from spreading (it racked up 23 million views on TikTok) and being amplified by Russian-sympathetic networks seeking to demonstrate the vulnerability of the American aircraft.

The three most-watched deepfake videos during the 2025 war combined for 100 million views across social media. A deepfake video circulating on Facebook also showed an Israeli official claiming that “we can’t fight Iran any longer” and begging the US to force a ceasefire.

The content spread on TikTok, Telegram, and X, yet the AI chatbot Grok was unable to identify the fake video, which repurposed footage from other conflicts.

Legal scholars have coined the terms “liar’s dividend” and “truth decay” to characterize this erosion of reality. They describe a media landscape where AI fakes undermine trust to the point that even legitimate evidence is called into question and any image or video can be dismissed as a deepfake.

The most recent 2025-2026 war shows a parallel arms race unfolding online as countries compete to develop drones, missiles, and defense systems. The digital revolution and advances in AI have dramatically increased the speed, scale, and sophistication of information operations. This conflict heralds a new era of information warfare, where AI technologies are weaponized to influence, disrupt, and destabilize adversaries.





