Extremists across the United States are weaponizing artificial intelligence tools to spread hate speech, recruit new members, and radicalize online supporters with unprecedented speed and scale, according to a new report from the Middle East Media Research Institute (MEMRI), a U.S. nonprofit media-monitoring organization.
According to the report, AI-generated content has become a mainstay of extremist output: extremists are building their own hate-infused versions of AI models and are already experimenting with novel uses of the technology, such as generating blueprints for 3D-printed weapons and bomb-making instructions.
Researchers at the Domestic Terrorism Threat Monitor, a group within the Institute that specifically tracks U.S.-based extremism, have detailed the scale and scope of AI use by domestic actors, including neo-Nazis, white supremacists and anti-government extremists.
“Initially there was some hesitation around this technology, and there was a lot of debate and discussion among [extremists] online about whether it could be used for their purposes,” Simon Purdue, director of MEMRI's Domestic Terrorism Threat Monitor, said at a press briefing earlier this week. “Over the last few years, AI content has gone from being occasional to making up a significant portion of hateful propaganda online, particularly when it comes to video and visual propaganda. So as this technology develops, we're going to see extremists use it more.”
As the US election approaches, Purdue and his team are tracking several worrying trends in extremist use of AI, including the widespread adoption of AI video tools.
“The biggest trend we've noticed [in 2024] is the rise of video,” Purdue says. “Last year, AI-generated video content was very basic. This year, with the release of OpenAI's Sora and other video generation and manipulation platforms, we've seen extremists use these tools to create video content. There's been a lot of excitement around it as well, with many saying they might use it to make a feature-length film.”
Extremists have already used the technology to create videos of President Joe Biden delivering a speech laced with racist slurs, and of actress Emma Watson reading Mein Kampf while wearing a Nazi uniform.
Last year, WIRED reported that extremists tied to Hamas and Hezbollah were using generative AI tools to undermine the hash-sharing databases that let Big Tech platforms remove terrorist content quickly and in a coordinated way, a problem that still has no clear solution.
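To see why regenerated content undermines hash sharing, consider a minimal sketch of exact-hash matching. This is an illustration only, not the actual matching logic used by any platform or database (many also rely on perceptual hashes that tolerate small edits); the payloads and database here are hypothetical.

```python
import hashlib

# Toy illustration: a shared database stores digests of known terrorist content.
# An exact re-upload matches, but any regenerated or AI-altered variant produces
# an unrelated digest and slips past the lookup entirely.

original = b"known-propaganda-video-bytes"
variant = b"known-propaganda-video-bytes-regenerated"  # hypothetical AI-made variant

# Hypothetical shared database of known-content hashes
known_hashes = {hashlib.sha256(original).hexdigest()}

def is_known(content: bytes) -> bool:
    """Return True if the payload's digest appears in the shared hash database."""
    return hashlib.sha256(content).hexdigest() in known_hashes

print(is_known(original))  # True  -- exact copies are caught
print(is_known(variant))   # False -- the regenerated variant is not matched
```

The same limitation holds, in a softer form, for perceptual hashing: it can absorb minor edits, but wholly new AI-generated material has no counterpart in the database to match against.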
Adam Hadley, executive director of Tech Against Terrorism, said he and his colleagues have already archived tens of thousands of AI-generated images created by far-right extremists.
“The technology is being used in two main ways,” Hadley told WIRED. “First, generative AI is being used to create and manage bots that operate fake accounts. Second, just as generative AI is revolutionizing productivity, it is also being used to generate text, images, and videos through open source tools. These two uses present a significant risk for the creation and spread of terrorist and violent content at scale.”