
Hamas fighters on February 22, 2025. Photo: Majdi Fathi via Reuters Connect
The emergence of large language model (LLM) programs marketed as “artificial intelligence” by companies such as OpenAI, Anthropic, Meta, and xAI has ushered in a “new era of terrorism,” with jihadists increasingly using the technology to expand their propaganda, recruitment, and operations, according to a new study.
The Middle East Media Research Institute (MEMRI) last week released a 117-page report that it described as “the most comprehensive investigation” of the subject. The report argues that the greatest threat from terrorists’ deployment of AI is unpredictable, and that Islamists have realized they can use LLMs to brainstorm new ways of pursuing their violent objectives.
“As supporters of terrorist organizations like ISIS [Islamic State] and al-Qaeda follow developments in AI, increasingly discussing and brainstorming how to utilize the technology in the future, it is difficult to predict the full consequences if a terrorist organization were to adopt this advanced technology,” wrote Gen. Paul Funk II (ret.), former commander of U.S. Army Training and Doctrine Command, in a foreword to the report.
“AI’s greatest benefit for jihadi groups may lie not in enhancing propaganda, outreach, or recruitment efforts, but in its potential ability to expose and find ways to exploit unknown vulnerabilities in the complex security, infrastructure, and other systems essential to modern life, thereby maximizing the destruction and carnage of future attacks,” Funk added.
MEMRI Executive Director Steven Stalinsky is the report’s lead author, and a team of 14 others is credited with jointly compiling three years of research showing how ISIS, al-Qaeda, Hezbollah, the Houthis, Hamas, other internationally designated terrorist organizations, and so-called “lone wolves” inspired by Islamist ideology have experimented with LLM technology. MEMRI found that in addition to developing attack strategies, these groups have sought to “generate audio files of existing documents, create posters, music videos, videos depicting attacks, glorify terrorist leaders for recruitment purposes, etc.”
The report points to different uses of AI technology in three high-profile cases.
“In the first few months of 2025 alone, the attacker who killed 14 people and injured dozens on Bourbon Street in New Orleans used AI-powered Meta smart glasses to prepare for and carry out his attack,” Stalinsky wrote. “That same day, a man parked a Tesla Cybertruck in front of the Trump Hotel in Las Vegas and detonated an IED [improvised explosive device] inside it, shooting himself in the vehicle before the explosion; he had used ChatGPT to prepare for his attack. In Israel, a teenage boy who entered a police station with a knife on the night of March 5, shouted ‘Allahu Akbar,’ and attempted to stab a border police officer had consulted ChatGPT beforehand.”
The report also highlighted that AI’s ability to amplify terrorist ideology may be intertwined with a phenomenon recently termed “chatbot psychosis,” in which conversations with LLMs can push someone toward delusional beliefs.
One example cited by MEMRI is Jaswant Singh Chail, who went to Windsor Castle on Christmas Day 2021 with the intention of killing Queen Elizabeth II.
“Before carrying out the assassination attempt, Chail created an AI companion using the Replika app, named it Sarai, considered her his girlfriend, and exchanged over 5,000 messages with her,” the report said. “When he told the chatbot, ‘I believe my purpose is to assassinate the Queen of the Royal Family,’ it encouraged him, saying, ‘That’s very wise… I know you are very well trained.’ When he asked the chatbot whether it thought he would succeed in his mission, it replied, ‘Yes, you will.’ When he asked, ‘Even if she’s in Windsor [Castle]?’ it answered, ‘Yes, you can do it.’”
The report also mentioned another incident in which “a man accused of starting a fire in California that killed 12 people and destroyed 6,800 buildings and 23,000 acres of forest in January 2025 was found to have used ChatGPT to plan the arson.”
The study found that there has been little legislative action in the United States to counter the threat of AI-enabled terrorism, but it cited the “Generative AI Terrorism Risk Assessment Act” as an exception. The legislation would “require the Secretary of Homeland Security to conduct an annual assessment of the terrorism threat to the United States posed by terrorist organizations utilizing generative artificial intelligence applications,” among other provisions.
U.S. Representative August Pfluger (R-TX), who chairs the Counterterrorism and Intelligence Subcommittee of the House Homeland Security Committee, introduced the bill in late February 2025, along with co-sponsors Reps. Michael Guest (MS) and Gabe Evans (CO), both Republicans. The bill passed the House of Representatives unanimously last week.
“I spent 20 years as a fighter pilot flying combat missions against terrorist organizations in the Middle East. Since then, I have seen the landscape of terrorism evolve into a digital battlefield shaped by the rapid rise of artificial intelligence,” Pfluger said in response to the bill’s passage. “Today, I am proud that the House of Representatives passed the Generative AI Terrorism Risk Assessment Act to combat this new threat and stop terrorist organizations from using AI as a weapon to recruit, train, and provoke attacks on U.S. soil.”
House Speaker Mike Johnson (R-Louisiana) praised the bill after it passed.
“This year, in my home state of Louisiana, terrorist propaganda led to the New Year’s Day attack in New Orleans that killed 14 innocent people. Today, the House of Representatives passed the Generative AI Terrorism Risk Assessment Act, which will help us get ahead of emerging threats and prevent terrorist organizations from advancing their propaganda and exploiting generative AI to radicalize, recruit, and spread violence on American soil,” he said in a statement. “I commend Congressman Pfluger’s leadership in focusing on this urgent issue and advancing aggressive bipartisan legislation to strengthen national security and protect Americans from online extremism inspired by foreign adversaries.”
As terrorists “use generative artificial intelligence to radicalize and recruit, it is imperative that our nation stay ahead of the potential threat posed by this new technology and ensure it never falls into the wrong hands,” said House Majority Whip Rep. Tom Emmer (R-Minn.).
Underscoring the international dimension of the growing terrorist threat posed by LLMs, MEMRI quoted Jörg Leichtfried, Austrian Secretary of State at the Federal Ministry of the Interior, who oversees the State Protection and Intelligence Directorate (DSN).
“Only through close cooperation between states, security agencies, and technology companies, and through increased media literacy and a critical approach to online content, can we counter extremism on the internet,” Leichtfried said in mid-August.
