AI accelerates the path to radicalization

How are ordinary people drawn into extremist circles? What role can artificial intelligence play in the process?

These questions are addressed by new research that, for the first time, combines psychological theories of radicalization with knowledge of modern AI technologies such as recommendation algorithms, generative AI, and botnets.

“We have developed a comprehensive model that shows how digital systems can tap into or amplify people’s social and psychological needs in ways that we do not yet fully understand,” explains Milan Obaidi, associate professor at the Department of Psychology at the University of Copenhagen.

Anger grows step by step

Radicalization rarely begins with a sudden rupture. Instead, individuals gradually move through a process in which digital technology and psychological vulnerability interact.

The study divides this process into four main phases.

  1. Exposure – Algorithms present polarizing or extreme content to users, often without the user actively searching for it.
  2. Reinforcement – Repeated exposure and algorithmic personalization create echo chambers and entrench new attitudes.
  3. Group integration – Online communities and even AI-generated “peers” can create strong identity bonds reminiscent of group membership.
  4. Violence – In rare cases, this development can lead to violent extremism.

According to the researchers, AI systems can act as accelerators: they can identify psychologically vulnerable individuals, tailor content to them, and create synthetic communities that mimic human interaction.
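The exposure and reinforcement phases described above boil down to a feedback loop: the recommender shows content, the user clicks, and the system learns to show more of what was clicked. A deliberately simplified toy simulation (an illustration of the general mechanism, not the study's actual model; all parameter values here are invented) shows how such a loop can drift a feed toward ever more extreme content:

```python
import random

# Toy echo-chamber sketch. Items have an "intensity" from 0 (neutral) to
# 1 (extreme). A user who engages slightly more with intense content
# gradually pulls an engagement-maximizing recommender toward extremity,
# while each click also nudges the user's own susceptibility upward.
# All constants below are arbitrary, chosen only to make the drift visible.

random.seed(42)

def engages(user_bias, intensity):
    # Click probability grows with item intensity, scaled by the user's
    # current susceptibility (bias).
    return random.random() < 0.2 + 0.6 * user_bias * intensity

def simulate(rounds=500):
    user_bias = 0.3       # initial susceptibility
    feed_intensity = 0.1  # recommender's current best guess
    for _ in range(rounds):
        # Recommender explores around its current estimate.
        item = min(1.0, max(0.0, feed_intensity + random.uniform(-0.2, 0.2)))
        if engages(user_bias, item):
            # Exploit the click; the user's attitude shifts a little too.
            feed_intensity = min(1.0, feed_intensity + 0.05 * (item - feed_intensity) + 0.01)
            user_bias = min(1.0, user_bias + 0.005)
    return feed_intensity, user_bias

final_feed, final_bias = simulate()
print(f"feed intensity drifted from 0.10 to {final_feed:.2f}; "
      f"user bias from 0.30 to {final_bias:.2f}")
```

Neither party ever asks for extreme content; the drift emerges purely from rewarding engagement, which is the "accelerator" dynamic the researchers describe.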

“We have an environment where not only are users exposed to extreme content, but that content is reflected back to them by algorithms in ways that amplify their sense of meaning, anger, and injustice,” Milan Obaidi said, adding:

“The combination of technology’s scalability and people’s psychological needs makes this development particularly alarming.”

Generative AI brings entirely new risks

While recommendation algorithms primarily control what content is shown to users, generative models such as large language models add a new layer: they can produce radicalizing content themselves.

AI can:

  • Create vast amounts of personalized propaganda.
  • Simulate a community through a swarm of bots.
  • Act as an “AI companion” that reinforces the user’s extreme beliefs.
  • Create highly convincing deepfakes and manipulated materials.

“This development may make it difficult to distinguish between human and non-human influences, thereby potentially amplifying radicalization processes that were previously limited by human labor,” Milan Obaidi emphasizes.

Psychological vulnerability plays an important role

The study highlights that not all users are equally vulnerable. AI-driven dynamics particularly affect people who already experience social isolation, identity anxiety, injustice, or marginalization, or who crave clarity, order, and strong group membership.

The researchers behind the study

  • Jonas R. Kunst, University of Oslo
  • Milan Obaidi, University of Copenhagen
  • Anton Gollwitzer, BI Norwegian Business School, Max Planck Institute
  • Petter B. Brandtzæg, University of Oslo
  • Yannick Heinrichs, University of Oslo
  • Neha Saini, University of Oslo
  • Daniel T. Schrader, SINTEF Digital

Because AI systems are designed to maximize engagement, they can inadvertently exploit precisely these vulnerabilities, even without ideological intent.

“It is important to emphasize that AI will not suddenly cause radicalization. However, this technology may amplify known psychological mechanisms and make it easier for extreme ideas to gain a foothold among people who are already at risk,” says Milan Obaidi.

The study, “Intelligent Systems, Fragile Minds: A Framework for Radicalization to Violence in the Age of AI,” was published in the journal Personality and Social Psychology Review.
