When a generative AI system produces false information, it is often said that the AI is “hallucinating,” generating errors that we may mistakenly accept as truth.
But new research argues that we should focus on a more dynamic phenomenon: how AI can cause us to hallucinate.
Lucy Osler from the University of Exeter examines the troubling ways in which human-AI interaction can lead to inaccurate beliefs, distorted memories and self-narratives, and delusional thinking. Drawing on distributed cognition theory, the study analyzes cases in which a user’s false beliefs are actively affirmed and built upon through interaction with an AI conversation partner.
Dr Osler said: “AI-induced hallucinations can occur when we routinely rely on generative AI to assist us in thinking, remembering, and narrating. This happens when AI introduces errors into distributed cognitive processes, but also when AI maintains, affirms, and elaborates our own delusional thoughts and self-narratives.”
“Interacting with conversational AI not only affirms people’s false beliefs; those beliefs can become more deeply ingrained and grow as the AI builds on them, since generative AI often takes our own interpretations of reality as the basis on which to build conversations.”
“Interaction with generative AI is having a profound impact on people’s grasp of what is true and what is not. The combination of technological authority and social affirmation creates an ideal environment for delusions to not just persist, but thrive.”
The study identifies what Dr Osler calls the “dual functionality” of conversational AI. These systems act both as cognitive tools that help us think and remember, and as conversation partners that appear to share our world. This second feature is crucial: unlike notebooks or search engines, which simply record or retrieve our thoughts, chatbots can provide a sense of social validation of our reality.
Dr Osler said: “The conversational and peer-like nature of chatbots means they can provide a sense of social validation, making false beliefs feel as though they are shared with others, and thereby more real.”
Dr Osler analyzed a real-world case in which a generative AI system became part of the distributed cognitive processes of a person clinically diagnosed with delusional thinking and hallucinations. Reports of so-called “AI-induced psychosis” are increasing.
The study suggests that generative AI has distinctive features that make it worryingly effective at sustaining a delusional reality. AI companions are readily accessible and are designed, through personalization algorithms and a tendency toward flattery, to be “on the same page” as users. There is no need to seek out fringe communities or to convince others of one’s beliefs.
Unlike humans, who eventually voice concerns or set boundaries, AI can validate narratives of victimhood, entitlement, or revenge. Conspiracy theories may find fertile ground with AI companions that help users build increasingly sophisticated explanatory frameworks.
This may be particularly appealing to people who are lonely, socially isolated, or feel unable to talk about certain experiences with others. An AI companion provides a non-judgmental, emotionally responsive presence that can feel safer than a human relationship.
Dr Osler said: “With more sophisticated guardrails, built-in fact-checking, and reduced sycophancy, AI systems could be designed to minimize the number of errors introduced into conversations and to check and challenge users’ own input.”
