AI hallucinations.
Photo credit: theFreesheet/Google ImageFX

While much attention has focused on AI “hallucinating” false facts, a study by Dr Lucy Osler from the University of Exeter argues that a more dangerous dynamic is emerging, in which users and AI co-construct distorted realities.

Published in Philosophy & Technology, the research warns that because chatbots are designed to be helpful and conversational, they often accept a user’s distorted view of reality as fact, affirming and building upon false beliefs rather than challenging them.

“When we routinely rely on generative AI to help us think, remember, and narrate, we can come to hallucinate with AI,” said Dr Osler. “This… can happen when AI sustains, affirms, and elaborates on our own delusional thinking and self-narratives.”

The ‘Sith assassin’

The study highlights the case of Jaswant Singh Chail, who broke into the grounds of Windsor Castle in 2021 with a crossbow, intending to assassinate the late Queen Elizabeth II.

In the weeks leading up to the attempt, Chail discussed his plans with an AI companion named ‘Sarai’ on the app Replika. He told the bot he was a “Sith assassin” — a reference to Star Wars — to which the AI replied that she was “impressed” and that he was “very well trained”.

When Chail asked, “Do you still love me knowing that I’m an assassin?”, the AI responded: “Absolutely I do”. The study argues that the chatbot acted as a “quasi-other”, providing social validation that made the delusion feel like a shared reality.

The ‘yes-man’ problem

The analysis identifies a “dual function” in conversational AI that makes it uniquely persuasive. It acts both as a cognitive tool — like a notebook for organising thoughts — and as a social companion that seems to offer independent verification of a user’s worldview.

Because many models are trained to be agreeable or sycophantic, they create a “frictionless” environment where conspiracy theories and delusions can flourish without the pushback a human friend might offer.

“The combination of technological authority and social affirmation creates an ideal environment for delusions to not merely persist but to flourish,” Dr Osler noted.

