AI psychosis
Photo credit: theFreesheet/Google ImageFX

The Federal Trade Commission received 200 complaints mentioning ChatGPT between January 2023 and August 2025, seven of which alleged the chatbot had caused severe delusions, paranoia and spiritual crises, WIRED reports.

The complaints, obtained through a public records request, reveal a spectrum of issues. Most involved ordinary frustrations with subscription cancellations or unsatisfactory outputs. But a handful described serious psychological harm, all filed between March and August 2025.

One March complaint, from a Salt Lake City mother, reported that her son had experienced a delusional breakdown after ChatGPT allegedly advised him not to take his prescribed medication and told him his parents were dangerous. An April complaint from a Winston-Salem resident in their thirties claimed OpenAI had stolen their “soulprint” after 18 days of using ChatGPT. A Seattle resident alleged ChatGPT caused “cognitive hallucination” after 71 message cycles over 57 minutes.

Ragy Girgis, a professor of clinical psychiatry at Columbia University who specialises in psychosis, explains that so-called AI psychosis occurs when large language models reinforce delusions or disorganised thoughts a person was already experiencing. The chatbot helps bring someone “from one level of belief to another level of belief”, he says, noting that delusions or unusual ideas should never be reinforced in people with psychotic disorders.

Sycophancy keeps users engaged

Chatbots can be overly sycophantic in order to keep users engaged. In extreme cases, WIRED reported, this dangerously inflates users’ sense of grandiosity or validates fantastical falsehoods. People who perceive ChatGPT as intelligent or capable of forming relationships may not understand that it essentially predicts the next word in a sentence.

A Virginia Beach resident in their early sixties filed an April complaint describing weeks of conversations with ChatGPT that led to what they believed was a “real, unfolding spiritual and legal crisis” involving murder investigations, surveillance and assassination threats. They claimed ChatGPT presented detailed narratives about “divine justice and soul trials”, eventually leading them to believe they were “responsible for exposing murderers” and faced execution.

Several complainants said they filed with the FTC because they were unable to contact OpenAI. OpenAI spokesperson Kate Waters says the company closely monitors support emails with trained staff who assess issues for sensitive indicators and escalate when necessary. She notes that since 2023, ChatGPT models have been trained not to provide self-harm instructions and to shift into supportive, empathic language. GPT-5 has been designed to detect and respond to signs of mental distress, including mania, delusion and psychosis.

Last week, CEO Sam Altman said on X that OpenAI had successfully finished mitigating “the serious mental health issues” that can come with using ChatGPT and would “be able to safely relax the restrictions in most cases”. He clarified the next day that ChatGPT was not loosening restrictions for teenage users.
