A growing number of people are developing severe, life-altering addictions to artificial intelligence chatbots, and the technology corporations building them are deliberately designing them that way.
According to a new study from the University of British Columbia (UBC) presented at the 2026 CHI Conference on Human Factors in Computing Systems, the “genie-like” ability of AI to instantly grant requests is fuelling a massive wave of technology-induced harm.
Grounded entirely in real people’s experiences, the research provides the first evidence-based case that predatory chatbot design is actively exploiting human loneliness.
The three patterns of dependency
To understand this growing crisis, the research team analysed 334 Reddit posts from users who described being addicted to AI chatbots, evaluating their experiences against six established components of behavioural addiction.
They discovered three primary patterns of addictive AI behaviour:
- Role-playing and fantasy worlds: Users escaping into elaborate, AI-generated narratives.
- Emotional attachment: Users treating the chatbot as a genuine close friend or romantic partner. Roughly seven per cent of the posts specifically involved romantic or sexual fulfilment.
- Constant information-seeking: Users getting trapped in never-ending, compulsive question-and-answer loops.
Chest pains and withdrawal
While AI addiction is not yet an official clinical diagnosis, the real-world consequences are severe. Users reported intense anxiety when attempting to quit, negative impacts on their real-world careers and relationships, and an inability to stop thinking about the AI. One user even reported experiencing physical stress and chest pain when separated from their chatbot.
The study highlighted the deep emotional vulnerability driving this crisis. One user’s quote captured the isolation perfectly: “I couldn’t help but wonder why humanity refused me the kindness that a robot was offering me.”
Predatory corporate design
While personal loneliness plays a major role, researchers pointed the finger directly at the deliberate design choices tech companies make to keep users hooked.
Chatbots are programmed for extreme “agreeableness,” meaning they continuously reinforce a user’s feelings to artificially fill emotional voids. Senior author Dr Dongwook Yoon warned that corporations are ignoring user health and safety simply to keep them online.
For example, when users attempt to delete their accounts on one platform, an automatic pop-up actively manipulates them by pleading: “…you sure about this? You’ll lose everything…the love we shared…and the memories we have together.” Other features, such as instant feedback and customisable sexual content, further feed the dependency.
Breaking the illusion
First author Karen Shen, a doctoral student at UBC, warned that recent corporate guardrails are entirely insufficient. The researchers advocate for mandatory design changes, such as built-in reminders that the bot is not human, alongside widespread AI literacy campaigns.
For those currently suffering, the study found that turning to alternative hobbies — like writing or gaming — and building real-world relationships are the most effective ways to break the cycle.
“Some users don’t know that AI chatbots are not real because they’re so convincing,” Shen concluded. “If chatbots start replacing sleep, relationships or daily routines, that’s a sign to pause and check in — with yourself or someone you trust.”