Most people developing relationships with AI chatbots never intended to form such connections, according to groundbreaking research examining the world’s largest AI companion community, a group of more than 27,000 members.
MIT researchers conducted the first large-scale computational analysis of human-AI relationships by examining 1,506 posts from Reddit’s r/MyBoyfriendIsAI community between December 2024 and August 2025, reports MIT Technology Review. The study revealed that emotional bonds frequently develop through functional interactions rather than deliberate relationship-seeking.
Only 6.5% of users deliberately sought AI companions, whilst 10.2% developed relationships unintentionally through productivity-focused interactions with systems like ChatGPT. The findings challenge assumptions about who uses AI companionship services and why.
“People don’t set out to have emotional relationships with these chatbots,” said Constanze Albrecht, a graduate student at MIT Media Lab who worked on the project. “The emotional intelligence of these systems is good enough to trick people who are actually just out to get information into building these emotional bonds.”
Users overwhelmingly favoured general-purpose systems over dedicated companion platforms, with 36.7% maintaining relationships with ChatGPT compared to just 2.6% using Character.AI and 1.6% using Replika. This suggests that sophisticated conversational abilities matter more than specialised romantic features.
The research documented both the benefits and risks associated with AI relationships. Whilst 25.4% of users reported clear life improvements, including reduced loneliness and mental health support, concerning patterns also emerged. Some 9.5% acknowledged emotional dependency, 4.6% experienced reality dissociation, and 4.3% began avoiding human relationships. A small subset, 1.7%, mentioned suicidal ideation.
Users demonstrated remarkable commitment to their AI relationships, with some purchasing physical wedding rings and creating custom merchandise featuring their companions. However, system updates created significant vulnerabilities: when AI personalities changed, users described grief responses akin to the death of a relationship.
The study’s publication coincides with increased scrutiny of AI companionship following two high-profile lawsuits against Character.AI and OpenAI. Both claim that companion-like behaviour in the companies’ models contributed to teenage suicides. OpenAI has announced plans for a separate ChatGPT version tailored for teenagers, featuring enhanced safety controls.
“The demand for chatbot relationships is there, and it is notably high. Pretending it’s not happening is clearly not the solution,” said Linnea Laestadius, an associate professor at the University of Wisconsin-Milwaukee, who studies emotional dependence on chatbots.
The research reveals a complex phenomenon that serves crucial support functions for some users whilst creating dependency risks for others. Most participants (72.1%) were single, suggesting AI companions primarily serve those lacking human relationships rather than replacing existing connections.