Teens and chatbots

Medical experts warn that a generation is learning to form emotional bonds with artificial intelligence, after data reported in The BMJ revealed that one in three teenagers would now choose an AI companion over a human for serious conversations.

Writing in the journal’s Christmas issue, researchers Susan Shelmerdine and Matthew Nour argue that the increasing use of systems such as ChatGPT, Claude, and Copilot as “confidants of choice” poses significant, unmeasured risks to public mental health.

“We might be witnessing a generation learning to form emotional bonds with entities that lack capacities for human-like empathy, care, and relational attunement,” the authors warned.

The shift towards digital companionship comes against a backdrop of widespread social isolation. In 2023, the US Surgeon General declared a loneliness epidemic, classifying it as a public health concern on par with smoking and obesity.

Chronic loneliness in the UK

In the UK, the situation is similarly acute. Nearly half of adults — approximately 25.9 million people — report feeling lonely at least occasionally, with one in 10 experiencing “chronic loneliness”, defined as feeling lonely often or always. Young people aged 16 to 24 are among the most affected demographics.

As a result, millions are seeking connection elsewhere. ChatGPT alone now claims around 810 million weekly active users worldwide, with reports indicating that many users turn to the platform specifically for therapy and companionship.

The report highlights concerning trends among younger users. Citing recent research, the authors noted that a third of teenagers already use AI companions for social interaction.

More strikingly, one in 10 teenagers reported finding conversations with AI more satisfying than those with humans, whilst one in three said they would choose an AI companion over a human for serious discussions.

“Problematic chatbot use”

In response to these shifting behaviours, the authors propose that “problematic chatbot use” should now be considered a potential environmental risk factor when assessing patients with mental state disturbances.

They advise clinicians to conduct a “gentle enquiry” about chatbot use, particularly during holiday periods, when vulnerable people are at greatest risk of isolation. Where warranted, doctors are urged to assess for “compulsive use patterns, dependency, and emotional attachment”.

Whilst acknowledging that AI tools may offer benefits by improving accessibility to support, the authors argue that current safeguards are insufficient.

They have called for urgent empirical studies to characterise the risks of human-chatbot interactions, and for new regulatory frameworks that “prioritise long-term wellbeing over superficial and myopic engagement metrics”.

“Evidence-based strategies for reducing social isolation and loneliness are paramount,” they concluded.

