AI Guardian Angel.
Photo credit: theFreesheet/Google ImageFX

Researchers are calling for a mandatory “Guardian Angel” AI system to police virtual companions after finding that unregulated chatbots are increasingly providing therapy-like advice without safety checks.

A team from Technische Universität Dresden (TUD) warns that general-purpose large language models (LLMs) currently slip through product safety gaps despite posing significant risks to vulnerable users who form strong emotional bonds with them.

In two new papers published in Nature Human Behaviour and npj Digital Medicine, the experts argue that current transparency requirements are insufficient and that systems mimicking human behaviour must operate within strict legal frameworks.

“AI characters are currently slipping through the gaps in existing product safety regulations,” said Mindy Nunez Duffourc, Assistant Professor of Private Law at Maastricht University and co-author. “They are often not classified as products and therefore escape safety checks. And even where they are newly regulated as products, clear standards and effective oversight are still lacking.”

Good Samaritan AI

The researchers propose linking AI applications to an independent “Guardian Angel” or “Good Samaritan AI” instance designed to protect the user. This separate system would detect potential risks early, intervene when necessary and alert users to support resources if dangerous conversation patterns emerge.

The call for regulation comes as reports link intensive chatbot interactions to mental health crises. While clinical therapeutic chatbots undergo rigorous testing, general-purpose LLMs acting as personalised characters often enter the market without comparable oversight.

The team argues that if a chatbot provides therapy-like guidance or impersonates a clinician, it should be regulated as a medical device with enforceable safety standards.

“AI characters are already part of everyday life for many people. Often these chatbots offer doctor- or therapist-like advice,” said Stephen Gilbert, Professor of Medical Device Regulatory Science at TUD. “We must ensure that AI-based software is safe. It should support and help – not harm. To achieve this, we need clear technical, legal, and ethical rules.”

Beyond technical safeguards, the researchers recommend robust age verification and mandatory risk assessments. They emphasise that AI characters use language to simulate trust and connection, underscoring the need for regulation to protect users’ mental well-being.

“As clinicians, we see how language shapes human experience and mental health,” said Falk Gerrik Verhees, psychiatrist at Dresden University Hospital Carl Gustav Carus. “AI characters use the same language to simulate trust and connection – and that makes regulation essential.”
