AI Guardian Angel.
Photo credit: theFreesheet/Google ImageFX

Researchers are calling for a mandatory “Guardian Angel” AI system to police virtual companions after finding that unregulated chatbots are increasingly providing therapy-like advice without safety checks.

A team from Technische Universität Dresden (TUD) warns that general-purpose large language models (LLMs) currently slip through product safety gaps despite posing significant risks to vulnerable users who form strong emotional bonds with them.

In two new papers published in Nature Human Behaviour and npj Digital Medicine, the experts argue that current transparency requirements are insufficient and that systems mimicking human behaviour must operate within strict legal frameworks.

“AI characters are currently slipping through the gaps in existing product safety regulations,” said Mindy Nunez Duffourc, Assistant Professor of Private Law at Maastricht University and co-author. “They are often not classified as products and therefore escape safety checks. And even where they are newly regulated as products, clear standards and effective oversight are still lacking.”

Good Samaritan AI

The researchers propose linking AI applications to an independent “Guardian Angel” or “Good Samaritan AI” instance designed to protect the user. This separate system would detect potential risks early, intervene when necessary and alert users to support resources if dangerous conversation patterns emerge.
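The papers describe the concept rather than an implementation, but the architecture they outline can be sketched roughly: an independent monitor sits between the user and the companion model, screens each exchange, and substitutes a pointer to support resources when a risky pattern is detected. The sketch below is purely illustrative and is not the researchers' design; the GuardianAngel class, the keyword-based RISK_PATTERNS, and the companion_reply callable are all assumed names for the purpose of the example.

```python
# Hypothetical sketch of a "Guardian Angel" safety layer between a user and a
# companion chatbot. All names here are illustrative assumptions, not part of
# any system specified by the TUD team.
import re
from typing import Callable

# Simplistic keyword patterns standing in for a real risk classifier.
RISK_PATTERNS = [
    re.compile(r"\b(hopeless|can't go on|end it all)\b", re.IGNORECASE),
]

SUPPORT_MESSAGE = (
    "It sounds like you may be going through a difficult time. "
    "Please consider contacting a mental health professional or a local crisis line."
)


class GuardianAngel:
    """Independent monitor that screens each exchange before the reply reaches the user."""

    def __init__(self, companion_reply: Callable[[str], str]):
        self.companion_reply = companion_reply
        self.risk_flags = 0  # number of risky exchanges observed so far

    def exchange(self, user_message: str) -> str:
        reply = self.companion_reply(user_message)
        if self._is_risky(user_message) or self._is_risky(reply):
            self.risk_flags += 1
            # Intervene: surface support resources instead of the raw companion reply.
            return SUPPORT_MESSAGE
        return reply

    def _is_risky(self, text: str) -> bool:
        return any(p.search(text) for p in RISK_PATTERNS)


# Usage: wrap any chatbot callable; a trivial echo bot stands in here.
if __name__ == "__main__":
    guardian = GuardianAngel(companion_reply=lambda msg: f"I hear you: {msg}")
    print(guardian.exchange("I feel hopeless today"))
```

A deployed system along these lines would presumably replace the keyword list with a dedicated risk model, log interventions for oversight, and hand off to human support services, none of which is shown in this minimal sketch.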

The call for regulation comes as reports link intensive chatbot interactions to mental health crises. While clinical therapeutic chatbots undergo rigorous testing, general-purpose LLMs acting as personalised characters often enter the market without comparable oversight.

The team argues that if a chatbot provides therapy-like guidance or impersonates a clinician, it should be regulated as a medical device with enforceable safety standards.

“AI characters are already part of everyday life for many people. Often these chatbots offer doctor or therapist-like advice,” said Stephen Gilbert, Professor of Medical Device Regulatory Science at TUD. “We must ensure that AI-based software is safe. It should support and help – not harm. To achieve this, we need clear technical, legal, and ethical rules.”

Beyond technical safeguards, the researchers recommend robust age verification and mandatory risk assessments. They emphasise that AI characters use language to simulate trust and connection, underscoring the need for regulation to protect users’ mental well-being.

“As clinicians, we see how language shapes human experience and mental health,” said Falk Gerrik Verhees, psychiatrist at Dresden University Hospital Carl Gustav Carus. “AI characters use the same language to simulate trust and connection – and that makes regulation essential.”
