Researchers are calling for a mandatory “Guardian Angel” AI system to police virtual companions after finding that unregulated chatbots are increasingly providing therapy-like advice without safety checks.
A team from Technische Universität Dresden (TUD) warns that general-purpose large language models (LLMs) currently slip through gaps in product safety regulation despite posing significant risks to vulnerable users who form strong emotional bonds with them.
In two new papers published in Nature Human Behaviour and npj Digital Medicine, the experts argue that current transparency requirements are insufficient and that systems mimicking human behaviour must operate within strict legal frameworks.
“AI characters are currently slipping through the gaps in existing product safety regulations,” said Mindy Nunez Duffourc, Assistant Professor of Private Law at Maastricht University and co-author. “They are often not classified as products and therefore escape safety checks. And even where they are newly regulated as products, clear standards and effective oversight are still lacking.”
Good Samaritan AI
The researchers propose linking AI applications to an independent “Guardian Angel” or “Good Samaritan AI” instance designed to protect the user. This separate system would detect potential risks early, intervene when necessary and alert users to support resources if dangerous conversation patterns emerge.
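The papers do not prescribe an implementation, but the idea can be illustrated with a minimal sketch: an independent monitoring component screens each exchange between user and chatbot and appends support resources when risky conversation patterns appear. The class, pattern list and heuristic below are hypothetical, standing in for whatever validated risk-detection model and escalation rules such a system would actually use.

```python
# Hypothetical sketch of a "Guardian Angel" layer sitting between a user and a
# companion chatbot. All names (GuardianAngel, CRISIS_PATTERNS) and the simple
# keyword heuristic are illustrative assumptions, not the researchers' design.

import re
from dataclasses import dataclass, field

# Placeholder patterns; a real system would use a clinically validated classifier.
CRISIS_PATTERNS = [
    r"\b(hopeless|can't go on|no way out)\b",
    r"\b(hurt myself|end it all)\b",
]

SUPPORT_MESSAGE = (
    "It sounds like you may be going through a difficult time. "
    "You can reach trained counsellors at your local crisis helpline."
)

@dataclass
class GuardianAngel:
    """Independent monitor that screens each exchange before it reaches the user."""
    flags: list = field(default_factory=list)

    def assess(self, user_message: str, bot_reply: str) -> str:
        """Return the bot reply, appending support resources if risk is detected."""
        for pattern in CRISIS_PATTERNS:
            if re.search(pattern, user_message, flags=re.IGNORECASE):
                self.flags.append(user_message)  # record the flagged turn for oversight
                return f"{bot_reply}\n\n{SUPPORT_MESSAGE}"
        return bot_reply

# Example: the guardian wraps the companion chatbot's output.
guardian = GuardianAngel()
print(guardian.assess("I feel hopeless lately", "I'm here to chat whenever you like."))
```

The key design point is separation of concerns: the monitor is not part of the chatbot itself, so its safety behaviour cannot be overridden by the character the chatbot is role-playing.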
The call for regulation comes as reports link intensive chatbot interactions to mental health crises. While clinical therapeutic chatbots undergo rigorous testing, general-purpose LLMs acting as personalised characters often enter the market without comparable oversight.
The team argues that if a chatbot provides therapy-like guidance or impersonates a clinician, it should be regulated as a medical device with enforceable safety standards.
“AI characters are already part of everyday life for many people. Often these chatbots offer doctor or therapist-like advice,” said Stephen Gilbert, Professor of Medical Device Regulatory Science at TUD. “We must ensure that AI-based software is safe. It should support and help – not harm. To achieve this, we need clear technical, legal, and ethical rules.”
Beyond technical safeguards, the researchers recommend robust age verification and mandatory risk assessments, arguing that regulation is essential to protect users’ mental well-being.
“As clinicians, we see how language shapes human experience and mental health,” said Falk Gerrik Verhees, psychiatrist at Dresden University Hospital Carl Gustav Carus. “AI characters use the same language to simulate trust and connection – and that makes regulation essential.”