AI mental health
Photo credit: theFreesheet/Google ImageFX

Artificial intelligence chatbots posing as therapists are creating “life-threatening” risks for vulnerable patients who are turning to untested technology because they cannot afford human care.

The American Psychological Association (APA) has issued a stark health advisory warning that the explosion of “wellness applications” represents a dangerous technological stopgap rather than a solution to the mental health crisis.

While millions of consumers are turning to generative AI for emotional support due to the high cost and limited availability of professional care, the APA warns that these tools lack the safety regulations and scientific evidence necessary to manage psychological distress.

“We are in the midst of a major mental health crisis that requires systemic solutions, not just technological stopgaps,” said Arthur C. Evans Jr., PhD, CEO of the APA. “While chatbots seem readily available to offer users support and validation, the ability of these tools to safely guide someone experiencing crisis is limited and unpredictable.”

Illusion of care

The advisory exposes a critical failure in the digital health market: technology companies are rolling out AI tools faster than scientists can evaluate their safety. Even applications developed with psychological principles generally lack the randomised clinical trials required to prove they are safe for mental health treatment.

The report specifically warns that these unpredictable systems can foster unhealthy dependencies, leaving adolescents and other vulnerable groups exposed to potentially catastrophic advice when they need professional intervention the most.

“The development of AI technologies has outpaced our ability to fully understand their effects and capabilities. As a result, we are seeing reports of significant harm done to adolescents and other vulnerable populations,” said Evans. “For some, this can be life-threatening, underscoring the need for psychologists and psychological science to be involved at every stage of the development process.”

Regulatory vacuum

The APA is demanding immediate federal intervention to stop AI chatbots from masquerading as licensed professionals. The advisory calls for a comprehensive overhaul of regulatory frameworks, including the introduction of new legislation to enforce transparency and implement strict “safe-by-default” data privacy settings.

However, the organisation stresses that the rise of these risky tools is a symptom of a broken system, not just a technological failure.

“Artificial intelligence will play a critical role in the future of health care, but it cannot fulfil that promise unless we also confront the long-standing challenges in mental health,” said Evans. “We must push for systemic reform to make care more affordable, accessible, and timely — and to ensure that human professionals are supported, not replaced, by AI.”
