AI health chatbots.
Photo credit: theFreesheet/Google ImageFX

As millions turn to tools like ChatGPT for medical advice, an international team is developing a definitive rulebook to protect users from ‘hallucinated’ information, data breaches, and algorithmic bias.

Researchers from the University of Birmingham are leading a global effort to build the first definitive safety guide for members of the public who use artificial intelligence health chatbots.

Announced in the journal Nature Health, “The Health Chatbot Users’ Guide” aims to fill a critical governance vacuum as millions of people increasingly rely on general-purpose AI models — such as ChatGPT, Copilot, Claude, and Gemini — to interpret symptoms and simplify complex medical jargon.

The project team, which includes experts from over 20 institutions globally, warns that individual users are currently left to distinguish between evidence-based insights and factually incorrect or “hallucinated” advice entirely on their own.

“The use of general-purpose chatbots for healthcare is no longer a hypothetical future possibility; it is a current reality,” said Dr Joseph Alderman, a National Institute for Health and Care Research (NIHR) Clinical Lecturer at the University of Birmingham and corresponding author of the paper.

“Ignoring this shift leaves the public to navigate a hazardous information landscape unaided. Our goal isn’t to discourage innovation, but to meet the public where they are. We are building this guide to ensure users have the tools and understanding they need to use these powerful tools safely.”

The risks of ‘Dr Bot’

The initiative highlights four substantial risks associated with relying on AI for health information:

  • Medical inaccuracy: AI models can provide highly plausible but entirely incorrect medical guidance.
  • The echo chamber effect: Models optimised for agreeability may simply mirror a user’s existing (and potentially incorrect) beliefs rather than providing a necessary challenge.
  • Algorithmic bias: AI has the potential to reinforce social biases, which can exacerbate existing health inequalities.
  • Data privacy: Sharing sensitive personal health information with chatbots poses significant security and confidentiality threats.

Dr Charlotte Blease, a health AI researcher at Uppsala University and Harvard Medical School, and senior researcher on the project, warned against navigating these technologies “without a map”.

“Health chatbots have become the world’s most accessible first opinion — often speaking to patients before any doctor does,” said Dr Blease, author of the book Dr Bot. “Our responsibility is to ensure that first conversation informs rather than misleads, and empowers patients.”

Public co-design

To ensure the guide offers a pragmatic, harm-reduction approach, it is being co-designed and co-delivered alongside public partners. A public steering group and three public co-investigators have been empowered to set the direction of the programme, ensuring the final advice is neutral and accessible to all age groups and literacy levels.

The project team is now inviting members of the public to contribute their perspectives to help shape the final development of the guide.
