
A new artificial intelligence tool can scan social media to discover adverse events from consumer health products, achieving 99.7 per cent accuracy in identifying harmful side effects.

The automated machine learning system, called Waldo, was tested on its ability to scan Reddit posts for adverse events from cannabis-derived products. Researchers published findings on 30 September in the open-access journal PLOS Digital Health.

Current adverse-event reporting systems for prescription medications and medical devices rely on voluntary submissions from doctors and manufacturers to the US Food and Drug Administration. The rapid growth of consumer health products such as cannabis derivatives and dietary supplements has created a need for new detection systems.

Waldo significantly outperformed a general-purpose ChatGPT chatbot given the same dataset. In a broader analysis of 437,132 Reddit posts, the tool identified 28,832 potential harm reports. Manual validation of a random sample confirmed 86 per cent were genuine adverse events.

Lead author Karan Desai of the University of California, San Diego said: “Waldo shows that the health experiences people share online are not just noise; they’re valuable safety signals. By capturing these voices, we can surface real-world harms that are invisible to traditional reporting systems.”

John Ayers, also from UC San Diego, added: “This project highlights how digital health tools can transform post-market surveillance. By making Waldo open-source, we’re ensuring that anyone, from regulators to clinicians, can use it to protect patients.”

Second author Vijay Tiyyala noted the technical achievement: “From a technical perspective, we demonstrated that a carefully trained model like RoBERTa can outperform state-of-the-art chatbots for AE detection. Waldo’s accuracy was surprising and encouraging.”
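For readers curious what this looks like in practice, the sketch below shows how a RoBERTa-style classifier is typically applied to adverse-event detection using the Hugging Face transformers library. It is a minimal illustration only: the checkpoint name, label mapping and example post are hypothetical placeholders, not Waldo’s released code, and the base model would need fine-tuning on labelled adverse-event posts before its outputs mean anything.

```python
# Minimal sketch of a RoBERTa-based adverse-event (AE) text classifier.
# The checkpoint name and label mapping are hypothetical; Waldo's released
# weights and configuration may differ.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

MODEL_NAME = "roberta-base"  # placeholder; fine-tune on labelled AE posts first

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

def classify_post(text: str) -> float:
    """Return the model's probability that a social-media post describes an adverse event."""
    inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Assumes label index 1 corresponds to "adverse event" in this hypothetical mapping.
    return torch.softmax(logits, dim=-1)[0, 1].item()

post = "Tried the new gummies last night and woke up with a racing heart and nausea."
print(f"AE probability: {classify_post(post):.3f}")
```

Run over a corpus of posts, a classifier like this can flag candidate harm reports for the kind of manual validation the researchers describe.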

The team has made Waldo open-source for use by researchers, clinicians and regulators. The tool’s automated approach extends beyond cannabis derivatives to other consumer health products that lack regulatory oversight.
