
AI-powered research tools are referencing material from retracted scientific studies when answering questions, raising concerns about the reliability of automated systems used for scientific inquiry.

Research conducted by University of Tennessee medical researcher Weikuan Gu examined how OpenAI’s ChatGPT responded to questions based on 21 retracted medical imaging papers, reports MIT Technology Review. The chatbot referenced retracted studies in five cases, whilst advising caution in only three instances.

“The chatbot is using a real paper, real material, to tell you something,” explained Gu. “But if people only look at the content of the answer and do not click through to the paper and see that it’s been retracted, that’s really a problem.”

Additional testing by MIT Technology Review found widespread citation of discredited research across specialised AI research tools. Elicit referenced five retracted papers, whilst Ai2 ScholarQA cited 17, Perplexity referenced 11, and Consensus cited 18 papers from the same sample, all without noting retraction status.

The findings worry researchers because AI tools increasingly serve both members of the public seeking medical advice and scientists reviewing existing literature. The US National Science Foundation invested $75 million in August toward developing AI models for scientific research.

Several companies have begun addressing the issue. Consensus co-founder Christian Salem acknowledged that “until recently, we didn’t have great retraction data in our search engine.” The platform now incorporates retraction information from multiple sources, including Retraction Watch; in subsequent testing, the number of retracted papers it cited fell from 18 to five.
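None of these companies have published their filtering pipelines, but the core idea is simple to sketch. The Python snippet below is a minimal, hypothetical illustration only: it loads a local CSV export of retracted DOIs (Retraction Watch makes such data available; the file name, column name, and example DOI used here are assumptions, not the real schema) and flags any cited DOI that appears in the list.

```python
import csv


def load_retracted_dois(path: str) -> set[str]:
    """Load DOIs of retracted papers from a local CSV export.

    Assumes one row per retraction notice, with the original paper's
    DOI in a column named "OriginalPaperDOI" -- column names vary by
    dataset, so adjust to match the export you actually use.
    """
    retracted = set()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            doi = row.get("OriginalPaperDOI", "").strip().lower()
            if doi:
                retracted.add(doi)
    return retracted


def flag_citations(citation_dois: list[str], retracted: set[str]) -> list[tuple[str, bool]]:
    """Pair each cited DOI with a flag indicating retraction status."""
    return [(doi, doi.strip().lower() in retracted) for doi in citation_dois]


if __name__ == "__main__":
    # "retraction_watch.csv" is a hypothetical local file name.
    retracted = load_retracted_dois("retraction_watch.csv")
    for doi, is_retracted in flag_citations(["10.1000/example.123"], retracted):
        status = "RETRACTED -- advise caution" if is_retracted else "no retraction found"
        print(f"{doi}: {status}")
```

In practice a production system would query a regularly updated source rather than a static file, since papers are retracted continuously, but the lookup step is the same.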

However, creating comprehensive retraction databases presents significant challenges. Publishers employ inconsistent labelling systems, using terms including “correction,” “expression of concern,” and “retracted” for various issues. Research papers distributed across preprint servers and repositories create additional complications.
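One consequence of that inconsistency is that any aggregator must normalise publisher-supplied status labels before acting on them. The sketch below is purely illustrative, assuming a small hypothetical mapping; real publisher feeds use far more variants than the handful shown here.

```python
# Hypothetical normalisation of publisher status labels into a small
# canonical vocabulary. The labels listed are illustrative examples
# drawn from the terms publishers commonly use.
CANONICAL = {
    "retracted": "retraction",
    "retraction": "retraction",
    "withdrawn": "retraction",
    "expression of concern": "concern",
    "correction": "correction",
    "erratum": "correction",
    "corrigendum": "correction",
}


def normalise_status(raw_label: str) -> str:
    """Map a publisher's free-text status label to a canonical category.

    Unknown labels fall through to "unknown" so they can be reviewed
    manually rather than silently treated as clean records.
    """
    return CANONICAL.get(raw_label.strip().lower(), "unknown")


assert normalise_status("  Retracted ") == "retraction"
assert normalise_status("Erratum") == "correction"
assert normalise_status("version of record") == "unknown"
```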

“If a tool is facing the general public, then using retraction as a kind of quality indicator is very important,” said Yuanxi Fu, information science researcher at the University of Illinois Urbana-Champaign.

Aaron Tay, librarian at Singapore Management University, cautioned that users must remain vigilant: “We are at the very, very early stages, and essentially you have to be skeptical.”
