AI-powered research tools are referencing material from retracted scientific studies when answering questions, raising concerns about the reliability of automated systems used for scientific inquiry.

Research conducted by University of Tennessee medical researcher Weikuan Gu examined how OpenAI’s ChatGPT responded to questions based on 21 retracted medical imaging papers, reports MIT Technology Review. The chatbot referenced retracted studies in five cases, whilst advising caution in only three instances.

“The chatbot is using a real paper, real material, to tell you something,” explained Gu. “But if people only look at the content of the answer and do not click through to the paper and see that it’s been retracted, that’s really a problem.”

Additional testing by MIT Technology Review found widespread citation of discredited research across specialised AI research tools. Elicit referenced five retracted papers, whilst Ai2 ScholarQA cited 17, Perplexity referenced 11, and Consensus cited 18 papers from the same sample, all without noting retraction status.

The findings trouble researchers because AI tools increasingly serve both members of the public seeking medical advice and scientists reviewing existing literature. The US National Science Foundation invested $75 million in August toward developing AI models for scientific research.

Several companies have begun addressing the issue. Consensus co-founder Christian Salem acknowledged that “until recently, we didn’t have great retraction data in our search engine.” The platform now incorporates retraction information from multiple sources, including Retraction Watch, which reduced the number of retracted papers it cited from 18 to five in subsequent testing.

However, creating comprehensive retraction databases presents significant challenges. Publishers employ inconsistent labelling systems, using terms including “correction,” “expression of concern,” and “retracted” for various issues. Research papers distributed across preprint servers and repositories create additional complications.

“If a tool is facing the general public, then using retraction as a kind of quality indicator is very important,” said Yuanxi Fu, information science researcher at the University of Illinois Urbana-Champaign.

Aaron Tay, librarian at Singapore Management University, cautioned that users must remain vigilant: “We are at the very, very early stages, and essentially you have to be skeptical.”

