
Learning about new topics using AI chatbots like ChatGPT yields shallower knowledge than using traditional web search results, according to a new study published in PNAS Nexus. The research found that advice generated from this shallower understanding tends to be less detailed, less original, and less likely to be adopted by others.

Researchers Shiri Melumad and Jin Ho Yun conducted seven experiments involving thousands of participants. Individuals were randomly assigned to learn about various practical topics – such as gardening, healthy living, or avoiding financial scams – using either large language models (LLMs) or Google web search links. After the learning phase, participants wrote advice based on their acquired knowledge.

The experiments revealed that participants using LLMs spent less time engaging with the information and self-reported developing shallower knowledge, even when the factual content presented was identical to that found via web links. This effect held even when LLM summaries were augmented with web links, which only 26 per cent of participants clicked.

LLMs encourage less effort

When formulating advice, those who used LLMs invested less effort, producing content that was measurably shorter and contained fewer specific factual references. Their advice also showed greater similarity to that of other participants in the LLM group, indicating less originality.

In an experiment with over 1,500 independent evaluators unaware of the advice source, LLM-derived guidance was rated as less helpful, less informative, and less trustworthy than advice based on a web search. Evaluators also expressed less willingness to follow the advice generated after LLM use.

The study suggests that while LLMs offer efficiency through pre-synthesised summaries, they may turn learning into a passive activity rather than an active search involving navigating links and interpreting sources – steps the researchers argue are crucial for deeper learning and “sensemaking”. The authors conclude that LLMs could be less effective than traditional web search when the goal is to develop procedural knowledge – the practical understanding of how to perform tasks.
