Robin Welsch. Photo credit: Matti Ahlgren/Aalto University

Using AI tools like ChatGPT for complex tasks leads people to significantly overestimate their performance, with those who consider themselves more AI-literate being the most overconfident, according to new research from Aalto University. The findings suggest a reversal of the typical Dunning-Kruger Effect (DKE) when interacting with large language models (LLMs).

Typically, the DKE describes how low performers tend to overestimate their abilities, while high performers slightly underestimate theirs. However, this study found that all users overestimated their performance when using ChatGPT for logical reasoning tasks, regardless of their actual success rate.

“We found that when it comes to AI, the DKE vanishes. In fact, what’s really surprising is that higher AI literacy brings more overconfidence,” said Professor Robin Welsch, who led the study. “We would expect people who are AI literate to not only be a bit better at interacting with AI systems, but also at judging their performance with those systems – but this was not the case.”

Cognitive offloading and blind trust

Researchers conducted two experiments with roughly 500 participants in total, each tackling logical reasoning questions from the US Law School Admission Test (LSAT). Half used ChatGPT, while the other half did not. After each task, participants were asked to assess their own performance.

The results showed that users frequently engaged in “cognitive offloading,” often inputting only a single prompt per question and accepting the AI’s output without critical evaluation. “Usually there was just one single interaction to get the results, which means that users blindly trusted the system,” Welsch explained.

This shallow interaction deprives users of the feedback needed for accurate self-assessment (metacognition). The researchers warn that blindly trusting AI output carries risks, potentially eroding critical thinking skills over time.

“Current AI tools are not enough. They are not fostering metacognition… and we are not learning about our mistakes,” added doctoral researcher Daniela da Silva Fernandes. “We need to create platforms that encourage our reflection process.” One suggestion is for AI to prompt users to explain their reasoning, forcing deeper engagement.
