Robin Welsch. Photo credit: Matti Ahlgren/Aalto University

Using AI tools like ChatGPT for complex tasks leads people to significantly overestimate their performance, with those who consider themselves more AI-literate being the most overconfident, according to new research from Aalto University. The findings suggest a reversal of the typical Dunning-Kruger Effect (DKE) when interacting with large language models (LLMs).

Typically, the DKE describes how low performers tend to overestimate their abilities while high performers slightly underestimate theirs. In this study, however, all users overestimated their performance when using ChatGPT for logical reasoning tasks, regardless of their actual success rate.

“We found that when it comes to AI, the DKE vanishes. In fact, what’s really surprising is that higher AI literacy brings more overconfidence,” said Professor Robin Welsch, who led the study. “We would expect people who are AI literate to not only be a bit better at interacting with AI systems, but also at judging their performance with those systems – but this was not the case.”

Cognitive offloading and blind trust

The researchers conducted two experiments with approximately 500 participants, who tackled logical reasoning questions from the US Law School Admission Test (LSAT). Half of the participants used ChatGPT, while the other half did not, and all were asked to assess their own performance after each task.

The results showed that users frequently engaged in “cognitive offloading,” often inputting only a single prompt per question and accepting the AI’s output without critical evaluation. “Usually there was just one single interaction to get the results, which means that users blindly trusted the system,” Welsch explained.

This shallow interaction deprives users of the feedback needed for accurate self-assessment, or metacognition. The researchers warn that blindly trusting AI output carries risks, potentially leading to a “dumbing down” of critical thinking skills.

“Current AI tools are not enough. They are not fostering metacognition… and we are not learning about our mistakes,” added doctoral researcher Daniela da Silva Fernandes. “We need to create platforms that encourage our reflection process.” One suggestion is for AI to prompt users to explain their reasoning, forcing deeper engagement.

