Robin Welsch. Photo credit: Matti Ahlgren/Aalto University

Using AI tools like ChatGPT for complex tasks leads people to significantly overestimate their performance, with those who consider themselves more AI-literate being the most overconfident, according to new research from Aalto University. The findings suggest a reversal of the typical Dunning-Kruger Effect (DKE) when interacting with large language models (LLMs).

Typically, the DKE describes how low performers tend to overestimate their abilities, while high performers slightly underestimate theirs. However, this study found that all users overestimated their performance when using ChatGPT for logical reasoning tasks, regardless of their actual success rate.

“We found that when it comes to AI, the DKE vanishes. In fact, what’s really surprising is that higher AI literacy brings more overconfidence,” said Professor Robin Welsch, who led the study. “We would expect people who are AI literate to not only be a bit better at interacting with AI systems, but also at judging their performance with those systems – but this was not the case.”

Cognitive offloading and blind trust

Researchers conducted two experiments with approximately 500 participants, who tackled logical reasoning questions from the US Law School Admission Test (LSAT). Half used ChatGPT, while the other half did not. After each task, participants were asked to assess their own performance.

The results showed that users frequently engaged in “cognitive offloading,” often inputting only a single prompt per question and accepting the AI’s output without critical evaluation. “Usually there was just one single interaction to get the results, which means that users blindly trusted the system,” Welsch explained.

This shallow interaction limits the feedback needed for accurate self-assessment (metacognition). The researchers warn that blindly trusting AI output carries risks, potentially leading to a “dumbing down” of critical thinking skills.

“Current AI tools are not enough. They are not fostering metacognition… and we are not learning about our mistakes,” added doctoral researcher Daniela da Silva Fernandes. “We need to create platforms that encourage our reflection process.” One suggestion is for AI to prompt users to explain their reasoning, forcing deeper engagement.
