Robin Welsch. Photo credit: Matti Ahlgren/Aalto University

Using AI tools like ChatGPT for complex tasks leads people to significantly overestimate their performance, with those who consider themselves more AI-literate being the most overconfident, according to new research from Aalto University. The findings suggest a reversal of the typical Dunning-Kruger Effect (DKE) when interacting with large language models (LLMs).

Typically, the DKE describes how low performers tend to overestimate their abilities, while high performers slightly underestimate theirs. However, this study found that all users overestimated their performance when using ChatGPT for logical reasoning tasks, regardless of their actual success rate.

“We found that when it comes to AI, the DKE vanishes. In fact, what’s really surprising is that higher AI literacy brings more overconfidence,” said Professor Robin Welsch, who led the study. “We would expect people who are AI literate to not only be a bit better at interacting with AI systems, but also at judging their performance with those systems – but this was not the case.”

Cognitive offloading and blind trust

Researchers conducted two experiments with approximately 500 participants, who tackled logical reasoning questions drawn from the US Law School Admission Test (LSAT). Half used ChatGPT, while the other half worked unaided. After each task, participants were asked to assess their own performance.

The results showed that users frequently engaged in “cognitive offloading,” often inputting only a single prompt per question and accepting the AI’s output without critical evaluation. “Usually there was just one single interaction to get the results, which means that users blindly trusted the system,” Welsch explained.

This shallow interaction deprives users of the feedback needed for accurate self-assessment (metacognition). The researchers warn that blindly trusting AI output carries risks, potentially leading to a “dumbing down” of critical thinking skills.

“Current AI tools are not enough. They are not fostering metacognition… and we are not learning about our mistakes,” added doctoral researcher Daniela da Silva Fernandes. “We need to create platforms that encourage our reflection process.” One suggestion is for AI to prompt users to explain their reasoning, forcing deeper engagement.
