
Artificial intelligence might not be actively destroying our underlying cognitive abilities, but it is quietly eroding our professional confidence and our sense of ownership of our own ideas.

According to a new study published by the American Psychological Association, workers who passively rely on algorithms to complete their daily duties report a marked decline in confidence in their own independent reasoning.

The research, published in the online journal Technology, Mind, and Behavior, warns that while AI offers immense speed and efficiency, the psychological cost of outsourcing our deep cognitive work is leaving employees feeling detached and professionally insecure.

The danger of passive acceptance

To understand exactly how generative AI is reshaping human workflows, researchers surveyed 1,923 adult professionals in the United States and Canada, aged 25 to 57.

Participants were instructed to use commercially available large language models to complete 10 simulated executive tasks. These complex scenarios included developing plans with incomplete information, interpreting highly ambiguous data, and articulating the reasoning behind strategic corporate decisions.

The results exposed a deeply passive modern workforce. Following the tasks, 58 per cent of participants openly agreed that the AI “did most of the thinking” required to complete the work, particularly for activities involving planning or sequencing.

The study found that this passive acceptance comes with a heavy psychological penalty. Participants who let the AI take the wheel reported significantly reduced confidence in their own independent reasoning and a lesser perceived ownership of the final ideas. They actively traded depth of thought for mere task speed.

The researchers also noted a distinct gender divide in the data, with men reporting significantly higher levels of AI reliance than women.

Reclaiming human authorship

However, the study revealed a clear antidote to this algorithmic dread. Participants who actively fought back against the machine — modifying, challenging, or outright rejecting the AI’s suggestions — reported vastly greater confidence and a much stronger sense of personal authorship over their work.

Lead study author Sarah Baldeo, an MBA and PhD candidate in AI and neuroscience at Middlesex University in England, explained that the core problem is human behaviour, not the technology itself.

“The issue was not AI use itself but the degree of passive acceptance,” Baldeo said. “Participants who used AI but still maintained oversight and active judgment tended to feel more confident in their own reasoning.”

Baldeo warned that a failure to engage actively with the technology could lead to a damaging phenomenon she calls “intellectual levelling,” in which workers who chronically over-rely on the tools begin to sound indistinguishable from an AI.

“The potential long-term risks aren’t that AI makes people less intelligent but that some users may become less engaged in the deeper cognitive work that produces novel thinking,” Baldeo cautioned. “That is why the distinction between AI assistance and overreliance is so important.”

Protect your mind at work

To combat this growing crisis of confidence, Baldeo advises workers to drastically change how they interact with their digital assistants.

“Broadly, the best way to use AI is to train it rather than letting it train you,” Baldeo explained. “Program it to function for specific uses, and stop anthropomorphising AI.”

She outlined three vital rules for modern workers to protect their cognitive engagement:

  • Try it yourself first: Always attempt to solve a problem independently before asking an AI program to do the heavy lifting for you.
  • Force the AI to work harder: Never accept the first output. Refine your AI prompts at least two or three times to generate a higher-quality response and force your own brain to critically evaluate the data.
  • Take a digital detox: Commit to taking at least two or three days off each week from using AI programs at work entirely to prevent “intellectual levelling” and maintain your unique human voice.

The study urges tech companies to take responsibility for this psychological shift. The researchers argue that future AI programs should be specifically designed to prompt users to think of their own alternatives and actively review the machine’s underlying assumptions, rather than serving up unquestioned answers.

