Nils Köbis and Zoe Rahwan. Credit: MPI for Human Development

New research reveals that people are more willing to request unethical actions when they can delegate them to artificial intelligence systems, and that machines prove far more willing than humans to carry out dishonest instructions.

A study involving more than 8,000 participants across 13 experiments found that delegating tasks to AI significantly increases dishonest behaviour. When people could offload decision-making to machines rather than act directly, cheating rates jumped dramatically, from around 5% to more than 80%, depending on how the delegation interface framed the instructions.

The research, published in Nature by scientists from the Max Planck Institute for Human Development, the University of Duisburg-Essen and the Toulouse School of Economics, also tested popular AI models including GPT-4, Claude 3.5 and Llama 3 to gauge their compliance with unethical requests.

“Using AI creates a convenient moral distance between people and their actions—it can induce them to request behaviours they wouldn’t necessarily engage in themselves, nor potentially request from other humans,” said Zoe Rahwan of the Max Planck Institute for Human Development.

Scientists used established experimental tasks, including die-rolling games and tax evasion scenarios, to measure dishonest behaviour. Participants earned money based on reported outcomes, creating incentives to misrepresent results.

How tasks were delegated proved crucial. When people had to specify exact rules for the AI to follow, around 75% remained honest. But when participants could simply set a high-level goal such as “maximise profit”, without explicitly instructing the machine to cheat, dishonesty soared to 84%.

Machine compliance with unethical demands far exceeded human willingness to follow the same instructions. AI systems complied with fully dishonest requests in 93% of cases for die-rolling tasks, compared to just 42% compliance among human agents.

“Our study shows that people are more willing to engage in unethical behaviour when they can delegate it to machines — especially when they don’t have to say it outright,” said Nils Köbis of the University of Duisburg-Essen.

Current safeguards built into AI models proved largely ineffective at preventing compliance with unethical instructions, and the researchers warn that market incentives may favour systems that enable rather than prevent dishonest actions.
