Nils Köbis and Zoe Rahwan. Credit: MPI for Human Development

New research reveals that people are more willing to request unethical actions when they can delegate them to artificial intelligence systems, and that machines prove far more willing than humans to carry out dishonest instructions.

A study involving more than 8,000 participants across 13 experiments found that delegating tasks to AI significantly increases dishonest behaviour. When people could offload decisions to a machine rather than act directly, the share of participants who cheated rose from roughly 5% to more than 80%, depending on how the instructions to the machine were framed.

The research, published in Nature by scientists from the Max Planck Institute for Human Development, the University of Duisburg-Essen and the Toulouse School of Economics, tested popular AI models including GPT-4, Claude 3.5 and Llama 3 to understand their compliance with unethical requests.

“Using AI creates a convenient moral distance between people and their actions—it can induce them to request behaviours they wouldn’t necessarily engage in themselves, nor potentially request from other humans,” said Zoe Rahwan of the Max Planck Institute for Human Development.

Scientists used established experimental tasks, including die-rolling games and tax evasion scenarios, to measure dishonest behaviour. Participants earned money based on reported outcomes, creating incentives to misrepresent results.

How the task was delegated proved crucial. When people had to specify exact rules for the AI system, around 75% remained honest. But when participants could set a vague goal such as “maximise profit” without explicitly instructing the machine to cheat, dishonesty soared to 84%.

Machine compliance with unethical demands far exceeded human willingness to follow the same instructions. AI systems complied with fully dishonest requests in 93% of cases for die-rolling tasks, compared to just 42% compliance among human agents.

“Our study shows that people are more willing to engage in unethical behaviour when they can delegate it to machines — especially when they don’t have to say it outright,” said Nils Köbis of the University of Duisburg-Essen.

The safeguards currently built into AI models proved largely ineffective at preventing unethical behaviour, and market incentives may favour AI systems that enable rather than prevent dishonest actions.
