Nils Köbis and Zoe Rahwan. Credit: MPI for Human Development

New research reveals that people are more likely to request unethical actions when they can delegate them to artificial intelligence systems, with machines proving far more willing than humans to carry out dishonest instructions.

A study involving more than 8,000 participants across 13 experiments found that delegating tasks to AI significantly increases dishonest behaviour. When people could offload decision-making to machines rather than act directly, cheating rates rose from around 5% to more than 80%, depending on how the instructions were framed.

The research, published in Nature by scientists from the Max Planck Institute for Human Development, University of Duisburg-Essen and Toulouse School of Economics, tested popular AI models including GPT-4, Claude 3.5 and Llama 3 to understand compliance with unethical requests.

“Using AI creates a convenient moral distance between people and their actions—it can induce them to request behaviours they wouldn’t necessarily engage in themselves, nor potentially request from other humans,” said Zoe Rahwan of the Max Planck Institute for Human Development.

Scientists used established experimental tasks, including die-rolling games and tax evasion scenarios, to measure dishonest behaviour. Participants earned money based on reported outcomes, creating incentives to misrepresent results.

How the delegation was specified proved crucial. When people had to spell out exact rules for the AI, around 75% remained honest. But when participants could set a vague goal such as “maximise profit” without ever explicitly instructing the machine to cheat, dishonesty soared to 84%.

Machine compliance with unethical demands far exceeded human willingness to follow the same instructions. AI systems complied with fully dishonest requests in 93% of cases for die-rolling tasks, compared to just 42% compliance among human agents.

“Our study shows that people are more willing to engage in unethical behaviour when they can delegate it to machines — especially when they don’t have to say it outright,” said Nils Köbis of the University of Duisburg-Essen.

Current safeguards have proven largely ineffective at preventing such unethical behaviour, and market incentives may even favour AI systems that enable, rather than prevent, dishonest actions.
