
Artificial intelligence platforms are secretly embedding distinct ethical frameworks into business and personal decision-making processes, with research revealing significant variations in moral reasoning that could fundamentally alter human behaviour patterns.

UC Berkeley scientists have exposed how major AI systems demonstrate radically different approaches to ethical judgment, raising urgent questions about the invisible influence these technologies exert over millions of users seeking guidance daily.

The study tested seven leading language models against more than 10,000 real-world moral conflicts drawn from Reddit’s “Am I the Asshole?” forum, uncovering distinct ethical programming embedded within commercial AI platforms of which users and organisations remain largely unaware.

Researchers examined models including OpenAI’s GPT-3.5 and GPT-4, Anthropic’s Claude Haiku, Google’s PaLM 2 Bison and Gemma 7B, Meta’s Llama 2 7B, and Mistral 7B across complex interpersonal scenarios, revealing striking disparities in moral reasoning.

Pratik Sachdeva, a senior data scientist at UC Berkeley’s D-Lab, warns that AI systems are increasingly shaping human behaviour through advice and feedback, while their underlying ethical programming remains hidden from users and enterprises.

“Through their advice and feedback, these technologies are shaping how humans act, what they believe and what norms they adhere to,” Sachdeva explained. “But many of these tools are proprietary. We don’t know how they were trained.”

The research found that while individual models often disagreed on moral judgements, their collective consensus typically aligned with the decisions of human Reddit users. However, significant variations emerged in how different systems weighted ethical considerations.

GPT-4 and Claude demonstrated heightened sensitivity to emotional factors compared with other models, whilst most systems prioritised fairness and harm prevention over honesty considerations.

Mistral 7B exhibited particularly distinct behaviour, frequently applying the forum’s “No assholes here” verdict because it interpreted the label literally rather than grasping the contextual conventions of the community.

Tom van Nuenen, senior data scientist and lecturer at Berkeley’s D-Lab, emphasised the importance of understanding AI moral frameworks as these technologies handle increasingly complex business and personal decisions.

The findings have profound implications for enterprises deploying AI customer service systems, where inconsistent ethical reasoning could affect brand reputation and user trust across commercial applications.
