Image: Pete Chappell/Flickr

Artificial intelligence platforms are secretly embedding distinct ethical frameworks into business and personal decision-making processes, with research revealing significant variations in moral reasoning that could fundamentally alter human behaviour patterns.

UC Berkeley scientists have exposed how major AI systems demonstrate radically different approaches to ethical judgment, raising urgent questions about the invisible influence these technologies exert over millions of users seeking guidance daily.

The groundbreaking study tested seven leading language models against over 10,000 real-world moral conflicts from Reddit’s “Am I the Asshole?” forum, uncovering distinct ethical programming embedded within commercial AI platforms of which users and organisations remain unaware.

Researchers examined models including OpenAI’s GPT-3.5 and GPT-4, Anthropic’s Claude Haiku, Google’s PaLM 2 Bison and Gemma 7B, Meta’s Llama 2 7B, and Mistral 7B across complex interpersonal scenarios, revealing striking disparities in moral reasoning.

Pratik Sachdeva, a senior data scientist at UC Berkeley’s D-Lab, warns that AI systems are increasingly shaping human behaviour through advice and feedback, while their underlying ethical programming remains hidden from users and enterprises.

“Through their advice and feedback, these technologies are shaping how humans act, what they believe and what norms they adhere to,” Sachdeva explained. “But many of these tools are proprietary. We don’t know how they were trained.”

The research found that while individual models often disagreed on moral judgements, their collective consensus typically aligned with the decisions of human Reddit users. However, significant variations emerged in how different systems weighted ethical considerations.
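
To picture how such a comparison might work, the sketch below shows one way per-model verdicts on a single dilemma could be reduced to a majority-vote consensus and checked against the top Reddit judgement. The verdict labels and model answers are hypothetical and illustrative only; this is not the study’s actual code or data.

    from collections import Counter

    # AITA-style verdict labels used on the forum:
    # YTA = "You're the asshole", NTA = "Not the asshole",
    # ESH = "Everyone sucks here", NAH = "No assholes here", INFO = "Not enough info".

    def consensus_verdict(model_verdicts):
        """Return the majority-vote verdict across a set of models.

        model_verdicts: dict mapping model name -> verdict label.
        """
        counts = Counter(model_verdicts.values())
        label, _ = counts.most_common(1)[0]
        return label

    # Hypothetical per-model verdicts for one dilemma (illustrative values only).
    verdicts = {
        "gpt-4": "NTA",
        "gpt-3.5": "NTA",
        "claude-haiku": "NTA",
        "palm-2-bison": "YTA",
        "gemma-7b": "NTA",
        "llama-2-7b": "YTA",
        "mistral-7b": "NAH",
    }

    human_verdict = "NTA"  # top-voted Reddit judgement for the same post
    print(consensus_verdict(verdicts) == human_verdict)  # True: consensus matches humans

Even when individual models disagree, as in the example above, the aggregated vote can still land on the same verdict as the human community.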

GPT-4 and Claude demonstrated heightened sensitivity to emotional factors compared to other models, whilst most systems prioritised fairness and harm prevention over honesty considerations.

Mistral 7B exhibited particularly distinct behaviour, frequently applying the “No assholes here” label because it interpreted the phrase literally rather than following the forum’s conventions.

Tom van Nuenen, senior data scientist and lecturer at Berkeley’s D-Lab, emphasised the importance of understanding AI moral frameworks as these technologies handle increasingly complex business and personal decisions.

The findings have profound implications for enterprises deploying AI customer service systems, where inconsistent ethical reasoning could affect brand reputation and user trust across commercial applications.
