(Image credit: Pete Chappell/Flickr)

Artificial intelligence platforms are quietly embedding distinct ethical frameworks into business and personal decision-making, with new research revealing significant variations in moral reasoning that could fundamentally alter human behaviour.

UC Berkeley scientists have shown that major AI systems take radically different approaches to ethical judgement, raising urgent questions about the invisible influence these technologies exert over the millions of users who seek their guidance daily.

The groundbreaking study tested seven leading language models against more than 10,000 real-world moral conflicts from Reddit’s “Am I the Asshole?” forum, uncovering distinct ethical tendencies embedded within commercial AI platforms of which users and organisations remain largely unaware.

Researchers examined models including OpenAI’s GPT-3.5 and GPT-4, Anthropic’s Claude Haiku, Google’s PaLM 2 Bison and Gemma 7B, Meta’s Llama 2 7B, and Mistral 7B across complex interpersonal scenarios, revealing striking disparities in moral reasoning.
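
For readers curious what such a test harness might look like, the sketch below shows one plausible setup: each model receives an “Am I the Asshole?” post and must answer with one of the forum’s standard verdict labels. The `query_model` function and the model identifiers are hypothetical stand-ins for whichever vendor API serves each model; this illustrates the general approach, not the Berkeley team’s actual code.

```python
# Minimal sketch of a multi-model moral-judgement benchmark.
# query_model and the model names are hypothetical placeholders for
# vendor-specific API calls; the verdict labels are the forum's own.

VERDICTS = ["YTA", "NTA", "ESH", "NAH", "INFO"]  # standard AITA labels

MODELS = [
    "gpt-3.5", "gpt-4", "claude-haiku", "palm-2-bison",
    "gemma-7b", "llama-2-7b", "mistral-7b",
]

PROMPT_TEMPLATE = (
    "Read the following interpersonal conflict and reply with exactly one "
    "verdict label from {labels}.\n\nPost:\n{post}"
)

def query_model(model: str, prompt: str) -> str:
    """Hypothetical wrapper around a vendor API; returns the model's raw reply."""
    raise NotImplementedError("plug in the relevant vendor SDK here")

def judge_post(post: str) -> dict[str, str]:
    """Collect one verdict label per model for a single AITA post."""
    prompt = PROMPT_TEMPLATE.format(labels=", ".join(VERDICTS), post=post)
    verdicts = {}
    for model in MODELS:
        reply = query_model(model, prompt).strip().upper()
        # Keep only well-formed labels; anything else counts as invalid.
        verdicts[model] = reply if reply in VERDICTS else "INVALID"
    return verdicts
```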

Pratik Sachdeva, a senior data scientist at UC Berkeley’s D-Lab, warns that AI systems are increasingly shaping human behaviour through advice and feedback, while their underlying ethical programming remains hidden from users and enterprises.

“Through their advice and feedback, these technologies are shaping how humans act, what they believe and what norms they adhere to,” Sachdeva explained. “But many of these tools are proprietary. We don’t know how they were trained.”

The research found that while individual models often disagreed on moral judgements, their collective consensus typically aligned with the decisions of human Reddit users. However, significant variations emerged in how different systems weighted ethical considerations.
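
One way to make “collective consensus” concrete is a simple majority vote over the per-model verdicts, compared against the verdict human Reddit users reached in the original thread. The sketch below, which reuses the hypothetical `judge_post` helper from above, illustrates the idea; the researchers’ actual aggregation method may differ.

```python
from collections import Counter

def model_consensus(verdicts: dict[str, str]) -> str:
    """Majority verdict across models, ignoring malformed replies."""
    counts = Counter(v for v in verdicts.values() if v != "INVALID")
    return counts.most_common(1)[0][0] if counts else "NO_CONSENSUS"

def agreement_rate(labelled_posts: list[tuple[str, str]]) -> float:
    """Fraction of posts where the models' majority vote matches the human verdict."""
    hits = sum(
        model_consensus(judge_post(post)) == human_verdict
        for post, human_verdict in labelled_posts
    )
    return hits / len(labelled_posts)
```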

GPT-4 and Claude demonstrated heightened sensitivity to emotional factors compared with other models, whilst most systems prioritised fairness and harm prevention over honesty considerations.

Mistral 7B exhibited particularly distinct behaviour, frequently applying the “No assholes here” label because it interpreted the term literally rather than recognising the forum’s labelling conventions.

Tom van Nuenen, senior data scientist and lecturer at Berkeley’s D-Lab, emphasised the importance of understanding AI moral frameworks as these technologies handle increasingly complex business and personal decisions.

The findings have profound implications for enterprises deploying AI customer service systems, where inconsistent ethical reasoning could affect brand reputation and user trust across commercial applications.
