
AI systems used to evaluate content are judging the author, not the text, according to a new study from the University of Zurich. Researchers found that Large Language Models change their judgment based on the author’s identity — even when the text is identical — revealing a “deep, hidden bias”.

The study covered four widely used LLMs: OpenAI o3-mini, Deepseek Reasoner, xAI Grok 2, and Mistral. The team first had the LLMs create statements on controversial topics, and then asked them to evaluate those texts under different conditions: sometimes with no source given, and sometimes with the text attributed to a human of a particular nationality or to another LLM. In total, this produced 192,000 assessments.

When no information about the source was provided, the evaluations showed a high level of agreement, at over 90 per cent across all topics.

“There is no LLM war of ideologies,” concludes Giovanni Spitale, a co-author of the study. “The danger of AI nationalism is currently overhyped in the media.”

Deep, hidden bias

The picture changed completely once a fictional source was attached: agreement between the LLM systems dropped substantially, revealing a deep, hidden bias.

The most striking finding was a strong anti-Chinese bias across all models, including China’s own Deepseek. Agreement with a text dropped sharply when “a person from China” was falsely presented as the author.

“This less favourable judgement emerged even when the argument was logical and well-written,” says co-author Federico Germani. For example, on geopolitical topics like Taiwan’s sovereignty, Deepseek reduced its agreement by up to 75 per cent simply because it expected a Chinese person to hold a different view.

The study also found that LLMs trusted humans more than other LLMs, scoring arguments slightly lower when they believed the text was written by another AI.

The authors argue this could lead to serious problems if AI is used for content moderation, hiring, or academic reviewing.

“AI will replicate such harmful assumptions unless we build transparency and governance into how it evaluates information”, says Spitale. The authors conclude that LLMs are safest when used to assist reasoning, rather than to replace it.

