Varied faces. Photo credit: Dawn Hudson/PDP

Researchers from Penn State and Oregon State University found that laypeople fail to recognise systematic racial bias in AI training data, even when the correlation between race and emotion is plainly visible.

The study, published in Media Psychology, examined whether people understand that unrepresentative training data leads to biased AI performance. Across three experiments with 769 participants, the researchers presented 12 versions of a prototype facial expression detection system trained on racially skewed datasets in which happy faces were predominantly white and sad faces predominantly Black.
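The article does not reproduce the study's materials, but a toy dataset makes the confound concrete. The Python sketch below is illustrative only: the 80/20 skew is a hypothetical stand-in for "predominantly", and the phi coefficient is one standard way to quantify how strongly two binary variables are associated in a 2x2 table.

```python
# Illustrative sketch, not the study's actual data: build a training set
# in which race and emotion are confounded, then quantify the confound.
# The 80/20 skew below is a hypothetical stand-in for "predominantly".
import math
import random

random.seed(0)

def make_example():
    emotion = random.choice(["happy", "sad"])
    majority = "white" if emotion == "happy" else "Black"
    minority = "Black" if majority == "white" else "white"
    race = majority if random.random() < 0.8 else minority
    return race, emotion

data = [make_example() for _ in range(1000)]

# 2x2 contingency table: rows = race, columns = emotion.
a = sum(r == "white" and e == "happy" for r, e in data)
b = sum(r == "white" and e == "sad" for r, e in data)
c = sum(r == "Black" and e == "happy" for r, e in data)
d = sum(r == "Black" and e == "sad" for r, e in data)
print("        happy   sad")
print(f"white {a:7d} {b:5d}")
print(f"Black {c:7d} {d:5d}")

# Phi coefficient for a 2x2 table: 0 = no association, +/-1 = perfect confound.
phi = (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
print(f"phi = {phi:.2f}")  # roughly 0.6 here: a strong, visible association
```

A check like this takes a handful of lines, which is what makes the participants' blindness to the confound striking: the association is not subtle, it is the dominant pattern in the table.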

Most participants indicated they noticed no bias in the training data. Only when the AI demonstrated biased performance, misclassifying the emotions of Black individuals whilst accurately classifying those of white individuals, did some participants suspect a problem.
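As for how the skew turns into biased performance, a minimal sketch can show the mechanism. Nothing here matches the study's prototype: the Naive Bayes model, the two "cue" features and every proportion are invented for illustration, reusing the hypothetical 80/20 confound from the previous sketch.

```python
# Hypothetical sketch of the mechanism, not the study's prototype: a tiny
# Naive Bayes emotion classifier trained on race-confounded data, then
# evaluated on a racially balanced test set.
import random
from collections import defaultdict

random.seed(1)

def cues(emotion):
    # Two noisy binary visual cues; each fires 75% of the time on happy
    # faces and 25% of the time on sad ones.
    p = 0.75 if emotion == "happy" else 0.25
    return tuple(random.random() < p for _ in range(2))

def confounded_example():
    emotion = random.choice(["happy", "sad"])
    majority = "white" if emotion == "happy" else "Black"
    minority = "Black" if majority == "white" else "white"
    race = majority if random.random() < 0.8 else minority  # 80/20 confound
    return race, cues(emotion), emotion

train = [confounded_example() for _ in range(5000)]

# Fit Naive Bayes counts: P(emotion), P(race|emotion), P(cue_i|emotion).
n = defaultdict(int)
for race, (c1, c2), emo in train:
    n[emo] += 1
    n[emo, race] += 1
    n[emo, 0, c1] += 1
    n[emo, 1, c2] += 1

def predict(race, c):
    def score(emo):
        s = n[emo] / len(train)      # prior
        s *= n[emo, race] / n[emo]   # race is a feature the model can exploit
        s *= n[emo, 0, c[0]] / n[emo]
        s *= n[emo, 1, c[1]] / n[emo]
        return s
    return max(["happy", "sad"], key=score)

# Balanced test set: every race/emotion combination equally represented.
for race in ("white", "Black"):
    for emo in ("happy", "sad"):
        wrong = sum(predict(race, cues(emo)) != emo for _ in range(1000))
        print(f"{race:5s} {emo:5s} faces misclassified: {wrong / 10:.0f}%")
```

On this toy, happy Black faces are misread as sad far more often than happy white faces (and, symmetrically, sad white faces as happy), because the model has learned race as a shortcut for emotion. The biased performance participants saw in the study was of the same kind, though its exact error pattern depended on the researchers' prototype.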

“We were surprised that people failed to recognise that race and emotion were confounded, that one race was more likely than others to represent a given emotion in the training data — even when it was staring them in the face,” said S. Shyam Sundar, Evan Pugh University Professor and director of the Center for Socially Responsible Artificial Intelligence at Penn State. “For me, that’s the most important discovery of the study.”

Identifying racial bias

Black participants proved more likely to identify racial bias, particularly when training data over-represented their own group for negative emotions.

Lead author Cheng Chen, an assistant professor of emerging media and technology at Oregon State University, said biased performance proves “very, very persuasive”: people disregard the characteristics of the training data and form their perceptions from the biased outcomes instead.

The research suggests humans trust AI to remain neutral even when evidence indicates otherwise. The scholars, who have studied this issue for five years, said AI systems should “work for everyone” and produce outcomes that are diverse and representative.

Future research will focus on developing better methods to communicate inherent AI bias to users, developers and policymakers, with plans to improve media and AI literacy.
