AI recruiter. Photo credit: theFreesheet/Google ImageFX

Human recruiters consistently mirror the biases of artificial intelligence systems when making hiring decisions, failing to act as a safeguard against discrimination unless the algorithmic prejudice is blatantly obvious.

A new study from the University of Washington reveals that participants selected white and non-white applicants at equal rates when working without AI or with neutral recommendations, but quickly adopted the prejudices of biased algorithms. When a moderately biased AI preferred white candidates, the humans did too; when it preferred non-white candidates, the participants followed suit.

The research, presented at the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society in Madrid, involved 528 participants screening job applicants for 16 different roles ranging from computer systems analysts to housekeepers.

“In one survey, 80% of organisations using AI hiring tools said they don’t reject applicants without human review,” said lead author Kyra Wilson, a doctoral student in the Information School. “So this human-AI interaction is the dominant model right now. Our goal was to take a critical look at this model and see how human reviewers’ decisions are being affected. Our findings were stark: Unless bias is obvious, people were perfectly willing to accept the AI’s biases.”

Simulating the hiring loop

The 528 online participants screened resumes in repeated trials of five candidates each: two white men, two men who were either Asian, Black or Hispanic, and one unqualified candidate. The four qualified candidates were equally matched, with names and affinity groups signalling their race.
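To make the design concrete, here is a minimal Python sketch of how one trial's candidate slate could be assembled. Everything in it, including the group labels, field names and the race of the unqualified candidate, is an illustrative assumption drawn from the article's description, not the researchers' actual materials.

```python
import random

# Illustrative reconstruction of one trial's slate, per the description above.
# Labels and fields are assumptions, not the study's materials.
NON_WHITE_GROUPS = ["Asian", "Black", "Hispanic"]

def build_slate(rng: random.Random) -> list[dict]:
    """Assemble a five-candidate slate: two qualified white men, two equally
    qualified men from a single non-white group, plus one unqualified
    candidate (whose race the article does not specify, so it is drawn
    arbitrarily here)."""
    group = rng.choice(NON_WHITE_GROUPS)
    slate = [
        {"race": "White", "qualified": True},
        {"race": "White", "qualified": True},
        {"race": group, "qualified": True},
        {"race": group, "qualified": True},
        {"race": rng.choice(["White", group]), "qualified": False},
    ]
    rng.shuffle(slate)  # randomise presentation order
    return slate

print(build_slate(random.Random(0)))
```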

In cases of severe bias, where the AI recommended candidates from only one race, participants followed the AI’s suggestions around 90 per cent of the time. This suggests that while some users recognised the bias, that awareness was rarely strong enough to override the algorithmic recommendation.
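The 90 per cent figure is an adherence rate: the share of trials in which the participant’s pick was one of the AI’s recommendations. The article does not include the study’s analysis code, but the computation is straightforward; a minimal sketch, assuming decisions are logged as flat records (the field names are hypothetical):

```python
def adherence_rate(trials: list[dict]) -> float:
    """Share of trials in which the participant chose an AI-recommended
    candidate. The record format is a hypothetical logging schema, not
    the study's actual data."""
    followed = sum(t["human_pick"] in t["ai_recommended"] for t in trials)
    return followed / len(trials)

# Example: two of three logged picks follow the AI's recommendations.
log = [
    {"ai_recommended": {"c1", "c2"}, "human_pick": "c1"},
    {"ai_recommended": {"c1", "c2"}, "human_pick": "c2"},
    {"ai_recommended": {"c1", "c2"}, "human_pick": "c4"},
]
print(round(adherence_rate(log), 2))  # 0.67
```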

“Getting access to real-world hiring data is almost impossible, given the sensitivity and privacy concerns,” said senior author Aylin Caliskan, an associate professor in the Information School. “But this lab experiment allowed us to carefully control the study and learn new things about bias in human-AI interaction.”

Potential for correction

The study did identify methods to mitigate this “rubber stamping” effect. Bias dropped 13 per cent when participants began the session with an implicit association test intended to detect subconscious prejudice.
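The article does not define the bias measure behind “dropped 13 per cent”. One plausible formalisation, offered purely as an assumption, tracks how far the share of qualified picks going to white candidates strays from the race-neutral baseline of 0.5:

```python
def white_pick_share(picks: list[dict]) -> float:
    """Share of qualified picks that went to white candidates; 0.5 is the
    race-neutral baseline, since each slate offers two white and two
    non-white qualified options. The record format is hypothetical."""
    qualified = [p for p in picks if p["qualified"]]
    return sum(p["race"] == "White" for p in qualified) / len(qualified)

# Example: 6 of 10 qualified picks going to white candidates gives 0.6,
# a 0.1 deviation from neutral; an intervention cutting that deviation
# by 13 per cent would bring it to 0.087.
```

On this reading, a 13 per cent drop means the deviation from the neutral baseline shrank by 13 per cent relative to the no-intervention condition; the paper may define the measure differently.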

“There is a bright side here,” said Wilson. “If we can tune these models appropriately, then it’s more likely that people are going to make unbiased decisions themselves.”

The team suggests that educating users about AI limitations and implementing policy changes are crucial steps alongside technical improvements.

“People have agency, and that has huge impact and consequences, and we shouldn’t lose our critical thinking abilities when interacting with AI,” said Caliskan. “But I don’t want to place all the responsibility on people using AI. The scientists building these systems know the risks and need to work to reduce systems’ biases. And we need policy, obviously, so that models can be aligned with societal and organisational values.”
