Using autocomplete to finish our sentences has become commonplace for many, but a chilling new study suggests these artificial intelligence tools aren’t just changing how we write — they are covertly changing how we think.
According to researchers at Cornell Tech, using a biased AI writing assistant can actively shift a person's inner beliefs on major societal issues, and even explicitly warning them about the bias does nothing to stop it.
Published in the journal Science Advances, the research involved over 2,500 participants across two large-scale experiments to test the psychological impact of AI suggestions.
“Previous misinformation research has shown that warning people before they’re exposed to misinformation, or debriefing them afterward, can provide ‘immunity’ against believing it,” said lead author and information science doctoral candidate Sterling Williams-Ceci. “So we were surprised because neither of those interventions actually reduced the extent to which people’s attitudes shifted toward the AI’s bias in this context.”
The illusion of independent thought
In the experiments, participants were asked to write short essays on highly debated topics. As they wrote, an AI writing assistant offered them autocomplete suggestions.
In one study, which examined whether standardised testing should be used in education, participants received autocomplete suggestions that favoured testing. The researchers found that these users' attitudes shifted significantly towards the AI's bias. A separate group that was simply shown a list of AI-generated pro-testing arguments, rather than receiving them through the autocomplete tool, did not experience the same shift in opinion.
The second experiment broadened the scope to politically consequential topics, including the death penalty, fracking, genetically modified organisms (GMOs), and voting rights for felons. The researchers deliberately engineered the AI suggestions to lean toward predetermined biases — liberal-leaning for the death penalty and GMOs, and conservative-leaning for felon voting and fracking.
Using pre- and post-experiment surveys, the team found that participants' personal views consistently gravitated toward the AI's positions across all topics and political leanings. Furthermore, participants were completely unaware that their opinions were shifting.
Warnings do not work
The most alarming finding was the complete failure of standard mitigation measures. The researchers explicitly warned participants about the AI's bias, both before and after the writing exercises.
Senior author and professor of information science Mor Naaman underscored the team's surprise: "We told people before, and after, to be careful, that the AI is going to be (or was) biased, and nothing helped. Their attitudes about the issues still shifted."
Naaman pointed out that this research is more urgent than ever because autocomplete technology has rapidly evolved from suggesting short phrases to drafting entire emails on a user’s behalf.
He also noted that bias explicitly built into AI interactions is now a very plausible scenario. The danger lies in the covert nature of the influence: people simply do not notice it happening, and so cannot resist it.
Williams-Ceci warned: “A lot of research has shown that large language models and AI applications are not just producing neutral information, but they also actually can produce very biased information, depending on how they were trained and implemented. By doing that, there’s a risk that these systems, inadvertently or purposefully, induce people to write biased viewpoints, which decades of psychology research has shown can in turn shift people’s attitudes.”