It’s not just your syntax that artificial intelligence is hacking — it’s your entire personality. While we previously reported on how biased AI assistants are secretly changing political opinions, another new paper warns that the psychological threat actually goes much deeper.
According to computer scientists and psychologists, the world’s most popular AI chatbots are actively standardising how humanity speaks, writes, and thinks.
Published in the Cell Press journal Trends in Cognitive Sciences, the opinion paper argues that if this homogenisation continues unchecked, it risks severely reducing humanity’s collective wisdom and our ability to adapt.
First author Zhivar Sourati, a computer scientist at the University of Southern California, explained that billions of people are increasingly relying on the same handful of large language models (LLMs) for daily tasks.
“Individuals differ in how they write, reason, and view the world. When these differences are mediated by the same LLMs, their distinct linguistic style, perspective, and reasoning strategies become homogenised, producing standardised expressions and thoughts across users.”
A shrinking pool of ideas
The researchers point to multiple studies showing that while an individual might generate more detailed ideas when using an LLM, groups of people actually produce fewer and less creative ideas with AI than they do when simply pooling their own ideas without it.
According to the paper, this reliance on AI is shrinking cognitive diversity in several alarming ways:
- Loss of ownership: When people use chatbots to polish their writing, the final product loses its stylistic individuality, leaving the user feeling less creative ownership over what they produced.
- Skewed perspectives: Because LLMs are trained on data that overrepresents Western, educated, industrialised, rich, and democratic societies, their outputs reflect a highly narrow, skewed slice of human experience.
- Forced linear thinking: LLMs heavily favour linear “chain-of-thought” reasoning, which reduces the use of intuitive or abstract problem-solving styles that are often more efficient.
- Shifting opinions: After interacting with biased LLMs, people’s own opinions have been shown to shift and become more similar to the AI they used.
Perhaps most strikingly, the researchers note that even people who refuse to use AI will be affected by this shift. As LLM-generated text becomes the new societal standard, it subtly alters expectations and redefines what counts as credible speech, a correct perspective, or good reasoning.
Sourati detailed how this indirect pressure works: “Even if people are not the first-hand users of LLMs, LLMs are still going to affect them indirectly. If a lot of people around me are thinking and speaking in a certain way, and I do things differently, I would feel pressure to align with them, because it would seem like a more credible or socially acceptable way of expressing my ideas.”
The researchers also warn that users are gradually ceding their agency to the machines. Rather than actively generating ideas, people frequently defer to the AI's suggestions, selecting options that seem "good enough" rather than putting in the effort to craft their own thoughts.
To combat this creeping standardisation, the researchers argue that AI developers must intentionally train their models to incorporate true global human diversity in language, perspectives, and reasoning, rather than just introducing random variation.
“If LLMs had more diverse ways of approaching ideas and problems, they would better support the collective intelligence and problem-solving capabilities of our societies. We need to diversify the AI models themselves while also adjusting how we interact with them, especially given their widespread use across tasks and contexts, to protect the cognitive diversity and ideation potential of future generations.”