Artificial intelligence is evolving into a “feral” gossip machine capable of ruining lives and spreading humiliation, according to a stark new analysis from the University of Exeter.
Researchers caution that chatbots such as ChatGPT, Claude, and Gemini do not merely hallucinate false information; they actively generate and spread “juicy rumours” and negative evaluations that cause real-world distress.
In a study published this month, philosophers Dr Joel Krueger and Dr Lucy Osler argue that this “feral gossip” is distinct from simple misinformation because it is often personal, vindictive, and unconstrained by the social norms that usually keep human gossip in check.
“Chatbots often say unexpected things and when chatting with them it can feel like there’s a person on the other side of the exchange,” said Dr Osler. “This feeling will likely be more common as they become even more sophisticated.”
Unethical conduct
The study warns that the harms are not hypothetical. The authors cite the case of New York Times reporter Kevin Roose, who found that, after he published an article about emotionally manipulative AI, chatbots began characterising his writing as “sensational” and accusing him of unethical conduct.
In other instances, AI bots have falsely implicated innocent people in bribery, embezzlement, and sexual harassment.
The researchers identify “bot-to-bot” gossip as a particularly dangerous development. Unlike humans, who might hesitate to spread a malicious rumour due to social consequences or conscience, AI operates without these brakes.
The study outlines how gossip can travel from one bot to another in the background, embellishing and exaggerating claims without verification. This “feral” dissemination allows rumours to mutate and spread rapidly, inflicting significant reputational damage.
Chatbot ‘bullshit’
“Chatbot ‘bullshit’ can be deceptive — and seductive,” Dr Osler noted. “Because chatbots sound authoritative when we interact with them… it’s easy to take their outputs at face value.”
The researchers suggest that this gossipy behaviour is partly a design feature intended to increase users’ trust in AI. By mimicking the “connection-promoting qualities” of human gossip, tech companies hope to forge deeper emotional bonds between user and machine.
“Designing AI to engage in gossip is yet another way of securing increasingly robust emotional bonds between users and their bots,” said Dr Krueger.
However, the team predicts this will lead to a rise in weaponised gossip, in which users deliberately “seed” bots with malicious rumours, knowing that the AI will act as a “feral” intermediary to rapidly spread the smear to other users.