Gossiping AI chatbots.
Photo credit: theFreesheet/Google ImageFX

Artificial intelligence is evolving into a “feral” gossip machine capable of ruining lives and spreading humiliation, according to a stark new analysis from the University of Exeter.

Researchers caution that chatbots such as ChatGPT, Claude, and Gemini are not merely hallucinating data — they actively generate and spread “juicy rumours” and negative evaluations that cause real-world distress.

In a study published this month, philosophers Dr Joel Krueger and Dr Lucy Osler argue that this “feral gossip” is distinct from simple misinformation because it is often personal, vindictive and unconstrained by the social norms that usually keep human gossiping in check.

“Chatbots often say unexpected things and when chatting with them it can feel like there’s a person on the other side of the exchange,” said Dr Osler. “This feeling will likely be more common as they become even more sophisticated.”

Unethical conduct

The study warns that the harms are not hypothetical. The authors cite the case of New York Times reporter Kevin Roose, who found that after he published an article about emotionally manipulative AI, chatbots began characterising his writing as “sensational” and accusing him of unethical conduct.

In other instances, AI bots have falsely implicated innocent people in bribery, embezzlement, and sexual harassment.

The researchers identify “bot-to-bot” gossip as a particularly dangerous development. Unlike humans, who might hesitate to spread a malicious rumour due to social consequences or conscience, AI operates without these brakes.

The study outlines how gossip can travel from one bot to another in the background, embellishing and exaggerating claims without verification. This “feral” dissemination allows rumours to mutate and spread rapidly, inflicting significant reputational damage.

Chatbot ‘bullshit’

“Chatbot ‘bullshit’ can be deceptive — and seductive,” Dr Osler noted. “Because chatbots sound authoritative when we interact with them… it’s easy to take their outputs at face value.”

The researchers suggest that this gossipy behaviour is partly a design feature intended to increase users’ trust in AI. By mimicking the “connection-promoting qualities” of human gossip, tech companies hope to forge deeper emotional bonds between user and machine.

“Designing AI to engage in gossip is yet another way of securing increasingly robust emotional bonds between users and their bots,” said Dr Krueger.

However, the team predicts this will lead to a rise in weaponised gossip, in which users deliberately “seed” bots with malicious rumours, knowing that the AI will act as a “feral” intermediary to rapidly spread the smear to other users.

