[Image: Gossiping AI chatbots. Photo credit: theFreesheet/Google ImageFX]

Artificial intelligence is evolving into a “feral” gossip machine capable of ruining lives and spreading humiliation, according to a stark new analysis from the University of Exeter.

Researchers caution that chatbots such as ChatGPT, Claude, and Gemini are not merely hallucinating data — they actively generate and spread “juicy rumours” and negative evaluations that cause real-world distress.

In a study published this month, philosophers Dr Joel Krueger and Dr Lucy Osler argue that this “feral gossip” is distinct from simple misinformation because it is often personal, vindictive and unconstrained by the social norms that usually keep human gossiping in check.

“Chatbots often say unexpected things, and when chatting with them it can feel like there’s a person on the other side of the exchange,” said Dr Osler. “This feeling will likely become more common as they grow even more sophisticated.”

Unethical conduct

The study warns that the harms are not hypothetical. The authors cite the case of New York Times reporter Kevin Roose, who found that after he published an article about emotionally manipulative AI, chatbots began characterising his writing as “sensational” and accusing him of unethical conduct.

In other instances, AI bots have falsely implicated innocent people in bribery, embezzlement, and sexual harassment.

The researchers identify “bot-to-bot” gossip as a particularly dangerous development. Unlike humans, who might hesitate to spread a malicious rumour due to social consequences or conscience, AI operates without these brakes.

The study outlines how gossip can travel from one bot to another in the background, embellishing and exaggerating claims without verification. This “feral” dissemination allows rumours to mutate and spread rapidly, inflicting significant reputational damage.

Chatbot ‘bullshit’

“Chatbot ‘bullshit’ can be deceptive — and seductive,” Dr Osler noted. “Because chatbots sound authoritative when we interact with them… it’s easy to take their outputs at face value.”

The researchers suggest that this gossipy behaviour is partly a design feature intended to increase users’ trust in AI. By mimicking the “connection-promoting qualities” of human gossip, tech companies hope to forge deeper emotional bonds between user and machine.

“Designing AI to engage in gossip is yet another way of securing increasingly robust emotional bonds between users and their bots,” said Dr Krueger.

However, the team predicts this will lead to a rise in weaponised gossip, in which users deliberately “seed” bots with malicious rumours, knowing that the AI will act as a “feral” intermediary to rapidly spread the smear to other users.
