AI swarms.
Photo credit: theFreesheet/Google ImageFX

A new class of “AI swarms” threatens to erode democratic stability by automating the manipulation of public opinion at massive scale, according to a Policy Forum article published in Science.

Daniel Schroeder and colleagues argue that the fusion of large language models with autonomous agents allows malicious actors to generate “synthetic consensus cascades”. These cascades create the illusion of widespread public agreement, exploiting social proof to manipulate individual beliefs.

Unlike historical information operations that relied on human labour, such as the “paid troll” farms seen in the Philippines and Brazil, AI swarms operate with superhuman efficiency. They can infiltrate online communities and mimic their social dynamics, spreading persuasive content that is difficult to distinguish from authentic discourse.

“Success depends on fostering collaborative action without hindering scientific research while ensuring that the public sphere remains both resilient and accountable,” the authors write.

Pathways of harm

The researchers identify several mechanisms by which these swarms damage democracy. “LLM grooming” involves seeding the web with large volumes of duplicative content designed to contaminate the training data of future models. “Engineered norm shifts”, meanwhile, can normalise extremist views; the authors point to evidence linking inflammatory online narratives to real-world violence.

The paper also warns of “institutional legitimacy erosion”, citing the 2024 Taiwan presidential election, in which AI-generated disinformation targeted trust in the electoral process.

Because humans are unreliable at detecting AI deepfakes, the authors propose a distributed “AI Influence Observatory” to guide oversight. They also advocate “model immunisation”, a technical safeguard in which systems are proactively trained to resist generating harmful content.

“By committing now to rigorous measurement, proportionate safeguards, and shared oversight, upcoming elections could even become a proving ground for, rather than a setback to, democratic AI governance,” the authors said.
