[Image: AI swarms. Photo credit: theFreesheet/Google ImageFX]

A new class of “AI swarms” threatens to erode democratic stability by automating the manipulation of public opinion on a massive scale, according to a Policy Forum published in Science.

Daniel Schroeder and colleagues argue that the fusion of large language models with autonomous agents allows malicious actors to generate “synthetic consensus cascades” — fabricated waves of apparent agreement that exploit social proof to shift individual beliefs.

Unlike historical information operations that relied on human labour — such as the “paid troll” farms seen in the Philippines and Brazil — AI swarms operate with superhuman efficiency. They can infiltrate online communities, mimicking social dynamics to spread persuasive content that is difficult to distinguish from authentic discourse.

“Success depends on fostering collaborative action without hindering scientific research while ensuring that the public sphere remains both resilient and accountable,” the authors write.

Pathways of harm

The researchers identify several mechanisms by which these swarms damage democracy. “LLM grooming” involves seeding large volumes of duplicative content to contaminate future model training data. Meanwhile, “engineered norm shifts” can normalise extremist views, with evidence linking inflammatory online narratives to real-world violence.

The report also warns of “institutional legitimacy erosion”, citing the 2024 Taiwan presidential election, where AI-generated disinformation targeted electoral trust.

Because humans are unreliable at detecting AI deepfakes, the authors propose a distributed “AI Influence Observatory” to guide oversight. They also advocate technical “model immunisation”, in which systems are proactively trained to resist generating harmful content.

“By committing now to rigorous measurement, proportionate safeguards, and shared oversight, upcoming elections could even become a proving ground for, rather than a setback to, democratic AI governance,” the authors said.

