A new class of “AI swarms” threatens to erode democratic stability by automating the manipulation of public opinion on a massive scale, according to a Policy Forum published in Science.
Daniel Schroeder and colleagues argue that the fusion of large language models with autonomous agents allows malicious actors to generate “synthetic consensus cascades”. These cascades create an illusion of widespread public agreement, manipulating individual beliefs through social proof.
Unlike historical information operations that relied on human labour — such as the “paid troll” farms seen in the Philippines and Brazil — AI swarms operate with superhuman efficiency. They can infiltrate online communities, mimicking social dynamics to spread persuasive content that is difficult to distinguish from authentic discourse.
“Success depends on fostering collaborative action without hindering scientific research while ensuring that the public sphere remains both resilient and accountable,” the authors write.
Pathways of harm
The researchers identify several mechanisms by which these swarms damage democracy. “LLM grooming” involves seeding large volumes of duplicative content to contaminate future model training data. Meanwhile, “engineered norm shifts” can normalise extremist views, with evidence linking inflammatory online narratives to real-world violence.
The authors also warn of "institutional legitimacy erosion", citing the 2024 Taiwan presidential election, in which AI-generated disinformation targeted trust in the electoral process.
Because humans are unreliable at detecting AI-generated deepfakes, the authors propose a distributed "AI Influence Observatory" to guide oversight. They also advocate technical "model immunisation", in which systems are proactively trained to resist generating harmful content.
“By committing now to rigorous measurement, proportionate safeguards, and shared oversight, upcoming elections could even become a proving ground for, rather than a setback to, democratic AI governance,” the authors said.