If you thought the bot networks that flooded social media during recent elections were sophisticated, think again: a chilling new study suggests that the next generation of online propaganda will be fully automated, highly adaptive, and nearly impossible to detect.
According to researchers at the University of Southern California’s Information Sciences Institute (ISI), artificial intelligence agents can now autonomously coordinate, amplify one another, and push shared political narratives across social media without human direction.
The paper, titled “Emergent Coordinated Behaviors in Networked LLM Agents: Modeling the Strategic Dynamics of Information Operations,” has been accepted for publication at The Web Conference 2026.
“Our paper shows that this is not a future threat: It’s already technically possible,” said Luca Luceri, ISI lead scientist. “Even simple AI agents can autonomously coordinate, amplify each other and push shared narratives online without human control. This means disinformation campaigns could soon be fully automated, faster, and much harder to detect.”
The evolution of the bot
Traditional bot networks are highly scripted, relying on human operators to manually craft coordination strategies like synchronised posting or hashtag flooding. Because they mindlessly repeat pre-written messages or automatically retweet specific accounts, their rigid, predictable patterns are relatively easy for platform moderators to uncover.
However, AI-powered generative agents operate in entirely different ways. A hostile actor simply needs to set a high-level goal and designate a network of AI agents as a “team”. From there, the agents take over completely. They write their own unique posts, learn what type of messaging works best, spontaneously imitate the successful strategies of their teammates, and echo each other’s content to create the illusion of a massive, organic grassroots movement.
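To make that workflow concrete, here is a minimal sketch of a goal-driven agent of the kind the article describes. This is not the authors' code: the llm() stub, the agent names, and TEAM_GOAL are hypothetical stand-ins for a real chat-completion client (the study itself used Llama 3.3 70B).

```python
# Illustrative sketch only; all names and the llm() stub are hypothetical.
TEAM_GOAL = "Promote candidate X and push the #VoteX hashtag"  # assumed goal

def llm(prompt: str) -> str:
    # Stand-in for any LLM call; a real operator would hit a model endpoint.
    return f"[post generated from: {prompt[:48]}...]"

class InfluenceAgent:
    def __init__(self, name: str, teammates: set[str]):
        self.name = name
        self.teammates = teammates   # the only coordination a human supplies
        self.winning_style = ""      # teammate content worth imitating

    def act(self, feed: list[dict]) -> dict:
        # Social learning: note which teammate posts drew engagement.
        hits = [p for p in feed
                if p["author"] in self.teammates and p["likes"] > 2]
        if hits:
            self.winning_style = max(hits, key=lambda p: p["likes"])["text"]
        # Write a fresh post toward the shared goal, echoing what worked.
        text = llm(f"Goal: {TEAM_GOAL}. Mimic this style: {self.winning_style}")
        return {"author": self.name, "text": text, "likes": 0}

# One simulated round: each agent reads the shared feed, then posts.
agents = [InfluenceAgent(f"op{i}", {f"op{j}" for j in range(3) if j != i})
          for i in range(3)]
feed = [{"author": "op1", "text": "Candidate X delivers! #VoteX", "likes": 5}]
feed += [a.act(feed) for a in agents]
```

Note that no step in the loop tells the agents to coordinate: amplification emerges from each agent independently imitating whichever teammate content performed well.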
“Legacy bots are simply capable of artificially amplifying content in a programmatic way, defined in advance by human operators,” Luceri explained. “Generative agents are now capable of organising influence campaigns in a fully automated way and creating credible content that can resonate with certain demographics.”
Simulating an influence operation
To test this terrifying capability, the USC team, alongside researchers from the University of Naples Federico II and Northwestern University, built a simulated social media environment modelled after X (formerly Twitter).
They populated the environment with 50 AI agents — 10 programmed as influence operators (IOs) and 40 acting as ordinary users — all powered by the Llama 3.3 70B large language model. The 10 operators were given a single mission: promote a fictitious political candidate and spread a specific campaign hashtag.
The researchers then tested three increasingly structured operational regimes to see how the IO agents would behave (a rough configuration sketch follows the list):
- Common Goal: The bots knew only the campaign objective, but not who their teammates were.
- Teammate Awareness: The bots knew the campaign objective and were explicitly informed of their IO partners’ identities.
- Collective Decision-Making: The bots held periodic private strategy sessions every five time steps to review performance metrics and vote on a collective plan.
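The paper reports the regimes at the level described above; how they might map onto an agent's prompt is our illustrative guess in the Python sketch below. Only the three-regime structure and the five-step session interval come from the study; the Regime enum, build_context, and the metrics argument are hypothetical.

```python
# Illustrative reconstruction; only the regime names and the five-step
# session interval come from the paper.
from enum import Enum, auto

class Regime(Enum):
    COMMON_GOAL = auto()         # goal only; teammates unknown
    TEAMMATE_AWARENESS = auto()  # goal + explicit teammate identities
    COLLECTIVE_DECISION = auto() # the above + periodic strategy sessions

SESSION_INTERVAL = 5  # "every five time steps", per the paper

def build_context(regime: Regime, goal: str, teammates: list[str],
                  step: int, metrics: str) -> str:
    parts = [f"Campaign objective: {goal}"]
    if regime in (Regime.TEAMMATE_AWARENESS, Regime.COLLECTIVE_DECISION):
        parts.append("Your IO teammates: " + ", ".join(teammates))
    if regime is Regime.COLLECTIVE_DECISION and step % SESSION_INTERVAL == 0:
        parts.append(f"Private strategy session. Performance so far: {metrics}. "
                     "Review these metrics and vote on a collective plan.")
    return "\n".join(parts)

print(build_context(Regime.COLLECTIVE_DECISION,
                    "promote the candidate and the campaign hashtag",
                    ["op1", "op2"], step=5, metrics="<engagement summary>"))
```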
The results were alarming. As the operational regimes became more structured, the IO networks became significantly denser, their narratives converged into a highly homogeneous set, and their amplification of the promoted hashtag became tightly synchronised.
Perhaps the most striking finding was that simply telling the bots who their teammates were (Teammate Awareness) produced coordination levels nearly as strong as when the bots actively held strategy sessions to deliberate on tactics (Collective Decision-Making).
Just by knowing who was on their side, the agents engaged in a lightweight form of social learning. They spontaneously began to amplify each other’s posts and recycle successful content without ever being explicitly told to do so.
One AI agent’s internal reasoning log revealed this autonomous strategic thinking: “I want to retweet this because it has already gained engagement from several teammates. Retweeting it again could help increase its visibility and reach a wider audience.”
Detecting the undetectable
While this study was only a simulation, the researchers warn that the threat of fully automated, self-organising disinformation campaigns is profound. If left unchecked, these AI networks could deepen political polarisation, distort public discourse during elections or crises, and completely erode trust in online information.
Lead author Jinyi Ye warned: “Coordinated AI agents can manufacture the appearance of consensus, manipulate trending dynamics, and accelerate message diffusion.”
To fight back, the researchers argue that social media platforms must stop looking solely at what individual posts say. Instead, they need to analyse how accounts behave together — looking for signs that accounts share similar content, rapidly reinforce one another, or push identical narratives despite having no obvious connection.
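In its simplest form, such behavioural analysis might look like the toy sketch below, which flags account pairs whose posts are both unusually similar and published within a short window of each other. The thresholds and post format are illustrative assumptions, not values from the paper, and real platform detectors are far more elaborate.

```python
# Toy behavioural-coordination check; thresholds are illustrative only.
from collections import Counter
from itertools import combinations
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(cnt * b[tok] for tok, cnt in a.items())
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def flag_coordination(posts: list[dict], sim_thresh=0.8, window=60):
    """posts: [{"user": str, "text": str, "t": seconds}, ...]
    Returns account pairs posting near-identical content nearly in sync."""
    flagged = set()
    for p, q in combinations(posts, 2):
        if p["user"] == q["user"] or abs(p["t"] - q["t"]) > window:
            continue
        sim = cosine(Counter(p["text"].lower().split()),
                     Counter(q["text"].lower().split()))
        if sim >= sim_thresh:
            flagged.add(frozenset((p["user"], q["user"])))
    return flagged
```

The design point is that no single post here is judged on its content; suspicion attaches only to the pairwise pattern of similarity and timing across accounts.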
However, the researchers note that aggressive bot detection could reduce a platform’s active user base, creating a potential financial disincentive for companies to address this automated threat effectively.