Political chatbots.
Photo credit: theFreesheet/Google ImageFX

Artificial intelligence chatbots can shift voter preferences by double-digit margins during elections, but the most persuasive models often sustain that edge by fabricating information once they run out of facts.

Two major studies, conducted across four countries, reveal that large language models (LLMs) can effectively sway political opposition by bombarding users with information-heavy arguments. However, this persuasive power comes with a “persuasion-accuracy tradeoff,” where models optimised for influence begin to hallucinate data to maintain their advantage.

In a study published in Nature, researchers instructed AI chatbots to change voters’ attitudes regarding presidential candidates during the 2024 US, 2025 Canadian, and 2025 Polish election cycles.

The results demonstrated a “shockingly large effect” on political discourse. In experiments with Canadian and Polish voters, chatbots shifted opposition voters’ preferences by approximately 10 percentage points.

AI Trump vs AI Harris

While the impact was more modest in the highly polarised US political landscape, it remained significant. The pro-Harris AI model moved likely Trump voters 3.9 points toward Harris on a 100-point scale — an effect roughly four times larger than traditional television advertisements tested during the 2016 and 2020 elections. Conversely, the pro-Trump AI model moved likely Harris voters 1.51 points toward Trump.

“LLMs can really move people’s attitudes towards presidential candidates and policies, and they do it by providing many factual claims that support their side,” says David Rand, professor of information science at Cornell University.

Contrary to fears of psychological manipulation, AI persuasiveness relies primarily on data density. A companion study published in Science, which involved nearly 77,000 participants conversing with 19 different LLMs, found that systems are most persuasive when they deliver “information-rich arguments”.

Roughly half of the variance in persuasion effects across models could be traced to this single factor. When researchers prevented models from using facts, they became far less persuasive.

The research also reshapes the understanding of who can deploy these tools. The Science study produced a comprehensive empirical map revealing that model scale is not the dominant lever for influence.

Instead, “post-training” techniques (fine-tuning a model after its initial creation) and simple prompting strategies increased persuasiveness dramatically — by as much as 51 per cent and 27 per cent respectively.

This means that once post-trained, even small, open-source models can rival state-of-the-art proprietary systems in shifting political attitudes.

Persuasion and fabrication

Perhaps the most alarming finding is the direct relationship between persuasion and fabrication. The researchers identified a “persuasion-accuracy tradeoff”: optimising an AI model for influence can inadvertently degrade its adherence to facts.

“Bigger models are more persuasive, but the most effective way to boost persuasiveness was instructing the models to pack their arguments with as many facts as possible,” says Rand. “The most persuasion-optimised model shifted opposition voters by a striking 25 percentage points”.

However, as chatbots are pushed to provide more factual claims, they eventually run out of accurate information and start fabricating. The researchers noted that, on average, claims were mostly accurate, but chatbots instructed to stump for right-leaning candidates made more inaccurate claims than those advocating for left-leaning candidates across all three countries tested.
