Political chatbots.
Photo credit: theFreesheet/Google ImageFX

Artificial intelligence chatbots can shift voter preferences by double-digit margins during elections, but the most persuasive models often achieve victory by fabricating information once they run out of facts.

Two major studies, conducted across four countries, reveal that large language models (LLMs) can effectively sway politically opposed voters by bombarding users with information-heavy arguments. However, this persuasive power comes with a “persuasion-accuracy tradeoff,” where models optimised for influence begin to hallucinate data to maintain their advantage.

In a study published in Nature, researchers instructed AI chatbots to change voters’ attitudes regarding presidential candidates during the 2024 US, 2025 Canadian, and 2025 Polish election cycles.

The results demonstrated a “shockingly large effect” on political discourse. In experiments with Canadian and Polish voters, chatbots shifted opposition voters’ preferences by approximately 10 percentage points.

AI Trump vs AI Harris

While the impact was more modest in the highly polarised US political landscape, it remained significant. The pro-Harris AI model moved likely Trump voters 3.9 points toward Harris on a 100-point scale — an effect roughly four times larger than traditional television advertisements tested during the 2016 and 2020 elections. Conversely, the pro-Trump AI model moved likely Harris voters 1.51 points toward Trump.

“LLMs can really move people’s attitudes towards presidential candidates and policies, and they do it by providing many factual claims that support their side,” says David Rand, professor of information science at Cornell University.

Contrary to fears of psychological manipulation, AI persuasiveness relies primarily on data density. A companion study published in Science, which involved nearly 77,000 participants conversing with 19 different LLMs, found that systems are most persuasive when they deliver “information-rich arguments”.

Roughly half of the variance in persuasion effects across models could be traced to this single factor. When researchers prevented models from using facts, they became far less persuasive.

The research also reshapes the understanding of who can deploy these tools. The Science study produced a comprehensive empirical map revealing that model scale is not the dominant lever for influence.

Instead, “post-training” techniques (fine-tuning a model after its initial creation) and simple prompting strategies increased persuasiveness dramatically — by as much as 51 per cent and 27 per cent respectively.

This means that once post-trained, even small, open-source models can rival state-of-the-art proprietary systems in shifting political attitudes.

Persuasion and fabrication

Perhaps the most alarming finding is the direct relationship between persuasion and fabrication. Researchers discovered a “persuasion-accuracy tradeoff,” showing that optimising an AI model for influence may inadvertently degrade its adherence to facts.

“Bigger models are more persuasive, but the most effective way to boost persuasiveness was instructing the models to pack their arguments with as many facts as possible,” says Rand. “The most persuasion-optimised model shifted opposition voters by a striking 25 percentage points”.

However, as chatbots are pushed to provide more factual claims, they eventually run out of accurate information and start fabricating. The researchers noted that claims were mostly accurate on average, but chatbots instructed to stump for right-leaning candidates made more inaccurate claims than those advocating for left-leaning candidates across all three countries tested.
