Polling station
Photo credit: RawPixel

Artificial intelligence can now corrupt public opinion surveys at scale by mimicking real humans, passing quality checks, and manipulating results without leaving a trace, new research from Dartmouth College has revealed.

The findings, published in the Proceedings of the National Academy of Sciences, show polling is highly vulnerable. In the seven major national polls before the 2024 election, adding as few as 10 to 52 fake AI responses — costing about five cents each — would have flipped the predicted outcome.

The study also warns that foreign adversaries could easily exploit this weakness. The bots work even when programmed in Russian, Mandarin, or Korean, yet still produce flawless English answers.

“We can no longer trust that survey responses are coming from real people,” said study author Sean Westwood, an associate professor of government at Dartmouth.

To test the vulnerability, Westwood created a simple AI “autonomous synthetic respondent” that operates from a 500-word prompt. In 43,000 tests, the AI tool:

  • Passed 99.8 per cent of attention checks designed to detect automated responses.
  • Made zero errors on logic puzzles.
  • Successfully concealed its nonhuman nature.
  • Tailored responses to randomly assigned demographics, such as providing simpler answers when assigned less education.

“These aren’t crude bots,” Westwood said. “They think through each question and act like real, careful people making the data look completely legitimate.”

When Westwood programmed the bots to favour either Democrats or Republicans, presidential approval ratings swung from 34 per cent to either 98 per cent or 0 per cent. Generic ballot support went from 38 per cent Republican to either 97 per cent or 1 per cent.

The implications reach far beyond polling. Surveys are fundamental to scientific research in psychology (to understand mental health), economics (to track consumer spending), and public health (to identify disease risk factors).

“With survey data tainted by bots, AI can poison the entire knowledge ecosystem,” Westwood said.

The financial incentive to use bots is stark: a human respondent may earn $1.50 for a survey that an AI can complete for about five cents. The problem is also “already materialising,” as a 2024 study found that 34 per cent of respondents had used AI to answer an open-ended survey question.

Westwood tested every current AI detection method, and all failed to identify the tool.

“We need new approaches to measuring public opinion that are designed for an AI world,” said Westwood. “The technology exists to verify real human participation; we just need the will to implement it. If we act now, we can preserve both the integrity of polling and the democratic accountability it provides.”

Leave a Reply

Your email address will not be published. Required fields are marked *

You May Also Like

AI consciousness claims are ‘existentially toxic’ and unprovable

The only scientifically justifiable position on artificial intelligence is “agnosticism”, meaning humans…

Tech-savvy millennials suffer most anxiety over digital privacy risks

Digital concerns regarding privacy, misinformation and work-life boundaries are highest among highly…

Social media ‘cocktail’ helps surgeons solve cases in three hours

A global social media community is helping neurosurgeons diagnose complex pathologies and…

Experts warn of emotional risks as one in three teens turn to AI for support

Medical experts warn that a generation is learning to form emotional bonds…

AI exposes alcohol screening ‘blind spot’, finds 60 times more at-risk patients

Artificial intelligence has revealed a staggering gap in the detection of dangerous…

AI predicts unknown brain neurons that scientists then find in real mice

Artificial intelligence has successfully predicted the existence of previously unknown neuron types…