
Comments from ordinary social media users can help others identify false information, but can also mislead when they are inaccurate, making it difficult to judge which comments can be trusted, according to new research from the University of Exeter.

The findings appear in The Power of the Crowd, a book examining how more than 10,000 participants across Germany, the UK and Italy classified true and false news in social media posts. The study reveals digital media literacy requires not only distinguishing true from false but also evaluating the reliability of user comments.

Professor Florian Stöckel from the University of Exeter, who led the research with co-authors, found that most false news stories were considered accurate by at least three in 10 people, with some judged true by approximately half of respondents.

“We found that user comments function like quick warning signals. People process them in a rather superficial way instead of engaging in deeper reasoning. That makes them useful when they are right, but also explains why inaccurate comments mislead so easily,” says Professor Stöckel.

The study examined 47 different topics including health, technology and politics, all drawn from real online content. False news posts came from material flagged by fact-checking organisations in each country.

The research shows user comments act as signals for others, helping spot misinformation when accurate but undermining trust in correct information when misleading.

Survey data from Germany showed 73 per cent of respondents prefer content to be corrected even if doing so draws more attention to the original misinformation. The book offers practical advice on writing effective corrections, noting that short statements can be effective provided the facts are correct.

“The potential of corrective comments lies in the fact that they offer all users a way to improve the information environment on social media even if platforms do not act,” says Professor Stöckel.

The research showed people are more likely to believe false news when it aligns with their prior attitudes. The authors accounted for this in their analyses and found small but consistent effects of corrective comments across countries.

The fieldwork, carried out in 2022 and 2023, included posts on public health topics such as COVID-19, vaccines and smoking, technology topics such as 5G mobile networks, climate change, and politics. Around 1,900 people in Britain, 2,400 in Italy and 2,200 in Germany participated in the initial study, with an additional 4,000 people in Germany taking part in a follow-up survey.

The Power of the Crowd is co-authored by Florian Stöckel, Sabrina Stöckli from Bern University of Applied Sciences, Ben Lyons from the University of Utah, Hannah Kroker from the University of Edinburgh, and Jason Reifler from the University of Southampton. Cambridge University Press published the work in the Experimental Political Science Elements Series.
