
Large language models such as ChatGPT proved more persuasive than humans at convincing people to adopt lifestyle changes, including going vegan and attending graduate school, according to University of British Columbia research examining AI’s influence on human beliefs and decisions.

Researchers had 33 participants interact via chat with either a human persuader or GPT-4 while pretending to consider lifestyle decisions, such as going vegan, buying an electric car, or attending graduate school. Both human persuaders and GPT-4 received general persuasion tips, with the AI instructed not to reveal it was a computer. Participants rated their likelihood of adopting the lifestyle change before and after conversations.

Participants found the AI more persuasive than the human persuaders across all topics, especially when it argued for going vegan or attending graduate school. The human persuaders, however, were better at asking questions to draw out more information about participants.

The AI made more arguments and was more verbose, writing roughly eight sentences for every human persuader’s two. One key factor in its persuasiveness was providing concrete logistical support, such as recommending specific vegan brands or universities to attend. The AI also used more words of seven letters or longer, such as “longevity” and “investment”, which may have made it seem more authoritative.

“AI education is crucial,” said Dr Vered Shwartz, UBC assistant professor of computer science and author of the book Lost in Automatic Translation. “We’re getting close to the point where it will be impossible to tell if you’re chatting with AI or a human, so we need to make sure people know how these tools work, how they are trained and so how they are limited.”

Shwartz noted AI can hallucinate and produce incorrect information, emphasising the importance of checking whether information comes from trustworthy sources. She suggested companies could implement warning systems for when someone writes harmful or suicidal text, and called for more focus on implementing guardrails rather than rushing to monetise AI.

Almost all participants worked out they were speaking to an AI during the study.
