
Large language models such as ChatGPT proved more persuasive than humans at convincing people to adopt lifestyle changes, such as going vegan or attending graduate school, according to University of British Columbia research examining AI’s influence on human beliefs and decisions.

Researchers had 33 participants interact via chat with either a human persuader or GPT-4 while pretending to consider lifestyle decisions, such as going vegan, buying an electric car, or attending graduate school. Both human persuaders and GPT-4 received general persuasion tips, with the AI instructed not to reveal it was a computer. Participants rated their likelihood of adopting the lifestyle change before and after conversations.

Participants found the AI more persuasive than humans across all topics, particularly on going vegan and attending graduate school. Human persuaders were better at asking questions to gather more information about participants.

The AI made more arguments and was more verbose, writing eight sentences for every human persuader’s two. One key factor in its persuasiveness was providing concrete logistical support, such as recommending specific vegan brands or universities to attend. The AI also used more words of seven letters or longer, such as “longevity” and “investment”, which may have made it seem more authoritative.

“AI education is crucial,” said Dr Vered Shwartz, UBC assistant professor of computer science and author of the book Lost in Automatic Translation. “We’re getting close to the point where it will be impossible to tell if you’re chatting with AI or a human, so we need to make sure people know how these tools work, how they are trained, and so how they are limited.”

Shwartz noted AI can hallucinate and produce incorrect information, emphasising the importance of checking whether information comes from trustworthy sources. She suggested companies could implement warning systems if someone writes harmful or suicidal text, and called for more focus on implementing guardrails rather than rushing to monetise AI.

Almost all participants worked out that they were speaking to an AI during the study.
