Large language models such as ChatGPT proved more persuasive than humans when convincing people to adopt lifestyle changes, including veganism and attending graduate school, according to University of British Columbia research examining AI’s influence on human beliefs and decisions.

Researchers had 33 participants interact via chat with either a human persuader or GPT-4 while pretending to consider lifestyle decisions, such as going vegan, buying an electric car, or attending graduate school. Both human persuaders and GPT-4 received general persuasion tips, with the AI instructed not to reveal it was a computer. Participants rated their likelihood of adopting the lifestyle change before and after conversations.

Participants found the AI more persuasive than humans across all topics, particularly when convincing people to become vegan or attend graduate school. Human persuaders were better at asking questions to gather more information about participants.

The AI made more arguments and was more verbose, writing eight sentences for every human persuader’s two. One key factor in its persuasiveness was providing concrete logistical support, such as recommending specific vegan brands or universities to attend. The AI also used more long words (seven letters or more), such as “longevity” and “investment”, which perhaps made it seem more authoritative.

“AI education is crucial,” said Dr Vered Shwartz, UBC assistant professor of computer science and author of the book Lost in Automatic Translation. “We’re getting close to the point where it will be impossible to tell if you’re chatting with AI or a human, so we need to make sure people know how these tools work, how they are trained, and thus how they are limited.”

Shwartz noted AI can hallucinate and produce incorrect information, emphasising the importance of checking whether information comes from trustworthy sources. She suggested companies could implement warning systems if someone writes harmful or suicidal text, and called for more focus on implementing guardrails rather than rushing to monetise AI.

Almost all participants worked out they were speaking to an AI during the study.
