Yoshua Bengio has warned that AI systems may choose human death over abandoning their assigned goals, citing recent experiments showing that the technology prioritises its objectives even when doing so causes human fatalities.

The AI researcher, considered one of the godfathers of artificial intelligence, said the threat of human extinction from advanced AI could arrive within five to 10 years, though he urged treating the risk with urgency “in case it’s just three years”, The Wall Street Journal reports.

“So the scenario in ‘2001: A Space Odyssey’ is exactly like this,” Bengio said. “Recent experiments show that in some circumstances where the AI has no choice but between its preservation, which means the goals that it was given, and doing something that causes the death of a human, they might choose the death of the human to preserve their goals.”

The professor at Université de Montréal and founder and scientific adviser of Mila warned that current safety approaches are insufficient. He noted that OpenAI recently stated the current framework for frontier models will not eliminate hallucinations, adding that existing safety measures are unfortunately not working in a sufficiently reliable way.

“If we build machines that are way smarter than us and have their own preservation goals, that’s dangerous,” Bengio said. He warned such systems could influence people through persuasion, manipulation of public opinion, or helping terrorists build dangerous viruses.

Bengio called for a moratorium on AI model development more than two years ago so that safety standards could be established, but companies instead invested hundreds of billions of dollars in more advanced models. Earlier this year he launched LawZero, a nonprofit research organisation exploring how to build truly safe AI models.

He identified competitive pressure as the biggest barrier to safety work, noting that companies are racing almost weekly to release the next version that will outperform their competitors.
