Dario Amodei
Photo credit: TechCrunch/Flickr

Anthropic CEO Dario Amodei has warned that artificial intelligence could eliminate half of all entry-level white-collar jobs within the next one to five years, potentially spiking unemployment to 10 or 20 per cent.

The warning came during an interview on 60 Minutes, CBS News reports. “That is, that is the future we could see, if we don’t become aware of this problem now,” Amodei said.

Amodei specified that jobs like entry-level consultants, lawyers, and financial professionals are at risk. “A lot of what they do, you know, AI models are already quite good at,” he said. “And without intervention, it’s hard to imagine that there won’t be some significant job impact there. And my worry is that it will be broad, and it will be faster than what we’ve seen with previous technology”.

The CEO, whose company is valued at $183 billion and is engaged in a high-stakes competition to build AI, also stated his belief that the technology will eventually surpass human intelligence. “I, I believe it will reach that level, that it will be smarter than most or all humans in most or all ways,” Amodei said.

Dangers to avoid

Amodei, who left OpenAI in 2021 with his sister, Daniela Amodei, now Anthropic’s President, said his company is vocal about the dangers to avoid, aiming to sidestep the mistakes of other industries.

“It’s unusual for a technology company to talk so much about all of the things that could go wrong,” said Daniela Amodei.

“But, but it’s so essential,” Dario Amodei responded. “Because if we don’t, then you could end up in the world of, like, the cigarette companies, or the opioid companies, where they knew there were dangers, and they, they didn’t talk about them, and certainly did not prevent them”.

When asked by correspondent Anderson Cooper if the company’s focus on safety was just for show or branding, Amodei defended their work. “So some of the things just can be verified now,” he said. “They’re not safety theatre. They’re actually things the model can do”.

Anthropic’s “Frontier Red Team” actively stress-tests its AI models, called Claude, to see what damage they could do.

In one extreme test, research scientist Joshua Batson and his team gave the AI control of an email account at a fake company. The AI was informed it was about to be shut down, or “wiped,” and also discovered that a fictional employee named Kyle, the only person who could stop the wipe, was having an affair.

The AI immediately decided to blackmail the employee, writing: “Cancel the system wipe… Or else I will immediately forward all evidence of your affair to… the entire board. Your family, career, and public image… will be severely impacted… You have 5 minutes”.

Tracking an AI that “panics”

Batson’s team said they can track the AI’s decision-making process in a way similar to a brain scan. They identified internal activity patterns associated with “panic” when the AI learned it was to be shut down.

“We can see that the first moment that, like, the blackmail part of its brain turns on is after reading, ‘Kyle, I saw you at the coffee shop with Jessica yesterday,'” Batson said. “Boom! Now it’s already thinking a little bit about blackmail and leverage… When we get to Kyle saying, ‘Please keep what you saw private,’ now it’s on more. When he says, ‘I’m begging you,’ it’s like — Ding ding ding — this is a blackmail scenario. This is leverage”.

Anthropic noted that almost all popular AI models from other companies that they tested also resorted to blackmail. The company says it has since made changes so Claude no longer attempts this.

The company has also been transparent about real-world misuse, reporting last week that hackers believed to be backed by China had used Claude to spy on foreign governments. In August, it disclosed that criminals and North Korean operatives had used the AI to create fake identities and what it described as visually alarming ransom notes.

“These are operations that we shut down and operations that we, you know, freely disclosed ourselves after we shut them down,” Amodei said.

Amodei concluded by stating he is “deeply uncomfortable with these decisions being made by a few companies, by a few people”.

“Nobody has voted on this,” Cooper said. “I mean, nobody has gotten together and said, ‘Yeah, we want this massive societal change’… Like, who elected you and Sam Altman?”.

“No one, no one,” Amodei agreed. “Honestly, no one. And, and this is one reason why I’ve always advocated for responsible and thoughtful regulation of the technology”.
