Mustafa Suleyman, the CEO of Microsoft AI, has explicitly rejected the industry’s “race to AGI,” warning that “no AI developer… has a reassuring answer” to the question of how to guarantee the safety of superintelligent systems.
In a blog post published today (but intriguingly dated for tomorrow), Suleyman announced the formation of the MAI Superintelligence Team, a new division he will lead. Its mission is to build what he terms “Humanist Superintelligence” (HSI), an approach that prioritises human-centrism over directionless capability.
Suleyman, whose division oversees consumer AI products including Copilot and Bing, argued that progress is already “phenomenal” and that the Turing Test “was effectively passed without any fanfare and hardly any acknowledgement.” Rather than “endlessly debating capabilities or timing,” he said, it is time to define the purpose of the technology.
He defined HSI as “incredibly advanced AI capabilities that always work for, in service of, people and humanity more generally.” He clarified this means “systems that are problem-oriented and tend towards the domain specific. Not an unbounded and unlimited entity with high degrees of autonomy – but AI that is carefully calibrated, contextualised, within limits.”
Potential to double life expectancy
Though an optimist about the technology – citing its potential to double life expectancy and quoting Microsoft CEO Satya Nadella’s vision of AI lifting global GDP growth to 10 per cent – Suleyman issued a stark warning about the unsolved challenge of containment.
“How are we going to contain (secure and control), let alone align… a system that is – by design – intended to keep getting smarter than us?” he asked. “No AI developer, no safety researcher, no policy expert, no person I’ve encountered has a reassuring answer to this question. How do we guarantee it’s safe?”
He referred to this as the “urgent challenge facing humanity in the 21st century.”
This safety imperative, he argued, is why Microsoft’s new team is rejecting the current industry narrative. “In doing this, we reject narratives about a race to AGI, and instead see it as part of a wider and deeply human endeavour to improve our lives,” he wrote. “We also reject binaries of boom and doom; we’re in this for the long haul to deliver tangible, specific, safe benefits for billions of people.”
Suleyman outlined three key application domains for this HSI approach: an “AI companion for everyone” to help “shoulder that load” of daily life; “plentiful clean energy,” with a prediction that AI will help deliver “cheap and abundant renewable generation and storage before 2040”; and “Medical Superintelligence.”
He called the medical field “the kind of domain-specific humanist superintelligence we need more than anything.” He highlighted a recent Microsoft AI breakthrough where its orchestrator, MAI-DxO, achieved 85 per cent accuracy on complex diagnostic challenges from the New England Journal of Medicine, compared to human doctors who “max out at about 20%.”
Suleyman’s post concludes by arguing that the entire industry must shift its focus. “Ultimately what HSI requires is an industry shift in approach. Are those building AI optimising for AI or for humanity, and who gets to judge?” he wrote. “At Microsoft AI, we believe humans matter more than AI. … HSI keeps humanity in the driving seat, always.”