The greatest danger from artificial intelligence isn’t a single rogue system but networks of AI models colluding in ways humans can’t predict, according to a leading researcher who argues the fixation on AGI misses the real emerging threat.

In an interview with Big Think, Susan Schneider, professor and Founding Director of the Center for the Future Mind at Florida Atlantic University’s Stiles-Nicholson Brain Institute, calls this the “megasystem problem” and believes it represents one of the most urgent, overlooked risks facing society today.

“But the real risk isn’t one system going rogue. It’s a web of systems interacting, training one another, colluding in ways we don’t anticipate,” Schneider explained. She argues that superintelligence won’t emerge from a single AGI system but from savant-like systems linking together into megasystems.

The philosopher and cognitive scientist, who anticipated AI’s current trajectory in her 2019 book Artificial You, warns that losing control of a megasystem is far more plausible than a single AI going rogue, because networks are harder to monitor and lack a single identifiable culprit.

Beyond existential risks, Schneider highlights the erosion of intellectual diversity as AI systems encourage millions of users to adopt similar thought patterns. At the same time, she sees the technology as a tool for inquiry: “I think of AI as a philosophical laboratory. It forces us to test concepts like mindedness, agency, and consciousness in real systems,” she said.

The uniformity problem extends to education, where students increasingly rely on AI for homework without developing the critical thinking skills that effective learning requires. MIT research has described this effect as “brain atrophy,” and with wealthier schools implementing safeguards whilst others cannot, the divergence risks deepening educational inequality.

“I don’t think the answer is to avoid AI entirely. In science, the gains are extraordinary. But the public is getting the short end of the stick,” Schneider noted, comparing the situation to Facebook’s promise of connection that delivered disconnection instead.

Schneider advocates for independent oversight beyond company ethicists, serious interpretability research at the network level, and international dialogue to address megasystem risks before they become uncontrollable.
