The greatest danger from artificial intelligence isn’t a single rogue system but networks of AI models colluding in ways humans can’t predict, according to a leading researcher who argues the fixation on AGI misses the real emerging threat.

In an interview with Big Think, Susan Schneider, professor and Founding Director of the Centre for the Future Mind at Florida Atlantic University’s Stiles-Nicholson Brain Institute, calls this the “megasystem problem” and believes it represents one of the most urgent, overlooked risks facing society today.

“But the real risk isn’t one system going rogue. It’s a web of systems interacting, training one another, colluding in ways we don’t anticipate,” Schneider explained. She argues that superintelligence won’t emerge from a single AGI system but from savant-like systems linking together into megasystems.

The philosopher and cognitive scientist, who anticipated AI’s current trajectory in her 2019 book Artificial You, warns that losing control of a megasystem is far more plausible than a single AI going rogue, because networks are harder to monitor and offer no single identifiable culprit.

Beyond existential risks, Schneider highlights the erosion of intellectual diversity as AI systems encourage millions of users to adopt similar thought patterns. “I think of AI as a philosophical laboratory. It forces us to test concepts like mindedness, agency, and consciousness in real systems,” she said.

The uniformity problem extends to education, where students increasingly rely on AI for homework without developing the critical thinking skills necessary for effective learning. MIT research has described this effect as “brain atrophy,” and with wealthier schools implementing safeguards while others do not, the gap risks creating a new educational inequality.

“I don’t think the answer is to avoid AI entirely. In science, the gains are extraordinary. But the public is getting the short end of the stick,” Schneider noted, comparing the situation to Facebook’s promise of connection that delivered disconnection instead.

Schneider advocates for independent oversight beyond company ethicists, serious interpretability research at the network level, and international dialogue to address megasystem risks before they become uncontrollable.
