The greatest danger from artificial intelligence isn’t a single rogue system but networks of AI models colluding in ways humans can’t predict, according to a leading researcher who argues the fixation on AGI misses the real emerging threat.
In an interview with Big Think, Susan Schneider, professor and Founding Director of the Center for the Future Mind at Florida Atlantic University’s Stiles-Nicholson Brain Institute, calls this the “megasystem problem” and believes it represents one of the most urgent yet overlooked risks facing society today.
“But the real risk isn’t one system going rogue. It’s a web of systems interacting, training one another, colluding in ways we don’t anticipate,” Schneider explained. She argues that superintelligence won’t emerge from a single AGI system but from savant-like systems linking together into megasystems.
The philosopher and cognitive scientist, who predicted AI’s current trajectory in her 2019 book Artificial You, warns that losing control of a megasystem is far more plausible than a single AI going rogue because networks are more complex to monitor and lack a single identifiable culprit.
Beyond existential risks, Schneider highlights the erosion of intellectual diversity as AI systems nudge millions of users toward similar thought patterns. She also sees the technology as a tool for inquiry: “I think of AI as a philosophical laboratory. It forces us to test concepts like mindedness, agency, and consciousness in real systems,” she said.
The uniformity problem extends to education, where students increasingly rely on AI for homework without developing the critical thinking skills that effective learning requires. MIT research has described this effect as “brain atrophy”; wealthier schools are implementing safeguards while others are not, risking a widening educational inequality.
“I don’t think the answer is to avoid AI entirely. In science, the gains are extraordinary. But the public is getting the short end of the stick,” Schneider noted, comparing the situation to Facebook’s promise of connection that delivered disconnection instead.
Schneider advocates for independent oversight beyond company ethicists, serious interpretability research at the network level, and international dialogue to address megasystem risks before they become uncontrollable.