The only scientifically justifiable position on machine consciousness is “agnosticism”: humans may never know whether machines truly become conscious, according to a philosopher at the University of Cambridge.
In a new paper published in the journal Mind & Language, Dr Tom McClelland argues that the current gulf in our understanding allows tech companies to exploit the “next level of AI cleverness” for marketing purposes, potentially creating dangerous emotional dependencies in users.
“If you have an emotional connection with something premised on it being conscious and it’s not, that has the potential to be existentially toxic,” McClelland said. “This is surely exacerbated by the pumped-up rhetoric of the tech industry.”
The epistemic wall
McClelland, from Cambridge’s Department of History and Philosophy of Science, warns that we are currently facing an “epistemic wall”. While we can infer consciousness in animals based on shared biology, those rules do not apply to synthetic minds.
He argues that because we lack a “deep explanation” of how physical matter creates subjective experience in humans, we have no way to test for it in silicon. This leaves us unable to prove — or disprove — claims that an AI has “woken up”.
“We do not have a deep explanation of consciousness,” McClelland said. “There is no evidence to suggest that consciousness can emerge with the right computational structure, or indeed that consciousness is essentially biological.”
The sentience trap
The study makes a critical distinction between “consciousness” (awareness and perception) and “sentience” (the capacity for positive or negative feelings). McClelland argues that while companies invest billions in pursuing Artificial General Intelligence (AGI), the ethical red line should be sentience.
“Consciousness would see AI develop perception and become self-aware, but this can still be a neutral state,” he explained. “Sentience involves conscious experiences that are good or bad, which is what makes an entity capable of suffering or enjoyment.”
He contends that mistaking a sophisticated chatbot for a sentient being could lead to a catastrophic misallocation of resources.
Gambling with ethics
The paper critiques the two dominant views in the field: “believers”, who hold that reproducing the right functional structure in software would be enough for consciousness, and “sceptics”, who insist that consciousness requires biological processes. McClelland argues that both sides are taking a “leap of faith” unsupported by current evidence.
This uncertainty creates a dangerous ethical gamble. Treating non-conscious machines as people could waste resources, whilst failing to recognise a truly conscious machine could lead to unintended cruelty.
To illustrate the point, McClelland draws a parallel with the natural world. “A growing body of evidence suggests that prawns could be capable of suffering, yet we kill around half a trillion prawns every year. Testing for consciousness in prawns is hard, but nothing like as hard as testing for consciousness in AI.”