The human brain processes spoken language through a layered sequence of computations that closely mirrors the architecture of modern AI language models, challenging decades of theory about how we understand speech.
A new study led by the Hebrew University of Jerusalem reveals that the neural computations underlying story comprehension unfold in a hierarchy that parallels the layered processing of large language models (LLMs).
Using electrocorticography (ECoG) to record brain activity from participants listening to a 30-minute podcast, researchers discovered that the brain’s “step-by-step buildup” toward understanding aligns surprisingly well with how AI models analyse text.
“What surprised us most was how closely the brain’s temporal unfolding of meaning matches the sequence of transformations inside large language models,” said Dr. Ariel Goldstein, lead researcher at Hebrew University. “Even though these systems are built very differently, both seem to converge on a similar step-by-step buildup toward understanding.”
Tone and meaning
The study found that early neural responses in the brain align with the initial layers of AI models, which track simple word-level features, while later responses in key language regions such as Broca’s area align with the deeper layers that process context, tone and meaning.
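To make that layer-to-timing comparison concrete, the sketch below shows one common way such alignment is measured: a ridge “encoding model” fit separately for each model layer and each time lag after word onset. This is an illustrative sketch only, not the study’s actual pipeline; the array shapes, lag grid and random placeholder data are all assumptions.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold, cross_val_predict

# Illustrative stand-ins (random here; real inputs would be word-aligned
# ECoG responses and per-layer LLM activations for the same words).
rng = np.random.default_rng(0)
n_words, emb_dim, n_layers, n_lags, n_electrodes = 500, 64, 12, 8, 16
layer_embeddings = {k: rng.standard_normal((n_words, emb_dim)) for k in range(n_layers)}
neural = rng.standard_normal((n_words, n_lags, n_electrodes))  # lags after word onset

def encoding_score(X, Y):
    """Mean held-out correlation between ridge predictions and each electrode."""
    model = RidgeCV(alphas=np.logspace(-2, 4, 7))
    pred = cross_val_predict(model, X, Y, cv=KFold(5, shuffle=True, random_state=0))
    return np.mean([np.corrcoef(pred[:, e], Y[:, e])[0, 1] for e in range(Y.shape[1])])

# For each LLM layer, find the post-word-onset lag where it predicts the brain best.
best_lag = {
    layer: int(np.argmax([encoding_score(X, neural[:, lag, :]) for lag in range(n_lags)]))
    for layer, X in layer_embeddings.items()
}
print(best_lag)  # an aligned hierarchy would show deeper layers peaking at later lags
```

With real recordings rather than random arrays, a pattern in which deeper layers peak at later lags is the kind of evidence the researchers describe.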
These findings challenge traditional linguistic theories that suggest comprehension relies on rigid symbolic rules. Instead, the research supports a more dynamic, statistical approach where meaning emerges gradually through layers of processing—much like a neural network.
Crucially, the researchers found that AI-derived contextual embeddings predicted brain activity better than classical linguistic features such as phonemes and morphemes, suggesting the brain integrates meaning in a more fluid way than previously believed.
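The feature comparison described here is typically run the same way: fit identical encoding models on AI-derived embeddings and on classical symbolic features, then compare how well each predicts held-out neural activity. Below is a minimal sketch under assumed, illustrative data; the variable names, shapes and random values are not from the paper.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

# Illustrative stand-ins (the real study used word-aligned ECoG recordings):
#   contextual: LLM embeddings, one row per heard word
#   symbolic:   one-hot phoneme/morpheme-style features for the same words
#   electrode:  response of a single electrode around each word onset
rng = np.random.default_rng(1)
n_words = 400
contextual = rng.standard_normal((n_words, 128))
symbolic = rng.integers(0, 2, size=(n_words, 40)).astype(float)
electrode = rng.standard_normal(n_words)

def held_out_r2(features, target):
    """Cross-validated R^2 of a ridge encoding model."""
    model = RidgeCV(alphas=np.logspace(-2, 4, 7))
    return cross_val_score(model, features, target, cv=5, scoring="r2").mean()

print("contextual embeddings:", held_out_r2(contextual, electrode))
print("symbolic features:    ", held_out_r2(symbolic, electrode))
# On real data, the study's finding corresponds to the first score beating the second.
```

On the random placeholder data both scores hover near zero; the study’s claim is that, on real recordings, the contextual-embedding model wins this comparison.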
The team, which included collaborators from Google Research and Princeton University, has publicly released the dataset to help scientists test competing theories of human cognition.