Artificial intelligence may have cracked the code for delivering personalised education to large student populations, but only if institutions abandon open-internet models in favour of strictly curated data.
A new study from Dartmouth College reveals that automated platforms can successfully tailor instruction to individual needs at scale, provided the systems are restricted to vetted expert sources that curb the “hallucinations” common in general-purpose chatbots.
Researchers tracked 190 medical students at the Geisel School of Medicine as they used “NeuroBot TA,” an AI teaching assistant designed to provide round-the-clock support for neuroscience courses. The system uses retrieval-augmented generation (RAG) to anchor its responses exclusively to course textbooks and lecture slides.
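To see why this design discourages invention, it helps to look at what a RAG pipeline actually does: retrieve the passages closest to a student's question from a fixed corpus, then instruct the model to answer from those passages alone. The minimal Python sketch below is purely illustrative; the toy corpus, the crude lexical scoring (standing in for real embedding similarity), and the prompt wording are all invented here, not NeuroBot TA's actual implementation.

```python
# Minimal sketch of retrieval-augmented generation (RAG) grounding.
# Every name here (CORPUS, score, retrieve, build_prompt) is
# illustrative, not NeuroBot TA's real code.

from collections import Counter
import math

# A toy "vetted corpus": snippets drawn only from course materials.
CORPUS = [
    "The hippocampus is essential for consolidating new declarative memories.",
    "Cerebellar lesions typically produce ataxia and intention tremor.",
    "Dopaminergic neurons of the substantia nigra degenerate in Parkinson disease.",
]

def score(query: str, doc: str) -> float:
    """Crude word-overlap score standing in for embedding similarity."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    overlap = sum((q & d).values())
    return overlap / math.sqrt(len(doc.split()) + 1)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus snippets most similar to the query."""
    return sorted(CORPUS, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Assemble a prompt that tells the model to answer only from the
    retrieved course excerpts, and to refuse if they don't cover it."""
    context = "\n".join(f"- {snippet}" for snippet in retrieve(query))
    return (
        "Answer using ONLY the course excerpts below. "
        "If they do not contain the answer, say so.\n"
        f"Excerpts:\n{context}\n\nQuestion: {query}"
    )

print(build_prompt("What brain structure consolidates declarative memory?"))
```

Because the generation step only ever sees vetted excerpts, an answer the model cannot ground in the course material becomes a refusal rather than a guess.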
“We’re showing that AI can scale personalised learning, all while gaining students’ trust,” said Thomas Thesen, associate professor of medical education. “This has implications for future learning with AI, especially in low-resource settings.”
Curated accuracy builds confidence
The study found that trust is the primary barrier to scaling AI in education. Of the 143 students who provided feedback, the vast majority preferred the restricted NeuroBot system over broader tools like ChatGPT because it guaranteed accuracy.
By limiting the AI to verified course materials, the researchers minimised the risk of the system inventing facts, a flaw that frequently undermines student confidence in educational technology.
“Transparency builds trust,” said Thesen. “Students appreciated knowing that answers were grounded in their actual course materials rather than drawn from training data based on the entire internet, where information quality and relevance varies.”
While the technology successfully delivered personalised support to nearly 200 students simultaneously, the study uncovered a critical vulnerability in how learners interact with automated tutors. Students primarily used the tool for rapid fact-checking before exams rather than engaging in deep, exploratory learning.
The researchers warn that while AI can solve the scalability problem in education, it risks creating a false sense of competence if students rely on it too heavily for quick answers.
“There is an illusion of mastery when we cognitively outsource all of our thinking and learning to AI, but we’re not really learning,” said Thesen.
To address this, the team is developing hybrid approaches that incorporate Socratic tutoring, in which the AI asks guiding questions rather than providing immediate answers, so that the scalability of AI support does not come at the cost of critical thinking skills.
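In such a hybrid, the grounding step can stay exactly the same while only the instruction layer changes. The sketch below, with wording and names invented for illustration rather than taken from the team's design, shows one way retrieval-grounded context might be wrapped in Socratic rules.

```python
# Hypothetical sketch of a Socratic instruction layer over RAG context.
# The rule text and function names are illustrative assumptions.

SOCRATIC_RULES = (
    "You are a neuroscience teaching assistant. Do not state the answer "
    "outright. Ask one guiding question at a time that leads the student "
    "toward it, grounded only in the provided course excerpts, and give "
    "brief feedback on the student's reasoning at each step."
)

def socratic_prompt(course_excerpts: str, student_question: str) -> str:
    """Wrap retrieval-grounded context in Socratic tutoring instructions."""
    return (
        f"{SOCRATIC_RULES}\n\n"
        f"Excerpts:\n{course_excerpts}\n\n"
        f"Student question: {student_question}"
    )

print(socratic_prompt(
    "- The hippocampus is essential for consolidating declarative memories.",
    "Which structure consolidates new declarative memories?",
))
```

The same curated corpus still bounds what the tutor can say; only the delivery shifts from handing over answers to prompting the student to reason toward them.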