AI teacher.
Photo credit: theFreesheet/Google ImageFX

Artificial intelligence may have cracked the code for delivering personalised education to mass student populations, but only if institutions abandon open-internet models in favour of strictly curated data.

A new study from Dartmouth College reveals that automated platforms can successfully tailor instruction to individual needs at scale, provided the systems are restricted to vetted expert sources that eliminate the “hallucinations” common in general-purpose chatbots.

Researchers tracked 190 medical students at the Geisel School of Medicine as they used “NeuroBot TA,” an AI teaching assistant designed to provide round-the-clock support for neuroscience courses. The system uses retrieval-augmented generation (RAG) to anchor its responses exclusively to course textbooks and lecture slides.
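For readers curious how that grounding works in practice, here is a minimal sketch of a RAG pipeline restricted to a vetted corpus. It is illustrative only, not NeuroBot TA's actual code: the passages are invented, and a toy keyword-overlap retriever stands in for the embedding models production systems typically use.

```python
# Minimal sketch of retrieval-augmented generation (RAG) over a vetted
# corpus. Illustrative only -- not NeuroBot TA's actual implementation.
import math
import re
from collections import Counter

# Hypothetical stand-ins for vetted course materials (textbook/slide excerpts).
COURSE_PASSAGES = [
    "The hippocampus is essential for forming new declarative memories.",
    "Dopaminergic neurons in the substantia nigra degenerate in Parkinson's disease.",
    "The primary motor cortex lies in the precentral gyrus of the frontal lobe.",
]

def _vectorize(text: str) -> Counter:
    """Bag-of-words vector; a real system would use learned embeddings."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def _cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return dot / norm if norm else 0.0

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k course passages most similar to the question."""
    q = _vectorize(question)
    ranked = sorted(
        COURSE_PASSAGES, key=lambda p: _cosine(q, _vectorize(p)), reverse=True
    )
    return ranked[:k]

def build_grounded_prompt(question: str) -> str:
    """Anchor the model to retrieved passages and forbid outside knowledge."""
    context = "\n".join(f"- {p}" for p in retrieve(question))
    return (
        "Answer using ONLY the course material below. If the material does "
        "not contain the answer, say so.\n"
        f"Course material:\n{context}\n\nQuestion: {question}"
    )

print(build_grounded_prompt("Which brain region is critical for new memories?"))
```

Because the prompt both supplies the retrieved passages and instructs the model to refuse when they fall short, the chatbot has nowhere to invent facts from, which is the core of the trust argument the Dartmouth team makes.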

“We’re showing that AI can scale personalised learning, all while gaining students’ trust,” said Thomas Thesen, associate professor of medical education. “This has implications for future learning with AI, especially in low-resource settings.”

Curated accuracy builds confidence

The study found that trust is the primary barrier to scaling AI in education. Of the 143 students who provided feedback, the vast majority preferred the restricted NeuroBot system over broader tools like ChatGPT because it guaranteed accuracy.

By limiting the AI to verified course materials, the researchers eliminated the risk of the system inventing facts, a flaw that frequently undermines student confidence in educational technology.

“Transparency builds trust,” said Thesen. “Students appreciated knowing that answers were grounded in their actual course materials rather than drawn from training data based on the entire internet, where information quality and relevance vary.”

While the technology successfully delivered personalised support to nearly 200 students simultaneously, the study uncovered a critical vulnerability in how learners interact with automated tutors. Students primarily used the tool for rapid fact-checking before exams rather than engaging in deep, exploratory learning.

The researchers warn that while AI can solve the scalability problem in education, it risks creating a false sense of competence if students rely on it too heavily for quick answers.

“There is an illusion of mastery when we cognitively outsource all of our thinking and learning to AI, but we’re not really learning,” said Thesen.

To address this, the team is developing hybrid approaches that incorporate Socratic tutoring — where the AI asks guiding questions rather than providing immediate answers — to ensure the scalability of AI support does not come at the cost of critical thinking skills.
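In prompt terms the shift is small. The sketch below, which reuses the curated context from the earlier example, shows one way a Socratic mode might be wired in; the instruction wording is an assumption for illustration, not the team's.

```python
# Hedged sketch of a Socratic-mode prompt wrapper. The instruction text is
# illustrative; the researchers' actual hybrid approach is not published here.

SOCRATIC_INSTRUCTION = (
    "Do not state the answer directly. Ask the student one short guiding "
    "question at a time that leads them toward the answer, and wait for "
    "their reply before continuing."
)

def socratic_prompt(question: str, context: str) -> str:
    """Combine curated course context with a guiding-questions-only rule."""
    return (
        f"{SOCRATIC_INSTRUCTION}\n\n"
        f"Course material:\n{context}\n\n"
        f"Student question: {question}"
    )
```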
