Robot teachers.
Photo credit: theFreesheet/Google ImageFX

University students prefer to get academic advice from artificial intelligence rather than their own human professors — right up until the moment they realise it is actually an AI.

A new pilot study from the University of Cincinnati College of Nursing reveals that while chatbots consistently outperform human educators in blind tests, a deep-seated bias prevents students from actually trusting the technology once its identity is suspected.

Published in the Journal of Nursing Education, the research asked a small group of seven Doctor of Nursing Practice (DNP) students to submit complex statistical questions related to their capstone projects. The students were then given three blinded answers: one from a professor, one from a graduate assistant, and one from a custom AI chatbot.

The bias against machines

The participants rated each response on a scale of one to five for helpfulness and overall satisfaction, and were then asked to guess which of the three answers came from the chatbot.

Dr Joshua Lambert, an associate professor and biostatistician who led the study, revealed that the AI easily won the blind test.

Lambert said: “The students rated the chatbot’s response the highest in terms of overall satisfaction and helpfulness.”

However, the data offered a fascinating psychological twist. When asked to identify the AI, the students consistently assumed that the lowest-rated, least helpful response was the one generated by the chatbot.

Lambert explained: “Students preferred the large language model (LLM) chatbot’s responses when blinded yet demonstrated a bias against it when the source was suspected. This bias is likely rooted in a lack of trust, and trust may influence AI adoption by both students and professors.”

Lowering the barrier to learning

While the research team — which included Dr Robyn Stamm, Dr Shannon White, Dr Melanie Kroger-Jarvis, and Dr Bailey Martin — acknowledged that the small sample size means the findings act strictly as a pilot study, they argue it is a vital first step in understanding human-machine interaction in education.

Lambert believes that, despite this lack of trust, chatbots could serve a crucial role in university settings by lowering the intimidation factor of higher education. Students are often hesitant to ask human professors questions for fear of appearing uneducated, whereas a chatbot sidesteps that social anxiety.

Lambert said: “Sometimes the topics we cover are challenging or intimidating. Educators want something that will lower the barrier so students can ask any questions they like.”

