Artificial intelligence could fundamentally alter how human history is understood and preserved, with generative systems liable to erase the moral complexity and contradictions that define historical truth, according to new academic research.
Historian Jan Burzlaff warns that AI’s tendency to smooth over narrative fractures risks creating sanitised versions of the past that lose essential interpretive value, potentially reshaping collective memory for future generations.
The research, published online in a Taylor & Francis journal, examines how AI systems process Holocaust testimonies from 1995, revealing that whilst machines excel at summarisation, they cannot engage with the ethical ambiguity and contradictions that characterise authentic historical sources.
Burzlaff analysed five video testimonies from the Fortunoff Video Archive using ChatGPT, discovering that the system consistently missed emotionally complex moments that resist categorisation, whilst producing fluent but hollow historical summaries.
“AI renders the past too smoothly, rendering history as something already digested, already known,” Burzlaff observed, highlighting the technology’s preference for narrative coherence over historical fidelity.
The study establishes five methodological principles for historians working in the AI era: prioritising interpretation over description, creating rather than reproducing content, utilising corpora without becoming subsumed by them, rejecting algorithmic ethics frameworks, and maintaining human authorial voice.
The research raises particular concerns about AI’s approach to moral complexity, noting that algorithmic systems seek resolution through categorisation rather than engaging with the ambiguity inherent in historical sources.
Burzlaff argues this represents a fundamental shift requiring historians to reassert distinctly human interpretive practices that resist machine logic, warning that the stakes extend beyond academic methodology to the preservation of historical truth itself.
The findings carry implications for educational institutions increasingly integrating AI tools into humanities curricula, suggesting an urgent need for frameworks that protect critical thinking whilst leveraging technological capabilities.