Generative AI projects in public administration often persist even when the technology fails to perform as promised, as innovators deploy a web of justifications to sustain momentum, a new study reveals.
Research conducted by the University of Eastern Finland and Aalto University found that the allure of cutting-edge technology makes AI innovations almost “irresistible” to government bodies, insulating them from criticism even when they repeatedly break down.
“Our findings show that generative AI projects in public administration often continue not because the tools work well, but because a compelling set of justifications makes them hard to stop and declare the tools nonfunctional,” said Marta Choroszewicz, a Senior Researcher at the University of Eastern Finland and co-author of the study.
Insulated from criticism
The findings are based on nearly 1.5 years of ethnographic fieldwork in Finland. Researchers observed the development of a large language model (LLM) designed to help frontline claims specialists navigate complex welfare guidance documents.
Despite evident limitations in the tool’s accuracy, precision, and consistency, the innovation team managed to keep the project moving forward by relying on nine “justificatory frames”.
Five of these focused on familiar AI promises, such as efficiency, cost savings, employee well-being, and fairness. The other four frames were used to legitimise speed and experimentation, effectively normalising the project’s setbacks.
Blaming the users
When the tool repeatedly failed to deliver on its promises, the project was not halted. Instead, the narrative shifted.
The researchers found that the innovation team began to reframe success as being dependent on organisational change and the “AI skills” of the users, rather than the actual performance of the software. Technical failures were brushed off as an expected “learning process” associated with emerging technologies.
“By normalising setbacks as learning, the team maintained innovation momentum even when accuracy, precision and consistency remained out of reach,” noted co-author Antti Rannisto, a Doctoral Researcher at Aalto University.
The study also highlighted a growing divide between the “flexible world of innovation” and the controlled routines of frontline public sector workers. Innovators built powerful alliances with managers and consultants to secure resources, while asserting their authority over the claims specialists who were supposed to benefit from the tool.
The authors warn that the technical opacity of generative AI makes it difficult for public administrations to pinpoint the causes of failure, preventing them from pausing for critical re-evaluation and risking severe resource lock-in.