Local government and AIs.
Photo credit: theFreesheet/Google ImageFX

Generative AI projects in public administration often persist even when the technology fails to perform as promised, because innovators deploy a web of justifications that keeps the momentum alive, a new study reveals.

Research conducted by the University of Eastern Finland and Aalto University found that the allure of cutting-edge technology makes AI innovations almost “irresistible” to government bodies, insulating them from criticism even when they repeatedly break down.

“Our findings show that generative AI projects in public administration often continue not because the tools work well, but because a compelling set of justifications makes them hard to stop and declare the tools nonfunctional,” said Marta Choroszewicz, a Senior Researcher at the University of Eastern Finland and co-author of the study.

Insulated from criticism

The findings are based on nearly 1.5 years of ethnographic fieldwork in Finland. Researchers observed the development of a large language model (LLM) designed to help frontline claims specialists navigate complex welfare guidance documents.

Despite evident limitations in the tool’s accuracy, precision, and consistency, the innovation team managed to keep the project moving forward by relying on nine “justificatory frames”.

Five of these frames drew on familiar AI promises, such as efficiency, cost savings, employee well-being, and fairness. The other four were used to legitimise speed and experimentation, effectively normalising the project's setbacks.

Blaming the users

When the tool repeatedly failed to deliver on its promises, the project was not halted. Instead, the narrative shifted.

The researchers found that the innovation team began to reframe success as being dependent on organisational change and the “AI skills” of the users, rather than the actual performance of the software. Technical failures were brushed off as an expected “learning process” associated with emerging technologies.

“By normalising setbacks as learning, the team maintained innovation momentum even when accuracy, precision and consistency remained out of reach,” noted co-author Antti Rannisto, a Doctoral Researcher at Aalto University.

The study also highlighted a growing divide between the “flexible world of innovation” and the controlled routines of frontline public sector workers. Innovators built powerful alliances with managers and consultants to secure resources, while asserting their authority over the claims specialists who were supposed to benefit from the tool.

The authors warn that the technical opacity of generative AI makes it difficult for public administrations to pinpoint the causes of failure, preventing them from pausing for critical re-evaluation and risking severe resource lock-in.

