Tired computer user. Photo credit: Anna Tarazevich/Pexels

Significant progress has been made with the introduction of biometrics and passkeys, but most systems used by technology companies still require users to navigate a maze of complex rules with little choice in how they protect their data.

A new study published in the journal Computers & Security argues that the industry’s “fragmented approach” is increasing cognitive load and creating barriers for those with physical or cognitive limitations.

Researchers from the University of Plymouth and the University of Nottingham, who have monitored password practices for two decades, warn that usability and security are often treated as competing objectives rather than coexisting goals.

“Technology is now fundamental to every aspect of our daily lives,” says Professor Nathan Clarke, a cyber security expert at the University of Plymouth. “Each of us may need to authenticate something at least 100 times a day, whether that’s accessing our mobile phones, our computer devices or apps and software within them.”

The friction problem

The study highlights that users are currently forced to engage with a chaotic mix of authentication methods—including passwords, PINs, tokens, and biometrics—across different devices and services throughout the day.

This constant switching creates “unnecessary friction,” making security feel like a series of interruptions rather than seamless protection.

“If we authenticate over 100 times a day, then we don’t want this to seem like over 100 interruptions and delays,” says Professor Steven Furnell from the University of Nottingham. “We want protection to be the natural default position.”

A call for unity

The researchers are calling on technology providers to move away from “one-size-fits-all” models and unite behind consistent, user-centred approaches.

They argue that security measures should handle the technical complexity in the background to ensure safety, while remaining simple and flexible for the user on the front end.

Without this shift, the study warns of a risk of perpetuating systems that are “secure in theory but flawed in practice,” ultimately undermining user trust.
