University of Colorado at Boulder researchers have developed a framework for building artificial intelligence systems that earn public confidence, arguing that trust formation between humans and technology mirrors the social dynamics that helped early civilisations cooperate and survive.
Amir Behzadan, a professor in the Department of Civil, Environmental and Architectural Engineering and a fellow in the Institute of Behavioral Science at CU Boulder, leads research in the Connected Informatics and Built Environment Research (CIBER) Lab examining how AI technology used in daily life can earn user confidence. The team created a framework for developing trustworthy AI tools that benefit people and society.
In a paper published in the journal AI and Ethics, Behzadan and PhD student Armita Dabiri drew on that framework to create a conceptual AI tool incorporating elements of trustworthiness. Behzadan explained that humans trust others when they make themselves vulnerable to potential harm whilst assuming positive intentions, and this concept transfers from human-human relationships to human-technology relationships.
Understanding trust formation
Behzadan studies the building blocks of human trust in AI systems used in the built environment, ranging from self-driving cars and smart home security systems to mobile public transportation apps and systems that facilitate collaboration on group projects. He says trust has a critical impact on whether people will adopt and rely on them.
Trust has been deeply embedded in human civilisation since ancient times, helping people cooperate, share knowledge and resources, form communal bonds and divide labour, according to Behzadan. Early humans began forming communities and trusting those within their inner circles, whilst mistrust arose as a survival instinct, making people more cautious when interacting with outsiders. Cross-group trade encouraged different groups to interact and become interdependent over time, but did not eliminate mistrust.
Behzadan suggests modern attitudes towards AI echo this trust-mistrust dynamic, especially when the technology is developed by corporations, governments, or other parties considered outsiders.
It knows its users
Many factors affect whether and how much people trust new AI technology. Each person has their own inclination towards trust, influenced by experiences, value systems, cultural beliefs and even the way their brains are wired.
Understanding of trust differs significantly from one person to the next, with reactions to trustworthy systems or people varying considerably between individuals, according to Behzadan. He emphasised that it is essential for developers to consider who the users of an AI tool are, including what social or cultural norms they follow, what their preferences might be, and how technologically literate they are.
Voice assistants like Amazon Alexa and Google Assistant offer simpler language, larger text displays on devices and a longer response time for older adults and people who are not as technologically savvy, according to Behzadan.
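As a rough illustration of that kind of adaptation, the sketch below shows how an assistant might tune its interface to a user profile. The profile fields, thresholds and settings are hypothetical, and the "longer response time" is interpreted here as giving the user more time to reply; none of this reflects how Alexa or Google Assistant are actually implemented.

```python
# Illustrative sketch: adapting an assistant's interface to a user profile.
# The profile fields, thresholds and settings below are hypothetical and are
# not drawn from any real Alexa or Google Assistant implementation.
from dataclasses import dataclass

@dataclass
class UserProfile:
    age: int
    tech_literacy: str  # "low", "medium" or "high"

@dataclass
class InterfaceSettings:
    language_level: str      # complexity of spoken responses
    text_scale: float        # multiplier for on-screen text size
    listen_timeout_s: float  # how long the assistant waits for a reply

def adapt_interface(profile: UserProfile) -> InterfaceSettings:
    """Return interface settings tuned to the user's age and tech literacy."""
    if profile.age >= 65 or profile.tech_literacy == "low":
        return InterfaceSettings(language_level="simple", text_scale=1.5, listen_timeout_s=12.0)
    return InterfaceSettings(language_level="standard", text_scale=1.0, listen_timeout_s=6.0)

print(adapt_interface(UserProfile(age=72, tech_literacy="low")))
```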
It’s reliable, ethical and transparent
Technical trustworthiness generally refers to how well an AI tool works, how safe and secure it is, and how easy it is for users to understand its functionality and data usage.
An optimally trustworthy tool must do its job accurately and consistently, Behzadan said. If it does fail, it should not harm people, property or the environment. It must also provide security against unauthorised access, protect users’ privacy and be able to adapt and keep working amid unexpected changes. It should also be free from harmful bias and should not discriminate against particular users or groups.
Transparency is also key. Behzadan says some AI technologies, such as sophisticated tools used for credit scoring or loan approval, operate like a black box, preventing users from seeing how their data is used or where it goes once in the system. If the system can share how it is using data and users can see how it makes decisions, more people might be willing to share their data.
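A minimal sketch of what that kind of transparency could look like is shown below: a toy scoring model that reports how much each input contributed to its decision, so a user can inspect the breakdown rather than facing a black box. The feature names and weights are invented for illustration and are not taken from any real credit-scoring system.

```python
# Illustrative sketch of decision transparency: a toy scoring model that
# reports how much each input contributed to its output. The feature names
# and weights are hypothetical, not a real credit-scoring system.
FEATURE_WEIGHTS = {"income": 0.4, "payment_history": 0.5, "debt_ratio": -0.3}

def score_with_explanation(applicant: dict) -> tuple[float, dict]:
    """Return a score and a per-feature breakdown the user can inspect."""
    contributions = {
        name: FEATURE_WEIGHTS[name] * applicant[name] for name in FEATURE_WEIGHTS
    }
    return sum(contributions.values()), contributions

score, breakdown = score_with_explanation(
    {"income": 0.7, "payment_history": 0.9, "debt_ratio": 0.4}
)
print(f"score={score:.2f}")
for feature, contribution in sorted(breakdown.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {contribution:+.2f}")
```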
In many settings, like medical diagnosis, the most trustworthy AI tools should complement human expertise and be transparent about their reasoning with expert clinicians, according to Behzadan. AI developers should not only strive to create reliable, ethical tools but also find ways to measure and improve their trustworthiness once launched for the intended users.
It takes context into account
Trustworthy AI tools should be sensitive to the context of the problems they attempt to solve. In the new study, Behzadan and co-researcher Dabiri created a hypothetical scenario in which a project team of engineers, urban planners, historic preservationists and government officials had been tasked with repairing and maintaining a historical building in downtown Denver. Such work can be complex and involve competing priorities, like cost-effectiveness, energy savings, historical integrity, and safety.
The researchers proposed a conceptual AI assistive tool called PreservAI, designed to balance competing interests, incorporate stakeholder input, analyse different outcomes and trade-offs, and collaborate helpfully with humans rather than replacing their expertise. Ideally, AI tools should include as much contextual information as possible so they can work reliably.
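The paper describes PreservAI conceptually rather than as an algorithm, but the sketch below shows one simple, hypothetical way such a tool might weigh competing priorities: a weighted score over stakeholder criteria. The criteria, weights and renovation options are illustrative assumptions, not the researchers' design.

```python
# Minimal sketch of multi-criteria trade-off scoring, in the spirit of the
# conceptual PreservAI tool. The criteria, weights and options below are
# hypothetical; the paper does not specify an algorithm, and a weighted sum
# is only one simple way to combine stakeholder priorities.
STAKEHOLDER_WEIGHTS = {  # averaged priorities elicited from the project team
    "cost_effectiveness": 0.25,
    "energy_savings": 0.20,
    "historical_integrity": 0.35,
    "safety": 0.20,
}

RENOVATION_OPTIONS = {  # scores in [0, 1] for each criterion
    "full_facade_restoration": {"cost_effectiveness": 0.3, "energy_savings": 0.4,
                                "historical_integrity": 0.9, "safety": 0.8},
    "modern_retrofit":         {"cost_effectiveness": 0.8, "energy_savings": 0.9,
                                "historical_integrity": 0.4, "safety": 0.9},
}

def rank_options(options, weights):
    """Rank options by their weighted score across all stakeholder criteria."""
    scored = {
        name: sum(weights[c] * values[c] for c in weights)
        for name, values in options.items()
    }
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

for name, score in rank_options(RENOVATION_OPTIONS, STAKEHOLDER_WEIGHTS):
    print(f"{name}: {score:.2f}")
```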
It’s easy to use and asks users how it’s doing
The AI tool should not only do its job efficiently, but also provide a good user experience, keeping errors to a minimum, engaging users and building in ways to address potential frustrations, Behzadan said.
Another key ingredient for building trust involves actually allowing people to use AI systems and challenge AI outcomes. Behzadan said that even with the most trustworthy system, if people cannot interact with it, they will not trust it. In addition, if only a small number of people have actually tested it, an entire society cannot be expected to trust and use it.
Stakeholders should be able to provide feedback on how well the tool is working. That feedback can help improve the tool and make it more trustworthy for future users.
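As a hypothetical sketch of that feedback loop, the snippet below collects stakeholder ratings on different aspects of a tool and flags the aspects whose average falls below a trust threshold. The aspects, rating scale and threshold are assumptions made for illustration.

```python
# Illustrative sketch of a stakeholder feedback loop: collect ratings on
# aspects of the tool and flag the ones that fall below a trust threshold.
# The aspects, rating scale and threshold are hypothetical.
from collections import defaultdict
from statistics import mean

feedback = defaultdict(list)  # aspect -> list of 1-5 ratings

def submit_feedback(aspect: str, rating: int) -> None:
    feedback[aspect].append(rating)

def aspects_needing_improvement(threshold: float = 3.5) -> list[str]:
    """Return aspects whose average rating falls below the threshold."""
    return [a for a, ratings in feedback.items() if mean(ratings) < threshold]

submit_feedback("transparency", 4)
submit_feedback("transparency", 5)
submit_feedback("ease_of_use", 2)
submit_feedback("ease_of_use", 3)
print(aspects_needing_improvement())  # e.g. ['ease_of_use']
```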
When trust is lost, it adapts to rebuild it
Trust in new technology can change over time. One person might generally trust new technology and be excited to ride in a self-driving taxi, but if they read news stories about the taxis getting in crashes, they might start to lose trust.
That trust can later be rebuilt, said Behzadan, although users can remain sceptical about the tool. He cited the example of Microsoft’s Tay chatbot, which failed within hours of its 2016 launch. It picked up harmful language from social media and began to post offensive tweets, causing public outrage. Microsoft released a new chatbot, Zo, later that same year with stronger content filtering and other guardrails. Although some users criticised Zo as a censored chatbot, its improved design helped more people trust it.
There is no way to eliminate the risk that comes with trusting AI, Behzadan said. AI systems rely on people being willing to share data, as having less data makes systems less reliable. However, there is always a risk of data being misused or AI not functioning as intended.
When people are willing to use AI systems and share data with them, the systems become better at their jobs and more trustworthy. Behzadan said that when people trust AI systems enough to share their data and engage with them meaningfully, those systems can improve significantly, becoming more accurate, fair and useful. He added that trust is not just a benefit to the technology but a pathway for people to gain more personalised and effective support from AI in return.