Toward Trustworthy AI: Bridging The Trust Gap Between Humans And Intelligent Machines

Published by Forbes.com on March 29, 2022

Usama Fayyad

Forbes Councils Member

Forbes Technology Council

As artificial intelligence (AI) technology becomes more mainstream across a wide range of applications, settings and domains, we must address the issue of establishing trust in AI systems. Practitioners, users and organizations need to trust how a system reaches decisions, how it operates and the fact that it won’t exhibit erratic or dangerous behavior.

The question of trust between humans and machines has a long history grounded in product design and engineering. Elevators and ATMs make life easier, but it’s not enough to be told a thing works. There needs to be an acceptance or conviction that a machine will work as designed and intended. The gas and brake pedals in a vehicle should make the car go and stop, with no room for uncertainty. The tasks should be repeatable, predictable and reliable. Safety measures exist to ensure the systems work as expected. Stress tests, crash tests, ratings and reviews all work to ease our acceptance of and build trust in the machines we use.

When it comes to AI, such simple, deterministic assurances about execution become questionable and hard to provide. The word “intelligence” in the name itself implies a degree of “autonomous” decision making, adding another layer of difficulty to establishing trust.

Trust: Fundamentally Human

The process by which humans learn to trust other humans can be complicated in any context. To establish trust between humans and AI machines, we must address even more difficult challenges.

Human trust typically comes from a foundation of understanding each other’s motivations, reasoning methods and the likelihood of how each party may react to a variety of situations. This level of understanding comes from a thesis of similarity. Since both parties are human, they presume they share enough commonalities to produce a general inclination to “do the right thing,” despite lacking any quantifiable metrics.

With AI, the foundation of similarity is missing. Trust is earned based on track record, transparency and understanding of how the machine will react to new situations. There are two primary approaches for overcoming human-to-AI trust challenges: trustworthiness and responsibility.

There has been much debate about the topics of trustworthy versus responsible AI, but a standard definition has failed to emerge. In fact, some experts argue that no one should trust AI. We don’t expect the same trust between humans to apply to machines. Instead, when AI fails to perform its programmed task, the system developers can and should be held to the same standards as any manufacturer of a faulty product.

Others have tackled the topic of trustworthy AI by considering it as related to, but distinct from, “responsible AI.” A recent survey published by the Association for Computing Machinery (ACM) presents a six-dimensional framework currently gaining acceptance in several government agencies that spells out some differences between trustworthy and responsible AI. In Communications of the ACM, Jeannette Wing writes about a similar series of properties that deem AI trustworthy. IBM has narrowed the definition scope to three core ethical principles, but this has not gained wide acceptance.

I prefer to approach the topic from two dimensions: technical versus responsible.

Trustworthy In The Technical Sense

In this first dimension, trust comes from knowing an AI system has the following distinct technical properties.

1. Accuracy: A system performs its designated and predictive tasks with a high degree of accuracy and a low error rate.

2. Robustness: Statistically speaking, a system deals well with outliers and environmental changes. It recognizes changes it cannot handle and shuts down gracefully rather than crashing, spewing nonsense or going completely awry.

3. Resiliency: As environmental changes occur, a system adapts and learns, recovering its functionality and finding a path to thrive under the new conditions.

4. Security: A system cannot be hacked or hijacked and resists attacks.

5. Explainability: When technical requirements call for human explanation, system actions and decisions are justified in understandable ways.

Trustworthy From A Sense Of Social And Ethical Responsibility

Often referred to as “responsible AI,” the second dimension deals with ethical issues in AI, as well as competence, expertise and relatable human trust traits like reputation and good governance.

1. Privacy: A system does not violate user or subject privacy.

2. Fairness: The data and algorithms of a system guard against bias and prevent unfair treatment of any segment of the population.

3. Interpretability: Humans can understand how a system functions, reaches a decision and why it exhibits particular reactions.

4. Transparency: People have visibility into a system’s process and its policies.

5. Accountability: The system complies with laws and policies. Its functions are traceable, and its actions are accountable.

A New Type Of Science Is Required

Systems that address both the technical and the responsible attributes effectively earn trust based on ethical behavior, professional competence and reputation. This set of requirements also addresses governance concerns and leads to social acceptability. Whatever your viewpoint, advancing the capabilities of AI and building trustworthy systems for humans, organizations and societies require that we develop a new kind of science.

We need more precise definitions, standards, criteria, tools and scoring metrics that allow us to build system reputation and set realistic expectations of AI. Pragmatism dictates that we also need a better understanding of the current limitations of technology and systems.

A Complex But Necessary Problem To Solve

Connecting the dots that bridge human-to-AI trust is not an easy problem to solve, in part because it lacks a clear definition. The topic of AI breeds mystery and ambiguity. Demystifying the technology and the behaviors exhibited by algorithms, good or bad, establishes real progress and creates valuable outcomes on all fronts: theoretical, academic, commercial and practical.

Making AI trustworthy and keeping its development healthy should happen in highly applied settings that demonstrate real, credible solutions to actual problems and an ability to cope robustly with live data and environments. Establishing definitions and producing effective training mechanisms for talent development in this field are best done by working in an environment where applications uncover real issues and address company needs.
