Responsible AI: A Mandate In Finance And Insurance

Published by Forbes.com on July 6, 2023

AI is rapidly becoming an essential technology for companies in every industry, and the finance and insurance industries are no different. AI is already being used to help such businesses stand out from competitors. Fintech companies are using it to automate back-office processing and improve customer service. Traditional financial services companies are using it in trading and to optimize back- and front-office operations. Insurance companies use AI for everything from underwriting and pricing to claims processing and measuring the detailed behaviors of individuals.

As all of these trends accelerate, the companies that can effectively adopt AI will likely leave behind the companies that can’t.

But for all of its potential, there are still huge limitations to AI technologies today, and the massive misconceptions about generative AI only make matters worse. For instance, the generative AI currently dominating headlines is powered by a large language model (LLM) called GPT-4, on which ChatGPT is built. Such models suffer from problems including biases in their training data, an inability to explain where they get information and a lack of commonsense reasoning. ChatGPT specifically has been shown to present wrong information as fact and give unpredictable responses when facing new situations. Although such errors may seem minor in a school paper or advertising copy, the bar is much higher when it comes to financial serviceshealth and other regulated arenas.

A good practice is to use human relevance feedback to train models with more curated and well-labeled data designed to provide “adversarial” feedback as the LLM is built and used. But this takes time and iteration and is expensive, underscoring the need for companies to have a human-centric AI strategy—what we refer to as “experiential AI” at the Institute for Experiential AI (EAI) at Northeastern University. The “AI haves” using AI—such as Google, Microsoft, Amazon, Meta and OpenAI—know that human intervention is the only way to keep the algorithms on track.

A human-centric AI strategy isn’t only necessary for effective AI but also for responsible AI. Unfortunately, even as many companies agree that deploying AI responsibly is imperative, tightening budgets are causing mass layoffs (paywall) of in-house ethics teams, especially at giant tech companies.

Why Finance And Insurance Companies Need A Responsible AI Strategy

The stakes are especially high for companies working in fintech, insurance and financial services. These tightly regulated companies make decisions that can have major consequences for individuals, families and society as a whole.

Inaccurate or biased algorithms can be problematic when they’re populating a social media feed, but things get much more serious when they’re deciding whether or not to give out a loan or deny an insurance claim. Companies deploying models with unintended biases or without proper safeguards are thus exposing themselves to many potential problems.

Still, AI is being used to supercharge operations for companies across the finance and insurance industries. Companies are using it to create faster, more relevant customer experiences, enhance market forecasting, identify macro trends and open up a host of new operational efficiencies. Earlier this year, Jamie Dimon, the CEO of JPMorgan Chase, called AI a “groundbreaking” technology and said the company has identified 300 use cases for it.

But as everyone rushes to implement AI, the companies that deploy it responsibly can stand out from competitors while avoiding legal problems and other mistakes that can erode customer trust and damage reputations.

A Responsible AI Strategy That Works

A comprehensive approach to responsible AI involves identifying areas in which problems could arise before deploying AI models. This shouldn’t slow a company’s adoption of AI. Companies should also set up guardrails and conduct periodic assessments of their data, algorithms and the operations they influence. The goal for companies is to minimize risks while showing regulators and the public they’re being thoughtful about AI’s possible consequences and are, therefore, worthy custodians of the trust placed in them to manage our assets.

The question then becomes how do you approach this area pragmatically and practically while the business continues to execute and evolve? This requires answering questions like: How do you measure and assess risk? Where are algorithmic audits needed? How do you mitigate against identified risks?

The reality is that there are no mature processes in practice or even in academia. So, at the Institute for EAI, we developed our own approach for the purpose of understanding what’s needed for training students while gaining an understanding of what actually works. Some of our experiences revealed the following steps to creating a responsible AI strategy:

1. Conduct a technical audit to assess risks such as model bias, security flaws or accuracy issues and to fully understand how the data will be used and protected.

2. Craft an ethics strategy for new AI projects.

3. Establish a model for AI governance.

4. Address the AI talent gap by providing training to employees and injecting expert talent into the team.

5. Close the loop by determining which risks were addressed and whether new risks have appeared.

6. Build an ongoing process and a governance mechanism to make sure systematic organizational attention is given to this area (much like any other business process).

It may be tempting to take an ad hoc approach to your AI strategy development. But I’ve found that it’s best done by bringing together a small advisory group of ethics experts who understand the business.

A Double-Edged Sword

The transformative nature of AI algorithms will likely lead to an algorithmic arms race in financial services. But firms thoughtlessly rushing to deploy AI face a minefield of new risks: unintended biases, improper use of data and other issues that can lead companies to violate regulations. The layoffs of ethics personnel at big tech companies are disheartening, but they should serve as motivation to double down on your own responsible AI practices and to delve deeper into what it takes to improve trust in AI. A comprehensive strategy can allow companies to reap the full benefits of this world-changing technology while avoiding its significant risks.

View online

Leave a Reply