Published by Forbes.com on April 10, 2023
With all the hype around AI, especially generative AI chatbots, the topic is getting a lot more attention from businesses and the general public. Machine learning (ML), particularly deep learning and generative ML, is a big driver of these developments.
A sober analysis of AI in business contexts, however, reveals a story that may at first seem at odds with the headlines. According to McKinsey, only 15% of businesses’ ML projects ever succeed. Another study, by Gartner, found that only 53% of AI projects ever make it from prototype to production. If that’s true, why are businesses investing billions of dollars in AI?
The disconnect is due not to hype, although there is certainly some of that, but to approach. The most common strategy for organizations building ML solutions is to look at data sets and demonstrate a way to model them (typically predictively). Problems arise because the proposed solution is developed in a silo, overlooking the operational realities of the company.
Organizations are more likely to succeed in their AI efforts if they walk backwards from the solution to the problem: Figure out what it would take to deploy an effective solution in its operational context, identify the real problems, then break proposed solutions down into smaller steps.
Choosing The Right Problem
There’s a difference between “nice to solve” and “need to solve.” What an AI/data science department, manager or team member is interested in solving may be insignificant compared with business challenges around it. Poor communication as to how these objectives align with organizational goals can lead decision-makers to select the wrong problems for ML to solve.
A better strategy would be to make sure the target solution is of the highest business priority. Proper triaging of problems can help determine the allocation of resources to maintain systems, implement needed changes and ensure continued adoption in production.
Too many businesses overlook secondary or tertiary costs of AI when estimating returns on investment (ROI). But ML solutions typically involve more than just development costs. Companies should also consider maintenance and infrastructure, training requirements, compliance issues and engineering costs.
For a truly credible estimation, decision-makers need to involve finance, legal and customer service teams. It’s all too easy for data scientists to overlook financial realities when assessing potential returns, and an evolving regulatory environment further complicates things.
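To make the point concrete, here is a toy calculation of how a development-only ROI estimate can differ from a fuller one once secondary costs are counted. Every figure below is invented purely for illustration:

```python
# All figures are hypothetical, chosen only to illustrate the point.
expected_annual_benefit = 500_000  # projected savings from the ML solution

development_cost = 300_000
naive_roi = (expected_annual_benefit - development_cost) / development_cost

# Secondary and tertiary costs that estimates often omit:
full_cost = development_cost + sum([
    120_000,  # maintenance and infrastructure
    40_000,   # staff training on the new system
    60_000,   # compliance and legal review
    80_000,   # engineering work to integrate with production systems
])
credible_roi = (expected_annual_benefit - full_cost) / full_cost

print(f"naive ROI: {naive_roi:.0%}, credible ROI: {credible_roi:.0%}")
# → naive ROI: 67%, credible ROI: -17%
```

In this invented scenario, a project that looks strongly profitable on development cost alone turns out to lose money once the hidden costs are tallied, which is why finance, legal and customer service teams belong in the estimation.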
AI-driven predictions are not always easy for decision-makers to digest. For example, if an ML model designed to predict maintenance schedules recommends shutting down a system to effect a fix, managers may be hesitant to abide by the recommendation if it means suspending service or causing downtime. Overcoming that reluctance requires trust in the AI system.
The solution: Gain trust through transparency and explainability. Explanations need to make sense to managers who may or may not have a data science background, and they need to provide convincing evidence to support their conclusions. For example, explanations can include examples of events that preceded a system failure, which would serve to illustrate how an AI system may be aware of an imminent problem before its human operators are.
Building the model alongside stakeholders further helps to build trust. We call this “Experiential AI,” or AI with a human in the loop. Not only does this approach result in more robust and informed AI solutions, it gives experts a stake in seeing the solution work, since its development relied on their knowledge and interventions. In turn, predictions may receive additional input from experts, improving them further.
Follow The Data Chain
Even if businesses manage to collect good data, they still have to contend with a gap between proof-of-concept and in-production settings: that is, between the data that models are trained on and the data that models can actually access in production. Too many AI systems are trained in environments that assume access to more complete data. Real-world data, by contrast, can be incomplete and subject to dynamic, shifting landscapes that diverge significantly from historical training conditions, compromising the value of predictions.
In developing AI systems, companies need to think about their data chain carefully. That means identifying the required inputs, determining their availability and quality, and working out which data transformations are needed to extract the correct attributes.
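A data-chain audit of this kind can be sketched in a few lines of code. The input names and thresholds below (a maximum missing-value rate, a crude one-standard-deviation drift check between training and production samples) are hypothetical examples rather than a prescribed standard:

```python
from statistics import mean, stdev

# Hypothetical input names and thresholds, for illustration only.
REQUIRED_INPUTS = ["sensor_temp", "runtime_hours", "last_service_days"]
MAX_MISSING_RATE = 0.05

def audit_data_chain(train: dict[str, list], prod: dict[str, list]) -> list[str]:
    """Flag gaps between training data and what production actually provides.

    Each dataset maps input names to lists of values; None marks a missing value.
    """
    issues = []
    for col in REQUIRED_INPUTS:
        # 1. Availability: is the input present in the production feed at all?
        if col not in prod:
            issues.append(f"{col}: missing from production feed")
            continue
        values = prod[col]
        # 2. Quality: how often is the value absent in production?
        missing_rate = sum(v is None for v in values) / len(values)
        if missing_rate > MAX_MISSING_RATE:
            issues.append(f"{col}: {missing_rate:.0%} of values missing in production")
        # 3. Drift: has the distribution shifted from training conditions?
        present = [v for v in values if v is not None]
        if col in train and len(present) >= 2:
            shift = abs(mean(present) - mean(train[col]))
            if shift > stdev(train[col]):  # crude one-standard-deviation check
                issues.append(f"{col}: mean shifted beyond one std dev of training data")
    return issues
```

Running a check like this before each retraining or scoring cycle surfaces the availability, quality and drift problems described above before they silently degrade predictions.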
AI and data science also compound the talent acquisition and retention problem. According to a recent IDC study, AI spending in IT is expected to outpace that of business services by 2024. But traditional programmers, analysts and technicians are rarely trained in data science, suggesting companies may need to overhaul their recruitment practices to keep pace. That involves:
• Seeking out the online communities that data scientists frequent.
• Cultivating a forward-thinking culture of innovation and flexibility.
• Establishing partnerships with universities and tech institutions.
On the training end, companies should take a closer look at their upskilling efforts, as there is often a disconnect between what employers are investing in and what they’re gaining. A recent Genpact study found that more than half of employers claimed to offer their workers training and upskilling in AI, yet only 35% of employees said any such training existed. Meanwhile, as many as 80% of workers said they were interested in gaining more AI skills.
Walking Backwards From A Complete Solution—Not Tackling the AI Problem First
AI comes with enormous promise in the form of both ROI and innovation, but decision-makers need to be careful about their approach. Reports show a marked increase in AI-driven ROI in recent years, suggesting the problem of failed deployment lies not in the technology itself but in how it is used.
So how do we effectively address the issues raised in this article? Rather than focusing on proving that their AI technology works, leaders should examine in detail how it would fit into business contexts and walk backwards from there. That approach will avoid many of the landmines on the road to production deployment.