Published by https://www.forbes.com/ on November 19, 2021
The U.S. and almost all countries today identify AI as a critical strategic area in the future of computing. Companies are more invested than ever in discovering how AI can provide advantages in their competitive markets. According to a report released earlier this year by Appen Limited, AI budgets increased 55% year over year, ranging from $500,000 to $5 million, with more attention placed on internal processes, a better understanding of data and efficiency gains.
Fueling this interest are super-accelerated digital transformations driven by a “digital or die” imperative to mitigate the limitations imposed by the Covid-19 pandemic. With digitization, the volume, variety and velocity of data have been increasing exponentially for years, and capturing, managing and exploiting that data proved challenging. AI gained prominence with the promise of not only coping with the increase in data but exploiting it.
Despite the common association of AI with a look toward the future, it is an old field with roots that date back decades. In 1956, John McCarthy coined the phrase “artificial intelligence” during a summer workshop at Dartmouth College, where he pitched the idea that machines could simulate human intelligence if described accurately enough.
The groundwork dates back much earlier, to mathematicians such as William Rowan Hamilton, Kurt Gödel, Alfred Tarski and Alan Turing, whose theories around computability, completeness, recursive sets and logic hierarchies helped lay the foundations of computer science. Each tried to capture and replicate the capacity of the human brain in an abstract form. Yet despite more than 70 years of trying to make computers “intelligent,” natural language, machine vision and common-sense reasoning remain open problems to this day. In fact, generally speaking, AI has been a disappointment on almost all fronts except one.
Machine learning is one of the survival tools for the future of AI.
In the early 1970s and again in the late 1980s, AI experienced precipitous declines in interest and funding. These AI winters stemmed from excessive hype about what AI could achieve, coupled with overpromises on delivering solutions to Grand Challenge problems and capped by a failure to address those problems even in their simplest forms.
Machine learning is the one subfield of AI that survived both AI winters and will likely survive the next — not because we have developed great algorithms for learning from data but because we have much more data at our disposal. The few weak algorithms that we have can compensate for their weaknesses by leveraging the overabundance of datasets now available and growing exponentially in the digital world.
Machine learning within AI acts as a shortcut to avoid the problem of understanding how human intelligence works by leveraging multitudes of examples of desired outputs given a set of inputs. It’s essentially a way of doing more flexible nonlinear regression to produce correct outputs like decisions, classifications or conclusions under similar future states or inputs. This process can lead to powerful results and often outperforms humans in speed, consistency and accuracy.
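A minimal sketch can make this concrete. The toy classifier below is not any particular production system; it simply illustrates the idea of producing outputs for new inputs by leaning on many labeled examples rather than on an explicit theory of intelligence. All data points and labels here are invented for illustration.

```python
# A 1-nearest-neighbor classifier: predict by copying the label of the
# most similar training example. Learning here is nothing more than
# storing examples of desired outputs for given inputs.
from math import dist

def predict(examples, labels, x):
    """Return the label of the stored example closest to input x."""
    i = min(range(len(examples)), key=lambda j: dist(examples[j], x))
    return labels[i]

# Toy "desired outputs given a set of inputs"
examples = [(1.0, 1.0), (2.0, 1.5), (-1.0, -1.0), (-2.0, -0.5)]
labels = ["positive", "positive", "negative", "negative"]

print(predict(examples, labels, (1.5, 0.8)))    # near the positive cluster
print(predict(examples, labels, (-1.2, -0.9)))  # near the negative cluster
```

The more examples such a method can draw on, the better it compensates for having no real model of the problem, which is exactly the point made above about data abundance propping up weak algorithms.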
However, that is not quite enough to produce general or autonomous AI. The best way to make AI robust and resilient is to embrace the notion that human intervention is needed to compensate for the limitations of algorithms and data. When an algorithm goes “rogue” because the data or the world around it change, human intervention (done efficiently and correctly) can save us from detrimental outcomes. Human intelligence has a unique capacity to reason, understand and adapt to uncertainty and change. Machines are better-suited to process large amounts of data, perform repetitive tasks and tirelessly search for combinations and correlations. A human-centric approach of fusing the best of human and machine intelligence can create robust, resilient and intelligently adaptive solutions.
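One common way to operationalize this fusion is a confidence-threshold guardrail: let the machine act when it is confident, and escalate to a person when it is not. The sketch below is a generic illustration of that pattern; the function names and the threshold value are assumptions, not any specific deployed system.

```python
# Human-in-the-loop routing: automate high-confidence predictions,
# escalate low-confidence ones for human review.
def route(prediction, confidence, threshold=0.8):
    """Decide who acts on a model prediction."""
    if confidence >= threshold:
        return ("machine", prediction)
    return ("human review", prediction)

print(route("approve", 0.95))  # confident: the machine proceeds
print(route("approve", 0.55))  # uncertain: a person intervenes
```

When the world shifts and confidences drop, this design automatically pushes more decisions toward people, which is one efficient form of the human intervention described above.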
A great example of this approach is machine learning relevance (MLR) in search engines like Google. MLR is used to leverage data and human feedback to adjust and optimize the search engine’s relevance ranking algorithm. With this information, a search engine incrementally improves its accuracy until it almost magically understands what we are looking for when we type those few words in a search box.
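The feedback loop behind relevance ranking can be sketched in a few lines. This is a deliberately simplified toy, not Google's actual MLR pipeline: the document names, base scores and boost factor are all invented, and real systems use far richer signals than raw click counts.

```python
# Toy relevance feedback: blend a static relevance score with
# accumulated user clicks so heavily chosen results rise in the ranking.
from collections import defaultdict

base_scores = {"doc_a": 1.0, "doc_b": 0.9, "doc_c": 0.8}
clicks = defaultdict(int)

def record_click(doc):
    clicks[doc] += 1

def rank(boost=0.05):
    """Order documents by base score plus a small per-click boost."""
    return sorted(base_scores,
                  key=lambda d: base_scores[d] + boost * clicks[d],
                  reverse=True)

print(rank())            # initial order comes from base scores alone
for _ in range(3):
    record_click("doc_c")  # users keep choosing doc_c
print(rank())            # doc_c overtakes doc_b as feedback accumulates
```

Each round of feedback nudges the ranking, which is the incremental-improvement dynamic the paragraph above describes.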
Solving real problems in real settings is key to advancing the science and practice of AI.
In the last few decades, almost all advances in AI have come from figuring out how to make machine learning work in real applications. For the last 15 years, China has pursued that goal with a maniacal focus, which may explain how it is closing the gap on a 70-year AI lead held by the U.S. and Europe.
Many hard lessons learned over my years of experience convinced me that figuring out how to make AI work on real problems is the key to advancing the science of AI. The philosophy we adopted when we founded Yahoo Research Labs in 2005 has intensified today in our approach at the Institute for Experiential AI at Northeastern University. It inspired my approach to creating an AI Solutions Factory that offers valuable residencies for learners and acts as a research stimulant in relevant areas of AI science. Research that asks why a particular AI technology works or fails in a specific application, and what is needed to make it work systematically, will lead us to solve crucial and fundamental problems.
The shop floor of the AI Solutions Factory is rife with Nobel Prize-class problems waiting to be solved. By making AI solutions practical and relevant to most companies, we can advance the science and the practice of AI and avoid another AI winter that I believe is coming.