The Morning Download: Weekly AI Insights

Published by CIO Journal on April 5, 2022

By Angus Loten

Good Morning, CIOs. Killer robots have long been confined to the world of science fiction. But according to new research from the University of Pennsylvania’s Leonard Davis Institute, killer robots are real—and possibly more deadly than “The Terminator.”

‘Death by despair.’ In a new study, ominously titled “Death by Robots,” the school’s researchers found that steadily increasing factory automation from 1993 to 2007 led to “substantive increases in all-cause mortality for both men and women aged 45–54.” The study, published in the journal Demography, seeks to demonstrate a causal relationship between factory automation—or “robotization”—and job losses leading to drug overdose deaths, suicide, homicide and cardiovascular mortality.

Working robots, dying workers. Over the period covered by the study, every robot added per 1,000 factory workers resulted in eight deaths per 100,000 males aged 45 to 54, and four deaths per 100,000 females in that same age group. During the last 20 years, as many as 750,000 jobs have been eliminated by robots, according to estimates cited in the study.

The Covid effect. “The adoption of industrial robots was expected to double by 2030, a projection that was made before the Covid-19 pandemic,” Atheendar Venkataramani, an assistant professor of health policy and medicine at the University of Pennsylvania, told me. The potential acceleration of adoption as a direct result of the pandemic is concerning, he said, “given that it is occurring during a time in which population health has worsened and the social safety net remains patchy.”

AI R&D
Humans in the loop. Northeastern University this week is set to unveil its new Institute for Experiential AI, a research center focused on promoting a human-centric approach to developing AI, the school says. CIO Journal recently spoke with Dr. Usama Fayyad, the center’s inaugural executive director, about the need to keep humans at the center of AI development. Edited excerpts below.

WSJ: How do you define Experiential AI?

Dr. Fayyad: Experiential AI is human-centered AI with a human in the loop providing intervention and feedback, combining what machine intelligence does well—data crunching, automation, repetitive tasks—with uniquely human capabilities such as commonsense reasoning, intuitive decision-making and understanding.
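
In code, the human-in-the-loop pattern Dr. Fayyad describes is often implemented as a confidence-gated handoff: the model decides routine, high-confidence cases automatically and defers ambiguous ones to a person, whose decision is logged as feedback. The sketch below is a minimal illustration under assumed names; the 0.90 threshold, the toy_model scoring function and the ask_human callback are hypothetical, not the institute’s design.

```python
# Human-in-the-loop sketch: auto-decide when confident, defer to a human
# otherwise, and record the human's label as feedback for retraining.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

@dataclass
class HITLClassifier:
    predict_proba: Callable[[str], Dict[str, float]]   # model scoring function (assumed interface)
    threshold: float = 0.90                            # below this confidence, defer to a human
    feedback: List[Tuple[str, str]] = field(default_factory=list)  # (input, human label) pairs

    def classify(self, x: str, ask_human: Callable[[str, Dict[str, float]], str]):
        probs = self.predict_proba(x)
        label, conf = max(probs.items(), key=lambda kv: kv[1])
        if conf >= self.threshold:
            return label, "auto"                # machine strength: fast, repetitive decisions
        human_label = ask_human(x, probs)       # human strength: judgment on ambiguous cases
        self.feedback.append((x, human_label))  # the feedback loop that drives later retraining
        return human_label, "human"

# Toy usage: a keyword "model" that is confident only on obvious inputs.
def toy_model(text: str) -> Dict[str, float]:
    return {"urgent": 0.95, "routine": 0.05} if "outage" in text else {"urgent": 0.55, "routine": 0.45}

clf = HITLClassifier(predict_proba=toy_model)
print(clf.classify("database outage in region 2", ask_human=lambda x, p: "urgent"))  # ('urgent', 'auto')
print(clf.classify("update my mailing address", ask_human=lambda x, p: "routine"))   # ('routine', 'human')
```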

WSJ: What is the broad goal of launching the Institute? And why launch it now?

Dr. Fayyad: The Institute for Experiential AI at Northeastern University aims to be the leading research institute in experiential AI, with an approach rooted in making AI solutions work in real settings. The truth is that AI algorithms have not seen major advances over the last 70 years; what has changed is that we have a lot more data available for learning algorithms to infer target outcomes in different situations. Thus, most of the action has been in making machine learning work with all this data. We aim to change that dynamic by using applied work in our AI Solutions Hub to drive a new research agenda for academia, rooted in where the state of the art hits its limitations in practice.

WSJ: How will the institute benefit private-sector companies seeking to apply AI?
Dr. Fayyad: While applied, working AI has been the purview of only the tech giants and a select few well-funded startups, we aim to bring AI solutions to the overwhelming majority of companies that have not had the chance to discover that AI can be transformative to their business operations and ability to compete. Many companies have not yet appreciated the need for granular, high-quality training data to drive AI through machine learning.

WSJ: What is Responsible AI? How do you achieve that?
Dr. Fayyad: It is about designing AI solutions that avoid bias and unfairness in decision-making. It is also about systematically assessing risk in already-operational AI systems to know what needs attention, what warrants algorithmic audits, and how to deal with the thorny ethical issues in AI practice and systems.
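
One concrete piece of the systematic risk assessment Dr. Fayyad mentions can be sketched as a check an algorithmic audit might run: comparing a model’s approval rates across groups, a simple demographic-parity test. Everything below is illustrative; the sample data, group labels and 0.8 disparity threshold (the common “four-fifths” rule of thumb) are assumptions, not a complete Responsible AI program.

```python
# Demographic-parity check: compare per-group approval rates and flag
# the model for audit when the lowest rate falls too far below the highest.
from collections import defaultdict
from typing import Dict, Iterable, Tuple

def approval_rates(decisions: Iterable[Tuple[str, bool]]) -> Dict[str, float]:
    """Map each group to its share of approved decisions."""
    approved, total = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / total[g] for g in total}

def parity_ratio(decisions: Iterable[Tuple[str, bool]]) -> Tuple[float, Dict[str, float]]:
    """Lowest group approval rate divided by highest (1.0 means perfect parity)."""
    rates = approval_rates(decisions)
    return min(rates.values()) / max(rates.values()), rates

# Toy audit log of (group, approved) decisions.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
ratio, rates = parity_ratio(decisions)
print(rates)                          # per-group approval rates (A: ~0.67, B: ~0.33)
print(f"parity ratio: {ratio:.2f}")   # 0.50
if ratio < 0.8:                       # assumed threshold (four-fifths rule)
    print("disparity flagged: route this system for an algorithmic audit")
```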
