Published by https://venturebeat.com/ on July 12, 2023
There is no shortage of hype around generative AI, but there is also reality.
In a fireside chat session at today’s VentureBeat Transform 2023, Jeff Wong, global CIO at Ernst & Young, was joined by Usama Fayyad, executive director of the Institute for Experiential AI at Northeastern University, for an insightful conversation about the reality of generative AI today.
“I’ve studied technology for a long time and there’s always a difference between what I call the hype curve and the reality curve,” said Wong. “There is the hype and excitement of what’s possible with all these new things that come out, and then the reality of what’s really happening on the ground and what’s really possible with these technologies.”
While there is plenty of real opportunity for generative AI, Fayyad emphasized that the hype often outruns what the technology actually delivers. He argued that while large language models (LLMs) and generative AI have made impressive advances, they still rely heavily on human oversight and intervention.
“They are stochastic parrots,” said Fayyad. “They don’t understand what they’re saying, they repeat stuff they heard before.”
Fayyad added that ‘parrot’ refers to the repetition of learned items, while ‘stochastic’ provides the randomization. It is that randomization that, in his view, gets models into trouble and leads to potential hallucination.
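Fayyad’s point can be made concrete with a toy sketch of how LLMs pick their next word. The scoring and sampling below are illustrative only (the token scores and `sample_next_token` helper are invented for this example, not taken from any real model): the scores reflect patterns the model has “heard before” (the parrot), and the random draw at the end is the stochastic part that can occasionally surface an implausible continuation.

```python
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Convert token scores to probabilities (softmax), then draw one at random.

    The random draw is the 'stochastic' step: the same prompt can yield
    different continuations on different runs.
    """
    # Scale scores by temperature: lower = safer, higher = more random.
    scaled = {tok: score / temperature for tok, score in logits.items()}
    # Numerically stable softmax over the scaled scores.
    max_s = max(scaled.values())
    exps = {tok: math.exp(s - max_s) for tok, s in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

# Hypothetical scores for tokens following "The capital of France is":
logits = {"Paris": 5.0, "Lyon": 2.0, "Mars": 0.5}
print(sample_next_token(logits, temperature=0.7))  # usually "Paris"
print(sample_next_token(logits, temperature=5.0))  # flatter odds, riskier picks
```

At low temperature the model almost always parrots the most common pattern; at high temperature the randomization dominates, which is the regime where, in Fayyad’s framing, hallucination becomes more likely.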
Why the generative AI hype cycle is grounded in reality
Hype cycles in technology are nothing new, although Fayyad sees generative AI as having a basis in reality that will drive future productivity and economic growth.
In the past, AI has been used to solve different problems, such as helping a computer to beat a human at chess. Generative AI has a much stronger practical set of use cases, and it’s easier to use too.
“The type of skills that you get with generative models are very well aligned with what we do in the knowledge economy,” he said. “Most of what we do in the knowledge economy is repetitive, laborious and robotic and this stands a chance to kind of provide automation, cost saving and acceleration.”
Where government and regulations should fit in
In Fayyad’s view, the role of governance in general is to outline and make clear who is liable when a problem happens and what the implications of that liability are.
Once the source of liability is determined, there is a person or a legal entity, not just a model, that is to blame. The potential liability is what will motivate organizations to help ensure accuracy and fairness.
Ultimately, though, Fayyad sees the current generation of generative AI as complementary to humans, who should remain the decision makers. So, for example, if a generative AI tool produces a legal brief, the lawyer still needs to read it and be responsible for it. The same is true for code, where a developer needs to be responsible and be able to debug potential errors.
“People ask me the question, ‘Is AI going to take my job?’” Fayyad said. “My answer is no, AI will not take your job away, but a human using AI will replace you.”