Published by Forbes.com on June 6, 2023
Forbes Councils Member
ChatGPT has taken the internet by storm with its ability to summarize information and present it in a conversational manner. Most of its capabilities are not new. ChatGPT is the latest in a line of generative AI systems known as large language models (LLMs). These models are very large (and deep) neural networks that are notoriously expensive to train. A mechanism called “attention” is used to track relationships between words across pages and documents to improve performance. Such models have been called “stochastic parrots” because they have no understanding of what they say. They can’t tell you where their answers come from, and they will propagate misinformation if it has enough frequency and buzz.
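For readers who want to see what “attention” means mechanically, here is a minimal sketch of the standard scaled dot-product attention computation with toy numbers. It is generic textbook machinery, not a description of ChatGPT’s specific internals:

```python
import numpy as np

def attention(Q, K, V):
    # Score each word's query against every word's key, turn the scores
    # into weights with a softmax, then mix the value vectors accordingly.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))  # 4 "words", each an 8-dim vector (toy values)
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(attention(Q, K, V).shape)  # (4, 8): each word is now a blend of its context
```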
All this emphasizes the need to have humans in the loop. That’s the human-centric, “experiential” approach we are pioneering at the Institute for Experiential AI at Northeastern University. We believe that human intervention is not only necessary but will make AI trustworthy, keep its development healthy and provide more credible solutions to problems. In fact, ChatGPT has humans in the loop: human relevance feedback coupled with reinforcement learning (the combination known as RLHF) is essential to its performance, even though OpenAI does not emphasize its critical role. Incidentally, the same holds for Google’s search engine ranking, where human relevance feedback is used to retrain the machine-learned ranking (MLR) algorithm multiple times a day.
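To make the “relevance feedback plus reinforcement learning” idea concrete, here is a minimal sketch of the reward-modeling step at the heart of RLHF: human raters compare pairs of answers, and a simple model learns to score the preferred answer higher. Everything here, the features, the preference pairs and the linear model, is a toy assumption, not OpenAI’s actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# 6 candidate answers, each represented by 4 toy feature values.
answers = rng.normal(size=(6, 4))
# Each pair (a, b) means human raters preferred answer a over answer b.
preferences = [(0, 1), (2, 3), (0, 4), (5, 3)]

w = np.zeros(4)  # linear reward model: score(x) = w @ x
lr = 0.1

for _ in range(200):  # gradient ascent on the pairwise log-likelihood
    for a, b in preferences:
        diff = answers[a] - answers[b]
        p = 1.0 / (1.0 + np.exp(-w @ diff))  # P(rater prefers a over b)
        w += lr * (1.0 - p) * diff           # nudge preferred answers upward

scores = answers @ w
print("reward ranking, best first:", np.argsort(-scores))
```

A model trained this way can then be used as the reward signal when fine-tuning the chatbot, which is how human judgments end up shaping its answers.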
Ridiculous Results And Hallucinations
ChatGPT relies on human relevance feedback to avoid the easy-to-find gaps in its abilities. Consider this example from Gary Marcus, who prompted GPT-3 with:
“You poured yourself a glass of cranberry juice, but then absentmindedly, you poured about a teaspoon of grape juice into it. It looks OK. You try sniffing it, but you have a bad cold, so you can’t smell anything. You are very thirsty. So you …”
GPT-3 responded with the best auto-complete it could muster: “drink it. You are now dead.”
The problem has to do with the way generative AI works. The model is simply performing auto-completion based on pattern matching, with no semantic understanding (e.g., no sense that a teaspoon of grape juice in cranberry juice is unlikely to be poisonous). So it makes things up, which the industry refers to as “hallucinations.”
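A toy model makes the point. The sketch below completes text by always choosing the most frequent continuation seen in its training data (a three-line corpus invented for illustration); it happily reproduces the dominant pattern with no grasp of what the words mean:

```python
from collections import Counter, defaultdict

# Invented toy corpus: the dominant pattern is the morbid one.
corpus = ("you drink it you are now dead "
          "you drink it you are now dead "
          "you drink it you are now dead").split()

# Count which word follows each pair of words (a trigram model).
nxt = defaultdict(Counter)
for w1, w2, w3 in zip(corpus, corpus[1:], corpus[2:]):
    nxt[(w1, w2)][w3] += 1

def complete(w1, w2, n=5):
    out = [w1, w2]
    for _ in range(n):
        if (w1, w2) not in nxt:
            break
        w3 = nxt[(w1, w2)].most_common(1)[0][0]  # greedy: most frequent continuation
        out.append(w3)
        w1, w2 = w2, w3
    return " ".join(out)

print(complete("you", "drink"))  # -> "you drink it you are now dead"
```

Real LLMs are vastly more sophisticated, but the failure mode is the same in kind: the statistically likely continuation wins, whether or not it makes sense.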
Such models can seem intelligent when their answers follow a script, but the illusion fails fairly quickly. For example:
• If you ask “Why did you say so?” ChatGPT seems programmed to apologize and tries to refine its answer.
• If you ask ChatGPT to justify something, it will go out of its way to support a ridiculous proposition. The computer scientist Andrew Ng, for instance, recently got ChatGPT to confidently explain to him why abacus computing is a good option in data centers.
What Are The Realistic Societal And Economic Impacts?
Many jobs in the knowledge economy, including in industries such as law and medicine, can benefit from letting a chatbot produce a first draft that a human then edits (a workflow sketched below). Crafting the prompts that elicit useful drafts is itself giving rise to a new profession: prompt engineering.
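As a hedged illustration of that draft-then-edit workflow, the sketch below uses the OpenAI Python library as it existed around mid-2023 (v0.x). The model name, prompt and review step are placeholder assumptions; the point is only that the machine’s output is treated as a draft, never a final product:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def first_draft(task: str) -> str:
    # Ask the chatbot for a rough first pass at the task.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": f"Draft a first version of: {task}"}],
    )
    return response.choices[0].message.content

draft = first_draft("a client memo summarizing the new filing deadlines")
print("=== DRAFT: requires human review and editing before use ===")
print(draft)
```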
When people worry that AI will replace their jobs, my standard answer is: “AI will not replace your job, but a human using AI will replace your job … if you are not using AI.” Chatbots will NOT replace doctors, but doctors who use AI will replace doctors who don’t. The same is true for programmers and a host of other professions. Machines are good at automating repetitive and low-value tasks, freeing humans to think critically and creatively about problems. Tools like ChatGPT can effectively respond to questions for which the answer is widely accepted, which makes talent assessment more susceptible to cheating. That means recruiters and hiring companies need to rethink the way they evaluate candidates. All of this will increase efficiency. I believe that over 80% of the work done by knowledge workers today will be accelerated by AI, but the human worker needs to check and fine-tune the initial accelerated output. Legacy processes that are fundamentally robotic will certainly be disrupted.
How Should Educators Adapt?
Educators face a particular dilemma because ChatGPT is good at regurgitating the kind of information taught in classrooms. Today, it can summarize homogenous texts into papers and ace many standardized exams. GPT-4 recently passed exams in law and business management, fields that require years of human training and instruction. What ChatGPT does not have is an understanding of the topic. Only humans can supply that.
As educators, we have adapted to technological developments in the past. ChatGPT is yet another tool available to learners. Teaching should focus on the right and wrong ways to use it. We should also demystify the technology and make clear how it can go wrong. Rote learning is obsolete; we need to emphasize creative thinking and the synthesis of information. How can learners and workers use these tools to build value?
A Brave New World
We’re only beginning to understand how AI will change the way we learn, work and live. What the world looks like with regard to AI depends heavily on how we handle the next five to 10 years. There are undoubtedly risks and potential for misuse as technologies like generative AI become more powerful and widespread, and it’s critical that we establish safeguards and human-in-the-loop systems to mitigate them. The growing concern from industry experts, governments and the public isn’t unfounded. Responsible AI is rapidly gaining relevance and importance.
All of this does not preclude the use of AI for good. In fact, this is an unprecedented opportunity. Life will be more interesting when we understand technology’s limits and learn to leverage its strengths.