Published by https://ai.northeastern.edu/ on May 1, 2023
In recent months, Twitter, Microsoft, Amazon, Google, and Meta have all eliminated or reduced their AI ethics teams. The timing is odd given the competitive frenzy kicked off by the release of ChatGPT. After Microsoft’s $10 billion investment in OpenAI (makers of ChatGPT), Google hastily released its own chatbot, Bard. And last month, Meta unveiled LLaMA, a 65-billion-parameter language model that, unlike the models behind ChatGPT, is openly available to researchers. The explosion of R&D alongside mass layoffs seems to indicate a belief among tech giants that AI ethics and AI innovation are mutually exclusive. But is that true?
Here are five things to know about Responsible AI from EAI experts:
“While we are all fascinated with the apparent eloquence and ‘fluency’ of chatbots like ChatGPT, it is important that we do not confuse these with ‘intelligence’—we are far from systems that have a semantic understanding of what they are saying. We are also far from systems that have reasoning capabilities—including common-sense reasoning—which remains elusive for machines and strictly in the domain of humans. In such an environment, it is particularly important that we create guardrails for how to use the technology while avoiding serious ethical issues. It is disheartening to see the big players in AI disassemble their ethics teams, but it serves as motivation for EAI to double down on its own Responsible AI practice and on delving deeper into what it would take to develop trust in AI.”
—USAMA FAYYAD, EXECUTIVE DIRECTOR
“Industry cannot ask for self-regulation and at the same time get rid of its AI ethics teams. It’s a complete contradiction. I am worried about the near future, though not an apocalyptic one. We can now generate videos with the right face, the right voice, and a false message, so it will be very hard to distinguish what is true. In the future, not even a Zoom call may help, as there will be perfect avatars of us. If we do not do something, democracy might be in danger. We need to regulate the unethical usage of AI. We need to stop irresponsible AI, but we do not know how, because enforcement is not trivial.”
—RICARDO BAEZA-YATES, DIRECTOR OF RESEARCH
“Ethical decisions and value judgements are inherent parts of the innovation process, and they are necessarily made on a day-to-day basis. Dismissing ethics only means that these decisions are more likely to be misarticulated, misguided, opaque, and untraceable. It will be interesting to watch how the market reacts to the actions taken by the tech giants. Will other companies follow suit and race to the bottom, or will they take the opportunity to offer better products? Ethics is not an abstract ideal. Consumers—especially institutional consumers in sectors like healthcare, public safety, and finance—prefer ethically robust products, as they reduce regulatory and reputational risks and provide an edge over competitors.”
—CANSU CANCA, ETHICS LEAD
“With unprecedented levels of data becoming available and more powerful AI tools helping us make sense of that information, organizations that produce AI need to focus on how to use these tools as a force for human empowerment. The tools don’t have to be perfect for us to seize the opportunity to do better than our current systems. Unfortunately, we seem to be going in the opposite direction or, at the very least, not putting in place the personnel and processes to ask and answer: How might we use these tools for good? Discussion about the far-off future or a robot apocalypse may be distracting attention from how we can leverage the technology to design better mechanisms for participatory oversight.”
—BETH NOVECK, CORE FACULTY
“Generative AI models are full of biases. They do not have access to verifiable information, so they can easily offer false and even dangerous advice, not to mention confidently present false answers as factual information. We are in the midst of another steam-engine-and-horseless-carriage-like shake-up, and it will take some time until we figure things out. Yes, we should be careful, cautious, and even a little concerned. And we should not accept the tech giants’ dismissal of the responsible and ethical concerns around the use of AI. At the same time, we should not be alarmists. The combined wisdom of academics, researchers, ethicists, and concerned decision-makers will, in the end, figure out how to do this right.”
—WALID SABA, SENIOR RESEARCH SCIENTIST