A group of current and former OpenAI employees is calling on the developers of ChatGPT and other artificial intelligence companies to protect employees who report safety risks related to AI technology.
The open letter, published Tuesday, calls on tech companies to establish strong whistleblower protections so that researchers have the “right to speak out” about the dangers of AI without fear of retaliation.
The development of more powerful AI systems is “moving rapidly, and there are many strong incentives to rush ahead without due caution,” said Daniel Ziegler, a former OpenAI engineer and one of the letter's organizers.
Ziegler said in an interview Tuesday that he wasn't afraid to speak up internally during his time at OpenAI from 2018 to 2021, when he helped develop some of the techniques that would later make ChatGPT a huge success. But now he worries that the race to rapidly commercialize the technology is pressuring OpenAI and its competitors to ignore risks.
Daniel Kokotajlo, another co-organizer, said he left OpenAI earlier this year because he had “lost hope that they would act responsibly,” especially as OpenAI works to build AI systems that outperform humans, known as artificial general intelligence.
“They and others have embraced a 'move fast and break things' approach, which is the exact opposite of what is needed for such a powerful and poorly understood technology,” Kokotajlo said in a statement.
In response to the letter, OpenAI said it already has mechanisms in place for employees to voice concerns, including an anonymous integrity hotline.
“We are proud of our track record of delivering the most capable and safe AI systems, and believe in a science-based approach to addressing risks,” the company said in a statement. “We agree that rigorous discussion is essential given the importance of this technology, and we will continue to engage with governments, civil society and other communities around the world.”
The letter has 13 signatories, most of them former OpenAI employees, including two who work or have worked at Google's DeepMind. Four are listed anonymously as current OpenAI employees. The letter calls on companies to stop forcing employees to sign “non-disparagement” agreements that strip them of a key financial perk, their vested equity, if they criticize the company after they leave.
Following outrage on social media over the wording of its exit paperwork, OpenAI recently released all of its former employees from those agreements.
The open letter was backed by pioneering AI scientists Yoshua Bengio and Geoffrey Hinton, co-recipients of the Turing Award, computer science's highest honor, as well as Stuart Russell, all of whom have warned about the risks that future AI systems pose to human existence.
The letter comes shortly after OpenAI announced it had begun developing the next generation of the AI technology that powers ChatGPT. The company formed a new safety committee shortly after a string of leadership departures, including co-founder Ilya Sutskever, who had led a team focused on safely developing the most powerful AI systems.
The AI research community at large has long been at odds over the severity of AI's short- and long-term risks and how to balance that with the commercialization of the technology. These conflicts contributed to OpenAI CEO Sam Altman's firing and quick return last year and continue to fuel distrust in his leadership.
More recently, a product launch drew the ire of Hollywood star Scarlett Johansson, who said she was shocked by how “eerily similar” one of ChatGPT's voices sounded to her own, despite having previously turned down Altman's request to lend her voice to the system.
Several of the letter's signatories, including Ziegler, are associated with Effective Altruism, a charitable social movement whose causes include mitigating the worst potential impacts of AI. Ziegler said the letter's authors were concerned not just about the “catastrophic” future risks of out-of-control AI systems, but also about fairness, product misuse, job losses and the possibility of very realistic AI manipulating people without proper safeguards.
“I'm not so much interested in reprimanding OpenAI,” he said. “I'm more interested in how this is an opportunity for any cutting-edge AI company to do something that will really increase oversight and transparency and maybe increase public trust.”
——-
The Associated Press and OpenAI have a licensing and technology agreement that gives OpenAI access to portions of the AP's text archive.
