A group of former and current OpenAI employees, along with current and former Google DeepMind and Anthropic employees, published a letter on Tuesday calling for whistleblower protections, saying artificial intelligence (AI) safety needs an open, public debate.
The letter, signed by more than a dozen people, some named and some anonymous, says the existential risks posed by AI outweigh the desire of companies to keep secrets in a competitive marketplace.
The letter calls on AI companies not to enforce non-disparagement agreements, prohibit criticism, or use vested economic interests to retaliate; to facilitate anonymous reporting of risks by current employees; to support a culture of open criticism; and not to retaliate against current and former employees who publicly raise risk-related concerns.
“We believe that AI companies have strong economic incentives to avoid effective oversight and that bespoke structures of corporate governance will not be enough to change this,” the letter said. “AI companies possess substantial non-public information about the capabilities and limitations of their systems… However, there are currently only limited obligations to share information with governments and no obligations to share information with civil society, and we cannot expect all companies to share information voluntarily.”
OpenAI is currently embroiled in multiple lawsuits over its ChatGPT chatbot, which was released in November 2022. ChatGPT kicked off an arms race in generative AI (GenAI) among big tech companies such as Google, Microsoft, Nvidia, and Amazon, as businesses and consumers race to adopt new AI technologies. The GenAI market is expected to exceed $1.3 trillion within a decade, according to Bloomberg.
Elaborating on their concerns in a thread on X, former OpenAI employee and letter signatory Jacob Hilton said the group is “calling on all cutting-edge AI companies to provide assurances that employees who responsibly disclose risk-related concerns will not be retaliated against… Historically at OpenAI, employees have been threatened in their contracts with losing vested benefits if they are fired for 'good cause,' including breach of confidentiality. If employees find that the company has reneged on its promises, they have no one to turn to outside the company.”
Last month, CNBC reported that OpenAI had ended its practice of forcing employees to choose between signing non-disparagement agreements or forfeiting their vested shares in the company.
Hilton praised OpenAI for dropping the non-disparagement agreement, but said the company needs to do more to protect whistleblowers. “Employees may still fear other forms of retaliation for disclosing information, such as termination or lawsuits for damages,” he wrote.
The letter was also endorsed by several notable figures in the technology world, including Geoffrey Hinton, Yoshua Bengio, and Stuart Russell.
The letter states that the technology poses significant risks: “These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the potential extinction of the human race due to loss of control over autonomous AI systems.”
“We agree that rigorous discussion is important given the importance of this technology, and we will continue to engage with governments, civil society, and other communities around the world,” an OpenAI spokesperson said in a statement to InformationWeek.
OpenAI also said it had removed non-disparagement clauses from documents for departing employees. The company said it has a track record of not releasing technology without safeguards, citing its Voice Engine and its Sora video model, whose public releases have been delayed, as examples.
Eric Noyes, founder of his university's AI lab and associate professor of entrepreneurship, said in an email interview with InformationWeek that the letter is a step in the right direction toward responsible AI transparency. “Given the unique importance of AI to the future of human innovation, and the very real risks it could pose, this call makes a lot of sense.”
“This shows once again that the world's most powerful technology companies need practical, tactical oversight, as their incentives may be at odds with those of society as a whole,” he added.
