OpenAI has faced internal disputes and a wave of external criticism over its practices and the potential risks its technology poses.
In May, several high-profile employees left the company, including Jan Leike, former co-lead of OpenAI's “superalignment” effort to ensure that advanced AI systems stay aligned with human values. Leike's departure came shortly after OpenAI unveiled its new flagship model, GPT-4o, which it touted as “magical,” at its spring update event.
Leike reportedly left over ongoing disagreements about security measures, monitoring practices and the prioritization of flashy product releases over safety concerns.
His departure opened a Pandora's box for AI companies, as former OpenAI board members came forward with allegations of psychological abuse against CEO Sam Altman and the company's executives.
The growing turmoil within OpenAI is matched by external concerns about the potential risks posed by generative AI systems like the company's language models. Critics warn of long-term existential threats from advanced AI surpassing human capabilities, as well as more immediate risks such as job losses and the weaponization of AI for misinformation and manipulation campaigns.
In response, a group of current and former employees of OpenAI, Anthropic, DeepMind, and other major AI companies penned an open letter addressing these risks.
“We are current and former employees of cutting-edge AI companies who believe in the potential of AI technologies to bring unprecedented benefits to humanity. We also understand the serious risks these technologies pose,” the letter read.
“These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the potential extinction of humanity due to loss of control over autonomous AI systems. AI companies themselves are aware of these risks, as are governments and other AI experts around the world.”
The letter, signed by 13 employees and supported by AI pioneers Yoshua Bengio and Geoffrey Hinton, outlines four key demands aimed at protecting whistleblowers and increasing transparency and accountability around AI development.
- Companies will not enforce non-disparagement clauses or retaliate against employees who raise risk-related concerns.
- Companies will facilitate a verifiable, anonymous process for employees to raise concerns with the board, regulators and independent experts.
- Companies will support a culture of open criticism and allow employees to publicly share risk-related concerns while appropriately protecting trade secrets.
- Companies will not retaliate against employees who share confidential risk-related information after other processes have failed.
“They and others have embraced a 'move fast and break things' approach, which is the exact opposite of what's needed for such a powerful and poorly understood technology,” said Daniel Kokotajlo, a former OpenAI employee who left the company over concerns about its values and lack of accountability.
The demands come amid reports that OpenAI forced departing employees to sign non-disparagement agreements, with those who refused to sign or who criticized the company risking the loss of their vested equity. OpenAI CEO Sam Altman acknowledged that the situation was “embarrassing” but maintained that the company has never actually clawed back anyone's vested equity.
As the AI revolution forges ahead, OpenAI's infighting and its employees' calls for whistleblower protections highlight the growing pains and unresolved ethical quandaries surrounding the technology.
