(Reuters) – A group of current and former employees at artificial intelligence (AI) companies, including Microsoft Corp-backed OpenAI and Alphabet Inc's Google DeepMind, raised concerns on Tuesday about risks posed by the emerging technology.
The open letter, signed by 11 current and former OpenAI employees along with one current and one former Google DeepMind employee, said the financial incentives of AI companies prevent effective oversight.
“We do not believe that a bespoke structure of corporate governance would be sufficient to change this,” the letter added.
It also warns of the risks posed by unregulated AI, ranging from the spread of misinformation to the loss of control of autonomous AI systems and the deepening of existing inequalities, which it says could ultimately lead to the “extinction of the human species”.
The researchers found examples of image generators from companies like OpenAI and Microsoft producing photos containing voting-related misinformation, despite policies banning such content.
The letter said AI companies have only “weak obligations” to share information with governments about the capabilities and limitations of their systems, adding that the companies cannot be relied on to share that information voluntarily.
The open letter is the latest to raise safety concerns about generative AI technology that can quickly and cheaply generate human-like text, images and voice.
The group is calling on AI companies to facilitate a process for current and former employees to raise risk-related concerns and to refrain from enforcing non-disclosure agreements that prohibit criticism.
Meanwhile, OpenAI, the Sam Altman-led company, said last Thursday it had disrupted five covert influence operations that sought to use its artificial intelligence models for “deceptive activity” online.
(Reporting by Jaspreet Singh in Bengaluru; Editing by Tasim Zahid)
