OpenAI insiders have called on the company to be more transparent about the “serious risks” its AI technology poses to society.



Didem Mente/Anadolu/Getty Images

Current and former OpenAI employees have spoken out about the need for more transparency around the technology that OpenAI and similar companies are developing.



CNN

A group of OpenAI insiders is calling on artificial intelligence companies to be more transparent about the “serious risks” of AI and to protect employees who raise concerns about the technology their companies are building.

“AI companies have a strong financial incentive to avoid effective oversight,” said an open letter posted on Tuesday signed by current and former employees of AI companies including OpenAI, developer of the controversial ChatGPT tool.

They also called on AI companies to foster a “culture of open criticism” that welcomes, rather than punishes, people who raise concerns, especially as laws struggle to keep up with rapidly evolving technology.

Companies are aware of the “serious risks” AI poses, from manipulation to a loss of control, sometimes called the “singularity,” that could lead to human extinction. More should be done, the group wrote, to educate the public about these risks and how to protect against them.

Absent stronger legal requirements, the employees said, they do not believe AI companies will voluntarily share important information about their technology.

It is therefore imperative, the group argued, that current and former employees be able to speak out, and that companies not enforce non-disparagement agreements or retaliate against those who raise risk-related concerns. “Normal whistleblower protections are inadequate because they focus on illegal activity, yet many of the risks we worry about are still unregulated,” the group wrote.

Their letter comes as companies race to deploy generative AI tools in their products while government regulators, businesses and consumers grapple with responsible use. Meanwhile, many tech experts, researchers and leaders have called for a pause in the AI race, or for governments to step in and impose a moratorium.

In response to the letter, an OpenAI spokesperson told CNN that “we are proud of our track record of delivering the most capable and safest AI systems, and believe in a scientific approach to risk,” adding that the company agrees that “given the importance of this technology, a rigorous discussion is essential.”

OpenAI said it has an anonymous integrity hotline and a safety and security committee led by board members and the company's safety leaders. The company also said it does not sell personal information, build user profiles, or use that data to target or sell anything to anyone.

But Daniel Ziegler, one of the letter's organizers and a machine-learning engineer who worked at OpenAI from 2018 to 2021, told CNN that it is important to remain skeptical of the company's transparency efforts.

“It's very difficult from the outside to gauge how seriously they are taking their commitment to safety assessments and understanding of societal harms, especially when there is strong commercial pressure to act quickly,” he said. “It's really important to have the right culture and processes in place so that employees can speak up in a targeted way when they have concerns.”

He hopes that the letter will encourage more experts in the AI industry to publicly voice their concerns.

Meanwhile, Apple is widely expected to announce a partnership with OpenAI to bring generative AI to the iPhone at its annual Worldwide Developers Conference.

“We see generative AI as a key opportunity across our entire range of products and believe it gives us a differentiating advantage,” Apple CEO Tim Cook said during the company's most recent earnings call in early May.

