In an open letter, former OpenAI researchers called for greater transparency about AI risks.

A group of machine learning researchers today published an open letter urging the technology industry to develop advanced artificial intelligence models in a more transparent manner.

The letter, titled “The Right to Warn About Advanced Artificial Intelligence,” has 13 signatories. The group includes current and former researchers from OpenAI, Alphabet Inc.'s Google DeepMind research group, and Anthropic PBC. The letter is supported by three prominent computer scientists known for their foundational contributions to machine learning: Yoshua Bengio, Geoffrey Hinton, and Stuart Russell.

The signatories argue that companies such as OpenAI have substantial data about potential risks associated with their AI models. Some of this data is not publicly available, the letter notes, and there are no regulations requiring AI developers to disclose the information. As a result, the signatories argue that current and former employees of machine learning companies “are among the few who can hold companies accountable to the public.”

The letter goes on to outline four steps that AI providers should take to enable their employees to share with the public the risks they identify.

The signatories' first recommendation is for companies to “support a culture of open criticism.” AI providers building cutting-edge models “should enable current and former employees to raise risk-related concerns about their technology to the public, the company's board of directors, regulators, or appropriate independent organizations with relevant expertise,” according to the letter.

The signatories argue that companies should also create a process for employees to anonymously share concerns about AI risks. “Typical whistleblower protections are inadequate because they focus on illegal activity, yet many of the risks we worry about are still unregulated,” the researchers backing the effort explained in the letter.

Two other best practices recommended by the signatories focus on protecting employees from retaliation for reporting AI risks. The letter states that a company's transparency efforts should include a pledge “not to retaliate against risk-related criticism by interfering with vested economic interests,” among other commitments.

The letter comes a few weeks after news surfaced that OpenAI had included non-disparagement clauses in employee severance contracts, under which employees who criticized the company or declined to accept the clause could lose all of their vested benefits. Days after the practice came to light, OpenAI announced that it would not enforce the provision.

More recently, the company formed a safety and security committee tasked with ensuring that its AI research is conducted safely. The committee is made up of OpenAI CEO Sam Altman, three board members, and five engineering executives. In conjunction with the committee's formation, the company said it had recently begun training a successor to GPT-4.

Image: Unsplash
