OpenAI reportedly disbands the Superalignment team that investigated the risks of "rogue" AI

Ilya Sutskever played a key role in the ouster of Sam Altman last year and recently announced he was leaving the company.
Jack Guez/Getty

  • OpenAI's Superalignment team was established in July 2023 to mitigate AI risks, including the potential for AI to "go rogue."
  • OpenAI has reportedly disbanded the Superalignment team following the resignations of its co-leaders.
  • One former co-leader criticized OpenAI in a post on X for prioritizing "shiny" products over safety.

Wired first reported that OpenAI disbanded its Superalignment team the same week it released GPT-4o, its most human-like AI to date.

OpenAI established the Superalignment team in July 2023, co-led by Ilya Sutskever and Jan Leike. The team focused on mitigating the risks of AI, including the potential for AI to “go rogue.”

The team reportedly disbanded days after its leaders, Ilya Sutskever and Jan Leike, announced their resignations earlier this week. Sutskever said in his post that he is confident OpenAI will build AGI that is "both safe and beneficial" under its current leadership.

He added that he was "excited for what comes next," calling it a project that is "very personally meaningful" to him. The former executive declined to provide further details but said he would share more in due course.

Sutskever, a co-founder and the former chief scientist of OpenAI, made headlines when he announced his resignation. The executive was involved in the firing of CEO Sam Altman in November, and despite later expressing regret for having contributed to it, his future at OpenAI had been in doubt since Altman's return.

Following Sutskever's announcement, Leike posted on X (formerly Twitter) that he was also leaving OpenAI. The former executive released a series of posts Friday explaining his resignation, which he said came after disagreements over the company's core priorities for “quite some time.”

Leike said the team had been "sailing against the wind" and struggling to get compute for its research. The Superalignment team's mission included dedicating 20% of OpenAI's computing power over the next four years to building "a roughly human-level automated alignment researcher," according to OpenAI's announcement of the team last July.

"OpenAI must become a safety-first AGI company," Leike added. He said building smarter-than-human machines is an "inherently dangerous endeavor" and that OpenAI has been more concerned with releasing "shiny products" than with safety.

Jan Leike did not respond to a request for comment.

The Superalignment team's goal was to "solve the core technical challenges of superintelligence alignment in four years," a goal the company acknowledged was "incredibly ambitious" and not guaranteed to succeed.

The risks the team addressed included "misuse, economic disruption, disinformation, bias and discrimination, addiction, and overreliance." The company said in a post that the new team's work was in addition to OpenAI's existing efforts to improve the safety of current models such as ChatGPT.

Wired reports that some of the remaining members of the team have joined other OpenAI teams.

OpenAI did not respond to requests for comment.

Axel Springer, Business Insider's parent company, has a global deal that allows OpenAI to train models based on its media brands' reporting.


