OpenAI executive slams Sam Altman for choosing 'shiny products' over AI safety

OpenAI CEO Sam Altman
Justin Sullivan/Getty Images

  • A top safety executive at the AI company OpenAI resigned on Tuesday.
  • Jan Leike said he had reached a “breaking point.”
  • He added that Sam Altman's company prioritized “shiny products” over safety.

OpenAI's former head of safety is speaking out.

On Tuesday night, Jan Leike, a leader of the artificial intelligence company's Superalignment team, announced his departure in a terse post on X that said simply, “I resigned.”

Now, three days later, Leike has elaborated on his departure, saying OpenAI doesn't take safety seriously enough.

“Over the past years, safety culture and processes have taken a backseat to shiny products,” Leike wrote in a lengthy thread on X on Friday.

In the thread, Leike said he joined OpenAI because he thought it would be the best place in the world to research how to “steer and control” artificial general intelligence, or AGI, AI that can think faster than humans.

“However, I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point,” Leike wrote.

The former OpenAI executive said the company should devote most of its attention to issues of “security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics.”

But Leike said his team, which worked on aligning AI systems with what's best for humanity, was “sailing against the wind” at OpenAI.

“We are long overdue in getting incredibly serious about the implications of AGI,” he wrote, adding that “OpenAI must become a safety-first AGI company.”

Leike concluded the thread with a note to OpenAI employees, encouraging them to change the company's safety culture.

“I'm counting on you. The world is counting on you,” he said.

Resignations at OpenAI

Leike and Ilya Sutskever, the Superalignment team's other leader, left OpenAI within hours of each other on Tuesday.

In a statement on X, Altman praised Sutskever as “arguably one of the greatest minds of our generation, a leader in our field, and a dear friend.”

“OpenAI would not be what it is without him,” Altman wrote, adding that while Sutskever has personally meaningful work ahead of him, he is “forever grateful” for what Sutskever did at the company and committed to finishing the mission they started together.

Altman did not comment on Leike's resignation.

On Friday, Wired reported that OpenAI had disbanded the AI risk team the two men led. According to Wired, researchers investigating the dangers of AI going rogue will be absorbed into other parts of the company.

OpenAI did not respond to a request for comment from Business Insider.

The AI company, which recently debuted a new large language model, GPT-4o, has been rocked by high-profile shake-ups in recent weeks.

In addition to Leike's and Sutskever's resignations, Vice President of People Diane Yoon and Head of Nonprofit and Strategic Initiatives Chris Clark have also left the company, The Information reported. And last week, BI reported that two other safety researchers had left the company.

One of those researchers later wrote that he had “lost confidence” that OpenAI would act responsibly as it approaches AGI.

Axel Springer, Business Insider's parent company, has a global deal that allows OpenAI to train models based on its media brands' reporting.
