OpenAI co-founder Ilya Sutskever launches AI company that puts safety first

OpenAI co-founder and former chief scientist Ilya Sutskever has announced he is starting a new AI company focused on developing “safe superintelligence.”

Former OpenAI member Daniel Levy and former Apple AI leader Daniel Gross are also co-founders of the company, named Safe Superintelligence Inc., according to a June 19 announcement.

The company says superintelligence is “within reach,” but ensuring it is “safe” for humans is “the most important technological challenge of our time.”

The company added that it aims to become a “Straight Shot Safe Superintelligence (SSI) Lab” with technology as its sole product and safety as its number one goal.

“We are assembling an elite team of the world's best engineers and researchers who are dedicated to focusing solely on SSI.”

Safe Superintelligence said it aims to improve capabilities as quickly as possible while pursuing safety, and its focused approach means its goals won't be hindered by management, overhead costs, short-term commercial pressures or product cycles.

“That way we can scale up peacefully.”

The company added that investors support its approach of prioritizing safe development above all else.

In an interview with Bloomberg, Sutskever declined to reveal the names of funders or how much the company has raised so far, but Gross commented generally, saying that “raising money won't be an issue for the company.”

Safe Superintelligence Inc. is headquartered in Palo Alto, California, with offices in Tel Aviv, Israel.

Launch comes amid safety concerns at OpenAI

The launch of Safe Superintelligence follows controversy at OpenAI, where Sutskever was part of a group that tried to oust OpenAI CEO Sam Altman in November 2023.

Early reports, including from The Atlantic, suggested there were safety concerns at the company around the time of the controversy, while internal memos suggested Altman's attempted firing was related to a breakdown in communication between him and the company's board of directors.

Sutskever disappeared from public view for months after the incident and formally left OpenAI in May. While he didn't give any reason for his departure, recent developments at AI companies have brought the issue of AI safety to the forefront.

OpenAI employees Jan Leike and Gretchen Krueger recently left the company over concerns about AI safety, while Vox reports that at least five “safety-focused employees” have left the company since November.

In the same Bloomberg interview, Sutskever said he has a good relationship with Altman and that OpenAI is broadly aware of the new company.
