OpenAI co-founder and former chief scientist Ilya Sutskever is starting a new AI company with a focus on safety. In a post on Wednesday, Sutskever announced Safe Superintelligence Inc. (SSI), a startup with “one goal and one product”: creating safe and powerful AI systems.
The announcement described SSI as a startup with a “parallel approach to safety and functionality,” rapidly evolving AI systems while prioritizing safety. It also cited the external pressures that AI teams at companies like OpenAI, Google, and Microsoft often face, saying the company's “single-point focus” allows it to avoid “the administrative overhead and disruptions of product cycles.”
“Our business model means that safety, security, and progress are all insulated from short-term commercial pressures,” the announcement read. “This allows us to scale with peace of mind.” In addition to Sutskever, SSI was co-founded by Daniel Gross, a former Apple AI leader, and Daniel Levy, who previously worked as a technical staff member at OpenAI.
While OpenAI pursues partnerships with companies like Apple and Microsoft, SSI is unlikely to do the same anytime soon. In an interview with Bloomberg, Sutskever said SSI's first product will be safe superintelligence, and until then the company “will not do anything else.”