Former OpenAI Chief Scientist Ilya Sutskever Launches Safe AI Startup

Ilya Sutskever, the former chief scientist at OpenAI, has launched a new AI venture following his departure from the company in May. His exit came months after a notable clash with CEO Sam Altman last November, when Sutskever and other board members unsuccessfully attempted to oust Altman.

Joining forces with Daniel Levy, a former OpenAI colleague, and Daniel Gross, who previously led AI efforts at Apple, Sutskever has founded Safe Superintelligence Inc. (SSI). The new company's mission is spelled out in its name: to develop superintelligent AI that is safe and beneficial.

SSI aims to tackle what its founders describe as the most pressing technical problem of our time: building safe superintelligent AI. Many researchers expect that once machines reach human-level intelligence, known as Artificial General Intelligence (AGI), they could continue to advance into Artificial Superintelligence (ASI), a stage that may pose significant risks.

Sutskever’s concern about these risks is not new; he has long advocated for safeguards in AI development. The founding of SSI underscores his commitment to ensuring that future advances in AI benefit humanity while mitigating potential threats.