Ilya Sutskever, co-founder and longtime chief scientist of OpenAI, has launched a new venture called Safe Superintelligence Inc. (SSI), roughly a month after his official departure from the company. He founded SSI together with Daniel Gross, formerly a partner at Y Combinator, and Daniel Levy, a former OpenAI researcher.
Sutskever's Role at OpenAI
While at OpenAI, Sutskever played a central role in directing the company's efforts to make highly capable AI systems safe. Together with Jan Leike, he co-led the OpenAI Superalignment team, which was tasked with managing the risks posed by future AI systems that exceed human intelligence.
However, both Sutskever and Leike left the company in May following a significant disagreement with OpenAI's leadership over its approach to AI safety. Leike has since joined Anthropic, another AI company, where he now leads a team focused on alignment.
Sutskever's concern about the potential dangers of advanced AI is long-standing. In a 2023 blog post co-authored with Leike, he predicted that AI exceeding human intelligence could arrive within the next decade. The post argued that such superintelligent systems would not necessarily be benevolent, underscoring the need for rigorous research into methods to control and restrict them.
Founding Safe Superintelligence Inc.
Sutskever's dedication to AI safety remains undiminished as he embarks on his new journey with SSI. The post announcing the company's launch highlighted its singular focus: "SSI is our mission, our name, and our entire product roadmap, because it is our sole focus." The announcement added that the company's team, investors, and business model are all aligned toward that goal, and that it treats safety and capabilities as technical problems to be solved in tandem through engineering and scientific breakthroughs.
The vision for SSI is to advance AI capabilities at a rapid pace while ensuring that safety measures outpace technological advancements, thereby allowing for scalable development without compromising security. The business model is structured to prioritize safety, security, and progress, insulating the company from immediate commercial pressures and distractions related to management or product cycles.
Company Structure and Funding
Sutskever shared more insights about SSI in a conversation with Bloomberg, although he refrained from commenting on the company's funding status or valuation. Unlike OpenAI, which began as a non-profit in 2015 and was later restructured to meet the financial demands of its computing needs, SSI was established as a for-profit entity from the outset.
Given the intense interest in AI and the founders' track records, the company is expected to attract substantial investment quickly. Co-founder Daniel Gross has suggested that raising capital will not be a challenge for SSI.
SSI is actively setting up operations in Palo Alto and Tel Aviv and is in the process of recruiting top technical talent to join its ranks.