Ilya Sutskever, OpenAI Co-Founder, Announces New AI Startup: Safe Superintelligence

OpenAI co-founder Ilya Sutskever, who left the artificial intelligence startup last month, has announced a new AI company he calls Safe Superintelligence (SSI).

“I am starting a new company,” Sutskever wrote on X on Wednesday. “We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product.”

In addition to serving as chief scientist at OpenAI, Sutskever co-led the Superalignment team alongside Jan Leike, who departed the company in May to join rival AI startup Anthropic.

The Superalignment team at OpenAI, which was tasked with steering and controlling AI systems more capable than humans, was disbanded soon after Sutskever and Leike announced their exits.

At his new startup, Sutskever intends to maintain his focus on safety.

“SSI is our mission, our name, and our entire product roadmap, because it is our sole focus,” an account for SSI posted on X. “Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.”

Sutskever co-founded the company with Daniel Gross, who led Apple’s AI and search efforts, and Daniel Levy, formerly of OpenAI. The company has offices in Palo Alto, California, and Tel Aviv, Israel.

Sutskever was one of the OpenAI board members who tried to oust Sam Altman in November. Altman, Sutskever and other board members clashed over the guardrails OpenAI had put in place for the development of advanced AI.

Following Altman’s abrupt ouster and swift return to the job, Sutskever publicly apologized for his role in the ordeal.

“I deeply regret my participation in the board’s actions,” Sutskever wrote in a post on X on Nov. 20. “I never intended to harm OpenAI. I love everything we’ve built together and I will do everything I can to reunite the company.”
