
Ilya Sutskever, a co-founder and former chief scientist of OpenAI, is taking a bold step in the AI industry with the launch of Safe Superintelligence Inc. (SSI).
This new venture is set to focus on developing a safe and powerful AI system, steering clear of the usual commercial pressures.
Sutskever introduced SSI on Wednesday, emphasizing its singular goal: building a powerful AI system that is safe by design.
He framed the company's approach of advancing AI capabilities while keeping safety ahead of them as a departure from the path taken by OpenAI, Google, and Microsoft.
“Our business model means safety, security, and progress are all insulated from short-term commercial pressures,” the announcement reads. “This way, we can scale in peace.”
Joining Sutskever in this mission are co-founders Daniel Gross, a former AI lead at Apple, and Daniel Levy, a former technical staff member at OpenAI.
Their combined expertise sets a strong foundation for SSI’s ambitious goals.
Last year, Sutskever played a central role in the board's push to remove OpenAI CEO Sam Altman; he ultimately left the company himself in May.
His exit was followed by the resignations of AI researcher Jan Leike and policy researcher Gretchen Krueger, both of whom cited safety concerns at OpenAI.
While OpenAI forges ahead with partnerships with Apple and Microsoft, SSI is staying single-minded.
In an interview with Bloomberg, Sutskever said that SSI’s first product will be safe superintelligence and that the company “will not do anything else” until that goal is achieved.
Safe Superintelligence Inc. stands as a beacon of hope for those concerned about the ethical development of AI, promising a future where progress and safety walk hand in hand.