In a recent announcement, OpenAI co-founder Ilya Sutskever unveiled his latest venture: Safe Superintelligence Inc., a new company dedicated to developing safe and powerful artificial intelligence. The company aims to create AI that can rival that of his former employer.
According to a statement on the company’s website, “SSI is our mission, our name, and our entire product roadmap, because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI.”
This news comes amid concerns in the tech industry about the rapid advancement of AI without sufficient research on safety and regulation. Sutskever, a key figure in the AI revolution, helped develop ChatGPT and was at the center of a leadership shakeup at OpenAI last year.
Following his departure from OpenAI last month, Sutskever expressed a desire to work on a project of personal significance. Safe Superintelligence’s launch raises questions about how it plans to monetize a “safer” AI model and what its definition of “safety” entails.
The company’s focus on building AI systems that match or surpass human intelligence underscores how near advanced AI technology may be. Safe Superintelligence aims to insulate safety and progress from short-term commercial pressures.
Joining Sutskever in this endeavor are Daniel Levy, a former OpenAI employee, and Daniel Gross, an investor and AI researcher. The company will have offices in Palo Alto, California, and Tel Aviv, Israel.