Former OpenAI Chief Scientist Announces New Safety-Focused Company
Ilya Sutskever, a co-founder and former chief scientist of OpenAI, announced on Wednesday that he’s launching a new venture dubbed Safe Superintelligence Inc. Sutskever said on X that the new lab will focus solely on building a safe “superintelligence”—an industry term for a hypothetical system that’s smarter than humans.
Sutskever is joined at Safe Superintelligence Inc. by co-founders Daniel Gross, an investor and engineer who worked on AI at Apple until 2017, and Daniel Levy, another former OpenAI employee. The new U.S.-based firm will have offices in Palo Alto, Calif., and Tel Aviv, according to a description Sutskever shared.
Sutskever was one of OpenAI’s founding members, and was chief scientist during the company’s meteoric rise following the release of ChatGPT. In November, Sutskever took part in the infamous attempt to oust OpenAI CEO Sam Altman, only to later change his mind and support Altman’s return. When Sutskever announced his resignation in May, he said he was “confident that OpenAI will build AGI that is both safe and beneficial” under Altman’s leadership.
Safe Superintelligence Inc. says it will aim to release only one product: the system in its name. This single-product approach, its founders wrote, will insulate the company from commercial pressures. However, it’s currently unclear who will fund the new venture’s development or what exactly its business model will eventually be.
“Our singular focus means no distraction by management overhead or product cycles,” the announcement reads, perhaps subtly taking aim at OpenAI. In May, another senior OpenAI member, Jan Leike, who co-led a safety team with Sutskever, accused the company of prioritizing “shiny products” over safety. Leike’s accusations came around the time that six other safety-conscious employees left the company. Altman and OpenAI president Greg Brockman responded to Leike’s accusations by acknowledging there was more work to be done, saying “we take our role here very seriously and carefully weigh feedback on our actions.”
Read more: A Timeline of All the Recent Accusations Leveled at OpenAI and Sam Altman
In an interview with Bloomberg, Sutskever elaborated on Safe Superintelligence Inc.’s approach, saying, “By safe, we mean safe like nuclear safety as opposed to safe as in ‘trust and safety.’” Notably, one of OpenAI’s core safety principles is to “be a pioneer in trust and safety.”
While many details about the new company remain to be revealed, its founders have one message for those in the industry who are intrigued: They’re hiring.