The company that OpenAI co-founder and former chief scientist Ilya Sutskever formed in June following his departure from the organization has already raised $1 billion in venture capital funding, according to a Wednesday post on the social network X.
Sutskever left OpenAI in May, six months after he and other board members pushed fellow co-founder Sam Altman out over concerns about his honesty. In the message announcing his departure, he said, “I am excited for what comes next — a project that is very personally meaningful to me about which I will share details in due time.”
That project turned out to be Safe Superintelligence Inc. (SSI), which he co-founded with Daniel Gross, co-founder of the search engine company Cue (acquired by Apple in 2013), and Daniel Levy, a former OpenAI researcher.
SSI, as described by Sutskever in his announcement, is a company that approaches “safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.”
He went on to say that the company’s business model “means safety, security and progress are all insulated from short-term commercial pressures. This way we can scale in peace.”
The message appears to have hit home; the post from SSI announcing the funding reads: “SSI is building a straight shot to safe superintelligence. We’ve raised $1B from NFDG, a16z, Sequoia, DST Global, and SV Angel. We’re hiring.”
Reuters reported Wednesday that the company, which currently has 10 employees, plans to use the funds to acquire computing power and to hire top talent, who will be based either in Palo Alto, California, or Tel Aviv, Israel.
Asked for his thoughts on the new funding, Brandon Purcell, VP and principal analyst at Forrester Research, said, “Theoretically, there is an inherent competition between OpenAI and SSI, which both have the same purported objective — to achieve artificial general intelligence (AGI).”
Practically speaking, he said, the “two are fundamentally quite different. SSI plans to create one product — ‘a safe superintelligence.’ OpenAI, on the other hand, has created and will continue to create many commercializable products en route to the same destination.”
While the destination of AGI may eventually be within reach, said Purcell, “in the interim, the bulk of funding and market attention will continue to go to the entity generating revenue, which is OpenAI. This will enable it to hire the best talent, source new data, and continue to innovate. Plus, it has a ton of cloud computing credits from Microsoft, which will be a massive expense for SSI.”
Purcell added that while he does not see SSI as a threat to OpenAI, “I do think it’s interesting both company names are oxymorons. As we know, AI is anything but open — transformer models defy explanation and the companies creating them are similarly opaque when it comes to what data they use for training. ‘Safe Superintelligence’ is also an oxymoron.”
He said that after spending the better part of the past year researching AI alignment, he has concluded that “misalignment is an inevitable outcome of our current approach to AI. As these systems become ‘superintelligent,’ they will be far less controllable, and far less safe. It’s a laudable, and ultimately flawed, goal.”
Andrew Sharp, research director at Info-Tech Research Group, said of the funding, “Venture capitalists are still willing to place enormous, long-term bets on companies that they believe can significantly push the frontier of AI model development. This investment is a signal that model safety is as critical to the cutting edge of this technology as reliability, accuracy, and performance.”
This funding round, he said, “will allow SSI to pay for the expensive computing resources required to train a new model. It’s unclear if SSI has any special arrangements with major cloud computing providers to acquire computing resources, in the way that OpenAI did with Microsoft, and Anthropic did with AWS.”
For technology leaders responsible for AI solutions, said Sharp, “safety is about more than just the model. Every organization should govern their own AI implementations to create applications that are safe, fair, and secure for their internal and external constituents.”