OpenAI co-founder Ilya Sutskever raises $1 billion for his venture

Anamika Dey, editor 

Brief news

  • Ilya Sutskever, co-founder of OpenAI, has secured $1 billion in funding for his new AI company, Safe Superintelligence (SSI).
  • Investors in SSI include Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel, among others.
  • Sutskever’s departure from OpenAI was followed by the disbanding of the Superalignment team and the departure of Jan Leike, who joined rival AI firm Anthropic.

Detailed news

Ilya Sutskever, the co-founder of OpenAI, has secured $1 billion in funding from investors for his new AI company, Safe Superintelligence (SSI), following his departure from the artificial intelligence startup in May.

In a post on X, the company disclosed that investors included Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel, as well as NFDG, an investment partnership co-managed by SSI executive Daniel Gross.

In May, Sutskever announced the new venture on X, stating, “We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product.”

Sutskever served as OpenAI’s chief scientist and co-led the company’s Superalignment team with Jan Leike. Leike departed in May to join the rival AI firm Anthropic, and OpenAI disbanded the team shortly after their departures, just a year after announcing it. At the time, a person familiar with the situation told CNBC that some team members were reassigned to other teams within the organization.

At the time, Leike stated in a post on X that OpenAI’s “safety culture and processes have taken a backseat to shiny products.”

Sutskever founded SSI with Daniel Gross, who previously led Apple’s AI and search efforts, and Daniel Levy, a former OpenAI employee. The company maintains offices in Tel Aviv, Israel, and Palo Alto, California.

The company said on X: “SSI is our sole focus, our name, and our entire product roadmap. Our business model ensures that safety, security, and progress are insulated from short-term commercial pressures, and our singular focus means that there is no distraction from management overhead or product cycles.”

In November, Sutskever was one of the OpenAI board members involved in the brief removal of co-founder and CEO Sam Altman.

Altman’s communications with the board had not been “consistently candid,” according to a statement the OpenAI board issued in November. The situation quickly escalated. According to The Wall Street Journal and other media outlets, Sutskever was focused on ensuring that artificial intelligence would not harm humans, while others, including Altman, were more eager to push ahead with delivering new technology.

In response to the board’s action, nearly all of OpenAI’s employees signed an open letter indicating that they would be departing. Altman returned to the organization several days later.

Sutskever publicly apologized for his role in the ordeal shortly after Altman’s abrupt dismissal and before his swift reinstatement.

In a post on X on November 20, Sutskever expressed deep regret for his involvement in the board’s actions, saying he had never intended to harm OpenAI, that he was deeply appreciative of what the company had built together, and that he would do everything he could to reunite it.

Source: CNBC News
