After dissolving its previous oversight board in mid-May, OpenAI said on Tuesday that it had formed a new safety and security committee led by senior executives.
The new committee will be responsible for making recommendations to OpenAI’s board on “critical safety and security decisions for OpenAI projects and operations,” the company said.
News of the new committee comes as the developer of the ChatGPT virtual assistant announced that it has begun training its “next frontier model.”
The company said in a blog post that it anticipates “the resulting systems to bring us to the next level of capabilities on our path to AGI,” or artificial general intelligence, a type of AI that is as smart as or smarter than humans.
The blog post states that Bret Taylor, Adam D’Angelo, and Nicole Seligman—all members of OpenAI’s board of directors—will be on the safety committee in addition to CEO Sam Altman.
The new oversight team takes the place of the previous committee, which looked into the long-term risks associated with AI. Before that committee was dissolved, its two leaders, OpenAI co-founder Ilya Sutskever and researcher Jan Leike, announced their departures from the Microsoft-backed company.
In a post earlier this month, Leike wrote that at OpenAI, “safety culture and processes have taken a backseat to shiny products.” Altman expressed his sadness over Leike’s departure on the social media platform X, saying that OpenAI still had “a lot more work to do.”
According to the blog post, the safety committee will evaluate OpenAI’s processes and safeguards over the next 90 days and present its findings to the board of directors. OpenAI will provide an update on the recommendations it adopts at a later date.
AI safety has taken center stage in a wider debate as the massive models that power apps like ChatGPT grow more sophisticated. AI developers also continue to ask when AGI will arrive and what risks will accompany it.
Source – CNBC News