Just weeks after OpenAI dissolved a team focused on AI safety, the company has established a new committee aimed at enhancing safety and security. OpenAI also announced on Tuesday, May 28, that it has begun training its next AI model.
In a blog post, OpenAI said the new committee will be led by CEO Sam Altman, board chair Bret Taylor, and board members Adam D’Angelo and Nicole Seligman.
The Safety and Security Committee’s first task will be to evaluate and further develop OpenAI’s processes and safeguards over the next 90 days. After that, the group will share its recommendations with OpenAI’s full board, which will then decide how to act on them.
The move follows the departures earlier in May of OpenAI safety executive Jan Leike and company co-founder Ilya Sutskever. Both served on the company’s Superalignment team, which was dedicated to foreseeing and stemming potential issues caused by advanced AI.
In a thread on X about his departure, Leike criticized the company, saying he had “reached a breaking point” and that “over the past years, safety culture and processes have taken a backseat to shiny products.”
OpenAI said the Safety and Security Committee will also work with the company’s technical and policy experts and other cybersecurity officials.
Also in Tuesday’s blog post, OpenAI confirmed it has started training its next large language model, the successor to GPT-4. The new model is expected to be unveiled later this year.
“While we are proud to build and release models that are industry-leading on both capabilities and safety,” OpenAI’s post said, “we welcome a robust debate at this important moment.”