OpenAI is training a new model aimed at surpassing GPT-4 as it continues its pursuit of artificial general intelligence (AGI).
On Tuesday, the company announced the creation of a safety and security committee led by senior executives, following the disbandment of its previous oversight team in mid-May.
The newly formed committee will be responsible for making recommendations to the board on critical safety and security decisions for OpenAI's projects and operations, according to a company statement.
This announcement coincides with OpenAI's declaration that it has begun training its “next frontier model.”
The company said in a blog post that it expects the “resulting systems to bring us to the next level of capabilities on our path to AGI,” which it envisions as AI that matches or exceeds human intelligence.
The safety committee will include OpenAI CEO Sam Altman, alongside board members Bret Taylor, Adam D’Angelo, and Nicole Seligman.
The new oversight team was established after the dissolution of a prior team focused on the long-term risks of AI, which disbanded following the departures of OpenAI co-founder Ilya Sutskever and key researcher Jan Leike. Earlier this month, Leike said the company’s “safety culture and processes” had taken a backseat to product development.
In response, Altman expressed his regret over Leike's departure and acknowledged that OpenAI has “a lot more to do.”
Over the next 90 days, the safety committee will assess OpenAI’s processes and safeguards and will present its recommendations to the board.
The company has pledged to publicly share updates on the recommendations it adopts.
AI safety has become a central issue in the broader debate surrounding the development of advanced AI models like ChatGPT.
As these models grow more sophisticated, there is increasing speculation about the arrival of AGI and the potential risks it may pose.