A departing OpenAI executive focused on safety has raised concerns about the company's direction on his way out.
Jan Leike, who resigned from his role leading the company’s “superalignment” team this week, expressed his disagreements with OpenAI leadership’s “core priorities” in a thread on X on Friday, stating that he had “reached a breaking point.”
“Alignment” or “superalignment” in the artificial intelligence sector refers to efforts to train AI systems to operate according to human needs and priorities.
Leike, who joined OpenAI in 2021, was announced last summer as co-leader of the Superalignment team, tasked with achieving “scientific and technical breakthroughs to steer and control AI systems much smarter than us.”
However, Leike said the team had been under-resourced and faced mounting obstacles. “Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done,” he said on X, adding that Thursday was his last day at OpenAI.
“Building smarter-than-human machines is an inherently dangerous endeavour… But over the past years, safety culture and processes have taken a backseat to shiny products.”
Leike’s departure, which he announced Wednesday, comes amid a broader leadership transition at OpenAI. His resignation followed an announcement by OpenAI co-founder and chief scientist Ilya Sutskever on Tuesday that he would also be leaving the company.
Sutskever stated he was departing to work on a “project that is very personally meaningful to me.” His exit is particularly notable given his pivotal role in the dramatic firing — and subsequent reinstatement — of OpenAI CEO Sam Altman last year.
CNN contributor Kara Swisher previously reported that Sutskever had been concerned that Altman was advancing AI technology “too far, too fast.”
After initially voting to remove Altman as chief executive and chairman of the board, Sutskever later reversed his stance and signed an employee letter calling for the entire board to resign and for Altman to return.
Tensions over the development and public release of AI technology may have persisted within the company even after Altman’s reinstatement.
The executive exits come on the heels of OpenAI’s announcement this week that it would offer its most powerful AI model yet, GPT-4o, for free to the public via ChatGPT.
The model is designed to turn ChatGPT into a digital personal assistant capable of real-time spoken conversations.
“I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics,” Leike wrote in his X thread on Friday.
“These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there.”
In response to Leike’s claims, OpenAI directed CNN to an X post from Altman affirming the company’s commitment to safety. “i’m super appreciative of @janleike’s contributions to openai’s alignment research and safety culture, and very sad to see him leave,” Altman said. “he’s right we have a lot more to do; we are committed to doing it. i’ll have a longer post in the next couple of days.”