OpenAI Safety Employees Leaving Company, We’re Sure That’s A Good Thing

By Charlene Badasie | Published

OpenAI has slowly been losing the employees who worked to keep its technology safe. Now they are leaving in droves. Ilya Sutskever and Jan Leike, who led the super-alignment team responsible for ensuring the technology aligns with its creators’ goals and doesn’t act unpredictably or harm humanity, resigned from the company behind ChatGPT in mid-May.

Many Leaving OpenAI

They’re not the only ones who have left. Since November 2023, when OpenAI’s board tried and failed to fire CEO Sam Altman, approximately five more of the company’s most safety-focused employees have quit or been forced out.

While it may seem like former chief scientist Sutskever left because he saw something terrifying, like an AI that could destroy humanity, the real reason appears to be a loss of faith in Altman.

According to sources familiar with the company, safety-focused employees no longer trust the OpenAI CEO when it comes to safety.

“It’s a process of trust collapsing bit by bit, like dominoes falling one by one,” someone with inside knowledge of the company said anonymously. Not many employees are willing to talk about this publicly.

Questionable Off-Boarding Agreements

And where OpenAI safety is concerned, the full picture may be hard to piece together. OpenAI often requires departing workers to sign off-boarding agreements that include non-disparagement clauses.

If someone refuses to sign, they forfeit their equity in the company, potentially losing out on millions of dollars. After the agreement’s existence was made public, Altman took to social media to acknowledge that the off-boarding documents did include a clause about potential equity cancellation.

Sam Altman Almost Fired

The OpenAI safety team drama began when the board tried to fire Altman, accusing him of dishonesty. Altman and President Greg Brockman retaliated by threatening to leave and take top talent to Microsoft, forcing the board to reinstate him.

This power struggle left the relationship between Sutskever and Altman strained.

Despite their public show of friendship, Sutskever had been absent from the office since the coup attempt, working remotely on AI alignment projects.

Altman’s reaction to his firing – demanding a favorable board and threatening the company’s stability – showed his desire to maintain control and avoid scrutiny.

Manipulative And Insincere?

Critics described Altman as manipulative and insincere, prioritizing rapid development at OpenAI over safety.

His fundraising from regimes like Saudi Arabia, which could misuse AI, further alarmed safety-conscious employees. This behavior fueled growing distrust of the company’s commitments and, ultimately, a loss of faith in its values.

OpenAI’s Super-Alignment Team

OpenAI’s super-alignment team is now led by co-founder John Schulman, but it is significantly weakened. Schulman is already occupied with ensuring the safety of current products, leaving little capacity for forward-looking safety work.

The team was originally created to address future safety issues related to AGI, but it always had limited resources.

Now, even the team’s small share of researchers and its promised 20 percent of computing power might be redirected elsewhere, reducing the focus on preventing future AI risks.

Leike previously highlighted the team’s struggle for resources and emphasized the need to prepare better for future models, including their security and societal impact.

Not On A Safe Path

His concerns suggest that OpenAI is not on the right path to safely develop AGI – Artificial General Intelligence that can understand, learn, and apply knowledge across a wide range of tasks at a level comparable to human intelligence. This assessment from a leading AI safety expert signals significant worry about OpenAI’s future direction.

Source: Vox