Just six months ago, OpenAI was a model of unity as hundreds of employees and managers threatened to quit the startup en masse unless cofounder Sam Altman was rehired as CEO. The gambit led to Altman’s return just days after he was fired by OpenAI’s board—a dramatic display of the power of an organization marching in lockstep, aligned around a common mission.
Today, that unified company feels like a distant memory. A series of scandals has tarnished OpenAI’s image, and several prominent insiders have quit, accusing the company of abandoning its principles.
The chaotic recent events reveal a company that suddenly appears to be on the brink of civil war as it grapples with a crisis of leadership and direction. And as the turmoil continues to unfold, the company that for years has set the pace of generative AI is now engulfed in uncertainty about its own future.
“There is something endemic inside of OpenAI that is coming to the surface through one significant issue after another,” said Daniel Newman, principal analyst at the Futurum Group. “There’s this culture that has risen up out of super-hyper-growth, out of being the most important company in one of the biggest trends in history, and now there are little signs of stress fractures.”
The future of OpenAI is being closely watched not just because of the company’s stature as the creator of ChatGPT. OpenAI’s technology is at the heart of Microsoft’s AI products, as well as the foundation on which countless startups have built their AI strategies. Disruptions, and even mere distractions, within OpenAI could have big ripple effects.
Some of OpenAI’s current woes may be the result of a clash of Silicon Valley cultures. Since the end of 2022, when OpenAI launched ChatGPT and became a household name, hundreds of new employees have joined the San Francisco startup. The newcomers often have product, sales, and marketing backgrounds, brought on to help OpenAI ramp up its business. It’s a stark contrast to the earlier employees, who hail from the AI research and safety communities and who joined the company when it was a noncommercial, open-source research lab focused squarely on a mission of reaching what it defines as artificial general intelligence (AGI).
Tech historians may well place the blame at the feet of the company’s high-profile leader. Altman reclaimed his OpenAI throne last November, but in just the past couple of weeks he has hardly been handled with kid gloves. Instead, he has come under scrutiny for a laundry list of PR problems, including accusations from actor Scarlett Johansson that GPT-4o’s “Sky” voice sounded like her (and Her); leaks regarding OpenAI’s aggressive tactics against former employees; and news that the company failed to provide its Superalignment team with promised compute.
Altman’s direct involvement in the recent string of departures from OpenAI is not clear. But given the nature of the exits, particularly those involving high-ranking members of the AI safety, policy, and governance teams, it is difficult to imagine that the top-down tone set by the CEO is not connected in some way. Among the recent departures is OpenAI chief scientist and former board member Ilya Sutskever, who led the board’s push last year to fire Altman for not being “consistently candid.” Then there was the resignation of Jan Leike, a longtime OpenAI researcher who, with Sutskever, co-led a team called Superalignment, which focused on ensuring that future superhuman AI systems could be safely controlled. On his way out, Leike declared that safety culture and processes at OpenAI have taken a back seat to “shiny products.”
And in announcing her departure on Wednesday, OpenAI policy researcher Gretchen Krueger suggested the company was sowing divisions among its teams concerned with ethics, safety, and governance. “One of the ways tech companies in general can disempower those seeking to hold them accountable is to sow division among those raising concerns or challenging their power. I care deeply about preventing this,” she wrote.