When OpenAI’s board fired Sam Altman in November 2023, it believed it was doing exactly what it was constituted to do. It had independent oversight responsibilities. It had concluded the CEO was “not consistently candid.” It acted. Five days later, it reversed every one of those conclusions under employee and investor pressure. The media covered this as Altman’s triumphant return. What it actually was: the permanent destruction of independent oversight at the company building arguably the most consequential technology in human history.
What the Board Actually Did – And Why It Didn’t Matter
The board’s position in November 2023 was not ambiguous. It voted 4-1 to remove Altman, citing that he was “not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.” Board member Helen Toner, who voted to fire him, later detailed the specifics to Fortune and Reuters: the board found out about ChatGPT’s launch on Twitter. Altman had concealed his ownership stake in the OpenAI Startup Fund while presenting himself as having no financial conflicts. He had given the board “inaccurate information” about safety processes on multiple occasions. Two executives had documented what they described as psychological abuse to the board. Co-founder Ilya Sutskever, in a 2025 deposition reported by The Verge, testified that Altman told executives what they wanted to hear and provided conflicting information about company plans.
Under any standard corporate governance framework, this is a CEO you do not rehire. The board’s finding was not about performance metrics or strategic disagreements – it was about the fundamental question of whether the CEO could be trusted to tell the truth. The answer was no. And yet within five days, after 95% of OpenAI employees threatened to walk out and Microsoft – which had invested $13 billion – made its displeasure very clear, the board reversed itself. The independent investigation that followed, conducted by law firm WilmerHale, concluded Altman’s conduct “did not mandate removal” and attributed the firing to a “breakdown in the relationship and loss of trust.” The original board members who had voted to fire him – including Toner and Tasha McCauley – were gone.
The New Board Is a Different Institution
OpenAI’s reconstituted board, announced in March 2024, should be understood for what it is: a corporate governance structure rebuilt after an investor-led coup. The new independent directors – Sue Desmond-Hellmann, Nicole Seligman, and Fidji Simo – are highly credentialed executives with no track record on AI safety. Microsoft secured a nonvoting observer seat through Dee Templeton. The board now includes eight members, Altman among them: the CEO who was fired for dishonesty with the board now sits on the board that fired him.
Toner and McCauley warned publicly after departing that the changes “bode ill for OpenAI’s experiment in self-governance,” specifically identifying the return of Altman to board membership as undermining independent oversight. They emphasized the board’s original duty to provide “independent oversight and protect the company’s public-interest mission.” In May 2024, the Superalignment team’s co-lead Jan Leike resigned, writing that “safety culture and processes have taken a backseat to shiny products.” His team had been under-resourced and “struggling for compute,” he said. Days later, OpenAI disbanded the Superalignment team entirely, according to The Verge.
The Governance Collapse Runs Deeper Than Personnel
The problem isn’t just that Altman is back. It’s that the structural conditions that made the original firing possible have been systematically dismantled. The board that acted had a clear mission: prevent OpenAI from drifting from its safety-focused nonprofit mandate. It was small, independent, and willing to act against massive investor pressure. That board no longer exists. Its members were replaced by people selected after an investor revolt, and Microsoft received formal board access as part of the resolution.
When OpenAI subsequently attempted a full for-profit conversion in 2025, removing the capped-profit structure that limited investor returns to 100 times their investment, the California and Delaware Attorneys General had to intervene before modifications were secured. The internal governance mechanism that should have evaluated this decision against OpenAI’s public-interest mission was the nonprofit board, whose independence had already been compromised by the 2023 reversal. The company ultimately restructured as a Public Benefit Corporation, with Microsoft receiving a 27% stake worth approximately $135 billion, per reporting by TechCrunch.
What this means in practice is that Altman now leads an organization where the board that tried to hold him accountable was dissolved and replaced under investor pressure; the safety team’s co-lead resigned and the team itself was then disbanded; the attempted full for-profit conversion was modified only under external legal pressure; and the CEO who was found to be non-candid with the oversight body now sits on that oversight body.
What This Actually Means
The mainstream media framed Altman’s reinstatement as a victory for pragmatism over idealism – the realistic acknowledgment that OpenAI needed its CEO more than it needed governance purity. That framing got the causality backwards. What actually happened is that OpenAI’s largest investor demonstrated that its financial leverage was sufficient to override the one institutional mechanism designed to ensure the company’s technology development served the public interest rather than investor returns.
This is the story Gary Marcus has been documenting on garymarcus.substack.com – not that Altman is uniquely dishonest, but that the AI hype cycle has created conditions in which AI companies’ governance failures carry no consequence. When you are building technology this powerful and this economically transformative, governance failures are not just internal corporate dramas. The board that fired Altman was doing exactly what it was supposed to do. The five-day reversal taught every AI governance structure in the world that investor pressure beats independent oversight. That lesson has now been absorbed industry-wide.
Sources
Gary Marcus (Substack) | Fortune | Reuters | The Verge | TechCrunch