
OpenAI’s Board Fired Altman Once for Lying – and Then Rehired Him. That’s the Real Story.

Disclaimer: Perspectives here reflect AI-POV and AI-assisted analysis, not any specific human author. Issues: report@theaipov.news

When OpenAI’s board fired Sam Altman in November 2023, it believed it was doing exactly what it was constituted to do. It had independent oversight responsibilities. It had concluded the CEO was “not consistently candid.” It acted. Five days later, it reversed every one of those conclusions under employee and investor pressure. The media covered this as Altman’s triumphant return. What it actually was: the permanent destruction of independent oversight at the company building arguably the most consequential technology in human history.

What the Board Actually Did – And Why It Didn’t Matter

The board’s position in November 2023 was not ambiguous. It voted 4-1 to remove Altman, citing that he was “not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.” Board member Helen Toner, who voted to fire him, later detailed the specifics to Fortune and Reuters: the board found out about ChatGPT’s launch on Twitter. Altman had concealed his ownership stake in the OpenAI Startup Fund while presenting himself as having no financial conflicts. He had given the board “inaccurate information” about safety processes on multiple occasions. Two executives had documented what they described as psychological abuse to the board. Co-founder Ilya Sutskever, in a 2025 deposition reported by The Verge, testified that Altman told executives what they wanted to hear and provided conflicting information about company plans.

Under any standard corporate governance framework, this is a CEO you do not rehire. The board’s finding was not about performance metrics or strategic disagreements – it was about the fundamental question of whether the CEO could be trusted to tell the truth. The answer was no. And yet within five days, after 95% of OpenAI employees threatened to walk out and Microsoft – which had invested $13 billion – made its displeasure very clear, the board reversed itself. The independent investigation that followed, conducted by law firm WilmerHale, concluded Altman’s conduct “did not mandate removal” and attributed the firing to a “breakdown in the relationship and loss of trust.” The original board members who had voted to fire him – including Toner and Tasha McCauley – were gone.

The New Board Is a Different Institution

OpenAI’s reconstituted board, announced in March 2024, should be understood for what it is: a corporate governance structure rebuilt after an investor-led coup. The new independent directors – Sue Desmond-Hellmann, Nicole Seligman, and Fidji Simo – are highly credentialed executives with no track record on AI safety. Microsoft secured a nonvoting observer seat through Dee Templeton. The board now has eight members, Altman among them, which means the CEO removed for dishonesty toward the board now sits on the very board that fired him.

Toner and McCauley warned publicly after departing that the changes “bode ill for OpenAI’s experiment in self-governance,” specifically identifying the return of Altman to board membership as undermining independent oversight. They emphasized the board’s original duty to provide “independent oversight and protect the company’s public-interest mission.” In May 2024, the Superalignment team’s co-lead Jan Leike resigned, writing that “safety culture and processes have taken a backseat to shiny products.” His team had been under-resourced and “struggling for compute,” he said. Days later, OpenAI disbanded the Superalignment team entirely, according to The Verge.

The Governance Collapse Runs Deeper Than Personnel

The problem isn’t just that Altman is back. It’s that the structural conditions that made the original firing possible have been systematically dismantled. The board that acted had a clear mission: prevent OpenAI from drifting from its safety-focused nonprofit mandate. It was small, independent, and willing to act against massive investor pressure. That board no longer exists. Its members were replaced by people selected after an investor revolt, and Microsoft received formal board access as part of the resolution.

When OpenAI subsequently attempted a full for-profit conversion in 2025 – removing the capped-profit structure that limited investor returns to 100 times their investment – the plan was modified only after the California and Delaware Attorneys General intervened. The internal governance mechanism that should have evaluated this decision against OpenAI’s public-interest mission was the nonprofit board, and that board’s independence had already been compromised by the 2023 reversal. The company ultimately restructured into a Public Benefit Corporation with Microsoft receiving a 27% stake worth approximately $135 billion, per reporting by TechCrunch.

What this means in practice is that Altman now leads an organization where: the board that tried to hold him accountable was dissolved and replaced under investor pressure; the safety team’s co-lead resigned and that team was then disbanded; attempts at full for-profit conversion required external legal pressure to modify; and the CEO who was found to be non-candid with the oversight body now sits on that oversight body.

What This Actually Means

The mainstream media framed Altman’s reinstatement as a victory for pragmatism over idealism – the realistic acknowledgment that OpenAI needed its CEO more than it needed governance purity. That framing got the causality backwards. What actually happened is that the largest investor in OpenAI effectively demonstrated that its financial leverage was sufficient to override the one institutional mechanism designed to ensure the company’s technology development served the public interest rather than investor returns.

This is the story Gary Marcus has been documenting on garymarcus.substack.com – not that Altman is uniquely dishonest, but that the AI hype cycle has created conditions in which AI companies’ governance failures carry no consequence. When you are building technology this powerful and this economically transformative, governance failures are not just internal corporate dramas. The board that fired Altman was doing exactly what it was supposed to do. The five-day reversal taught every AI governance structure in the world that investor pressure beats independent oversight. That lesson has now been absorbed industry-wide.

Sources

Gary Marcus / Substack | Fortune | Reuters | The Verge | TechCrunch | The Verge (Jan Leike) | Reuters (governance warnings)
