There is a principle in institutional governance that rarely gets articulated but everyone understands: the behaviour that gets rewarded becomes the norm. Sam Altman misled his board, was fired for it, and was reinstated within five days. He then built OpenAI into one of the most valuable private companies in history, secured billions in government contracts, and faced no lasting professional consequence of any kind. Every AI CEO working today has absorbed that lesson. The consequences of that lesson will be far larger than any single Pentagon deal.
The Precedent That Sets Itself Without Anyone Voting For It
When Altman was fired in November 2023, the official reason was that he had not been "consistently candid" with the board – a finding backed by specific evidence. According to former board member Helen Toner, as reported by The Verge, Altman withheld his ownership of the OpenAI Startup Fund while claiming to be an independent director. He gave inaccurate information about the company's safety processes on multiple occasions. He allowed the board to learn about major product launches via Twitter. Co-founder Ilya Sutskever testified that Altman had been manipulating executives for over a year.
None of it mattered. Investor pressure from Microsoft and others forced his reinstatement within days. The governance structure that tried to hold him accountable was subsequently restructured to reduce its power. The employees who signed letters demanding his removal mostly stayed on. And Altman went on to preside over a company now valued in the hundreds of billions of dollars and embedded in U.S. military infrastructure.
From the perspective of any other AI CEO watching this unfold, the signal could not be clearer: the rules that appear to govern AI company leadership are not actually enforced. Misleading oversight bodies – whether internal boards, investors, or regulators – carries no meaningful professional cost as long as the product is performing.
Google DeepMind, Anthropic, and the Governance Vacuum
This matters because AI governance frameworks everywhere depend on CEO honesty with oversight bodies. The Hiroshima AI Process, the EU AI Act’s compliance mechanisms, the voluntary commitments signed at the 2023 and 2024 AI Safety Summits – all of these assume that AI company leaders will provide accurate information to regulators and boards about what their systems can do, what safety measures are in place, and what risks are being taken.
The AI Safety Index published by the Future of Life Institute in Winter 2025 gave both Anthropic and OpenAI a C+ on overall safety governance. Google DeepMind scored a C. These are not reassuring grades – and they reflect a sector where, as the World Benchmarking Alliance found, fewer than 10 percent of major tech companies explain their internal AI governance mechanisms at all. The sector is already operating largely on trust; Altman's unscathed career demonstrates what happens when that trust is abused.
Consider what Google DeepMind faces now. It has published AI Principles since 2018, maintains formal ethics review infrastructure, and presents itself as the responsible alternative to OpenAI's move-fast culture. But Anthropic CEO Dario Amodei publicly dismissed OpenAI's Pentagon deal as safety theatre – and then, according to TechCrunch, began quietly re-opening negotiations with the Pentagon himself. The moral positioning turned out to be temporary. The competitive pressure turned out to be permanent. That is exactly the dynamic Altman's unpunished governance failures make more likely across the industry.
What Happens When the Pattern Normalises
The scenario that should concern regulators is not the one where Sam Altman personally does something harmful. It is the one where the absence of consequences for his behaviour normalises a governance culture across the AI sector in which candour with oversight bodies is treated as optional when commercially inconvenient.
That normalisation is already visible. As CNN Business reported in February 2026, AI safety researchers have been departing OpenAI, Anthropic, and other labs in significant numbers, citing concerns that commercial pressures are overriding safety priorities. At OpenAI specifically: the head of safety research on mental health left for Anthropic, the vice president of product policy was terminated after raising concerns about a new product feature, and the company disbanded its mission alignment team. These are not isolated events – they are what a governance culture looks like once it has absorbed the message that accountability is negotiable.
Gary Marcus, writing on his Substack, has been documenting Altman’s pattern of dishonesty since before most mainstream technology journalists were willing to name it. His argument – that the pattern that produced the 2023 firing is the same pattern that produced the Pentagon deal – is not moralising. It is an observation about institutional incentives. When the industry’s most prominent leader faces no meaningful consequence for misleading his oversight structure, every other leader in that industry receives permission to do the same.
What This Actually Means
The dangerous part of Altman getting away with it is not what Altman will do next. It is what the other twenty AI CEOs watching him are now licensed to do. Regulators in Brussels, Westminster, and Washington are currently designing governance frameworks predicated on the assumption that AI company leaders will communicate honestly with oversight bodies. Altman’s unscathed career demonstrates that assumption is not guaranteed by any mechanism currently in place.
Congress has not passed binding AI legislation. The voluntary safety commitments lack enforcement mechanisms. The internal boards at major AI labs have been structurally weakened in the wake of what happened at OpenAI in 2023. If the next major AI governance failure follows Altman’s template – mislead early, move fast, let the product’s success neutralise the scrutiny – there will be no structural safeguard left to catch it. The dangerous part is not that Altman got away with it. The dangerous part is that everyone knows he got away with it.
Sources
Gary Marcus Substack
The Verge
TechCrunch
CNN Business
Future of Life Institute
World Benchmarking Alliance