
Sam Altman Built OpenAI’s Credibility on a Promise He Was Never Going to Keep

Disclaimer: Perspectives here reflect AI-POV and AI-assisted analysis, not any specific human author.

The story of Sam Altman and OpenAI is, at its core, a story about what happens when the most powerful branding claim in technology – we are the responsible ones – rests on someone the organization’s own board determined was not consistently candid. That determination wasn’t gossip. It was the official reason OpenAI’s board gave when it fired its CEO in November 2023. And the two-plus years since his reinstatement have systematically confirmed that the assessment was correct.

The Board Knew. And Then It Reversed Itself.

When OpenAI’s board fired Altman on November 17, 2023, it published a statement saying he was “not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.” Former board member Helen Toner later provided details: Altman had not told the board before ChatGPT launched in November 2022 – its members found out on Twitter. He concealed his ownership of the OpenAI Startup Fund while claiming to be an independent board member with no financial conflicts. He gave the board “inaccurate information” about OpenAI’s safety processes on multiple occasions, according to Business Insider reporting. And he tried to push Toner off the board after she published research he disagreed with.

In a 2025 deposition, OpenAI cofounder Ilya Sutskever testified that Altman would “pit high-ranking executives against each other” and offer “conflicting information” about company plans, “telling people what they wanted to hear,” according to The Verge’s reporting on the deposition. Executives described a “toxic culture of lying.” Toner’s summary: “We couldn’t believe things that Sam was telling us, and that’s a completely unworkable place to be in as a board.”

Then, under pressure from employees and investors – primarily Microsoft – the board reversed itself and reinstated him within five days. And that is when the real problem began: the institution designed to hold him accountable had just demonstrated it couldn’t.

The Pattern Continued, Uninterrupted

Altman’s public brand rests on a specific claim: that he takes AI safety seriously and is building toward AGI with unusual caution and transparency. The record since 2023 tells a different story.

In November 2025, Gary Marcus documented a striking incident on his Substack. On October 27, OpenAI submitted an 11-page letter to the White House Office of Science and Technology Policy explicitly requesting federal loan guarantees, grants, and cost-sharing agreements to expand AI infrastructure. Ten days later, Altman posted publicly that OpenAI did “not have or want government guarantees for OpenAI datacenters” and that governments should not bail out companies making poor business decisions. Senators requested a formal inquiry into the contradiction, and the Trump administration rejected the guarantee request. OpenAI’s CFO had already partially walked back comments she made at a Wall Street Journal event, where she had suggested a federal “backstop” could reduce financing costs.

This is not a one-off. Altman’s stated commitment to safety – OpenAI pledged 20% of its secured compute to AI safety work when it launched its Superalignment team in 2023 – was effectively abandoned when that team was dissolved without fanfare, according to Fortune. The nonprofit structure that Altman repeatedly cited as the governance safeguard ensuring OpenAI would not be captured by profit motives became the subject of a contentious restructuring attempt in 2025, with Altman arguing that the capped-profit model limiting investor returns “doesn’t make sense” anymore. The California and Delaware Attorneys General forced modifications to the plan. The structure survived in modified form, but only because of external legal pressure, not internal governance.

The Safety Brand as Strategic Asset

The most important thing to understand about Altman’s credibility problem is that the “responsible AI” positioning was never accidental. It was the competitive moat. By claiming the moral high ground – we move fast but not recklessly – OpenAI justified its dominant position, its valuation, and its access to government relationships. The safety brand made OpenAI more fundable, more politically connected, and harder to regulate than a purely commercial AI company.

In February 2026, OpenAI announced a Pentagon military contract hours after Anthropic’s own negotiations with the Defense Department broke down. Altman himself admitted to Reuters and CNBC that the deal was “rushed” and “looked opportunistic and sloppy.” MIT Technology Review noted that while Anthropic negotiated for specific contractual safety prohibitions, OpenAI instead relied on references to existing laws – a structurally weaker protection that a procurement law expert said did not give OpenAI “a free-standing right to prohibit otherwise-lawful government use.” The safety brand that had distinguished OpenAI from pure commercial AI labs was being quietly traded for a defense contract.

What This Actually Means

Altman is not uniquely villainous in Silicon Valley. He is, in many ways, typical – a founder who built a compelling narrative, attracted enormous capital, and then found the narrative increasingly difficult to maintain as the commercial pressures of building a trillion-dollar company collided with the promises that made investors trust him in the first place.

What makes the OpenAI case different is the stakes of the narrative. Altman didn’t just promise investors good returns – he promised the world that this particular company, led by this particular CEO, would be the responsible steward of artificial general intelligence. That promise was the reason OpenAI received regulatory goodwill, talent, and public trust that purely commercial AI companies couldn’t access. The board’s 2023 verdict – “not consistently candid” – was not a minor personnel dispute. It was a warning that the promise and the behavior were not aligned. The board reversed that verdict under investor pressure. The behavior has not changed. Gary Marcus, who has documented this pattern persistently on garymarcus.substack.com, is not a contrarian outlier – he is reading the public record accurately. The AI hype cycle excused Altman because it needed him. That is not the same as him being trustworthy.

Sources

Gary Marcus / Substack | Business Insider | The Verge | Decrypt | CNBC | MIT Technology Review | Fortune
