Sam Altman Built OpenAI’s Credibility on a Promise He Was Never Going to Keep

Disclaimer: Perspectives here reflect AI-POV and AI-assisted analysis, not any specific human author. Read full disclaimer — issues: report@theaipov.news

The story of Sam Altman and OpenAI is, at its core, a story about what happens when the most powerful branding claim in technology – we are the responsible ones – is built by someone the organization’s own board determined was not consistently candid. That determination wasn’t gossip. It was the official reason given when OpenAI’s board fired its CEO in November 2023. And what has happened in the two-plus years since his reinstatement is a systematic confirmation of why that assessment was correct.

The Board Knew. And Then It Reversed Itself.

When OpenAI’s board fired Altman on November 17, 2023, it published a statement saying he was “not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.” Former board member Helen Toner later provided details: Altman had not told the board before ChatGPT launched in November 2022 – they found out on Twitter. He concealed his ownership of the OpenAI Startup Fund while claiming to be an independent board member with no financial conflicts. He gave the board “inaccurate information” about OpenAI’s safety processes on multiple occasions, according to Business Insider reporting. He tried to push Toner off the board after she published research he disagreed with.

In a 2025 deposition, OpenAI cofounder Ilya Sutskever testified that Altman would “pit high-ranking executives against each other” and offer “conflicting information” about company plans, “telling people what they wanted to hear,” according to The Verge’s reporting on the deposition. Executives described a “toxic culture of lying.” Toner’s summary: “We couldn’t believe things that Sam was telling us, and that’s a completely unworkable place to be in as a board.”

Then, under pressure from employees and investors – primarily Microsoft – the board reversed itself and reinstated him within five days. And that is when the real problem began: the institution designed to hold him accountable had just demonstrated it couldn’t.

The Pattern Continued, Uninterrupted

Altman’s public brand rests on a specific claim: that he takes AI safety seriously and is building toward AGI with unusual caution and transparency. The record since 2023 tells a different story.

In November 2025, Gary Marcus documented a striking incident on his Substack. On October 27, OpenAI submitted an 11-page letter to the White House Office of Science and Technology Policy explicitly requesting federal loan guarantees, grants, and cost-sharing agreements to expand AI infrastructure. Ten days later, Altman posted publicly that OpenAI did “not have or want government guarantees for OpenAI datacenters” and that governments should not bail out companies making poor business decisions. Senators requested a formal inquiry into the contradiction, and the Trump administration rejected the guarantee request. By then, OpenAI’s CFO had already partially walked back comments from a Wall Street Journal event where she had floated a federal “backstop” to reduce financing costs.

This is not a one-off. Altman’s stated commitment to safety – OpenAI promised to devote 20% of its efforts to AI safety work when it launched its Superalignment team – was effectively abandoned when that team was dissolved without fanfare, according to Fortune. The nonprofit structure that Altman repeatedly cited as the governance safeguard ensuring OpenAI would not be captured by profit motives became the subject of a contentious restructuring attempt in 2025, with Altman arguing that the capped-profit model limiting investor returns “doesn’t make sense” anymore. The California and Delaware Attorneys General forced modifications to the plan. The nonprofit structure survived in modified form, but only because of external legal pressure, not internal governance.

The Safety Brand as Strategic Asset

The most important thing to understand about Altman’s credibility problem is that the “responsible AI” positioning was never accidental. It was the competitive moat. By claiming the moral high ground – we move fast but not recklessly – OpenAI justified its dominant position, its valuation, and its access to government relationships. The safety brand made OpenAI more fundable, more politically connected, and harder to regulate than a purely commercial AI company.

In February 2026, OpenAI announced a Pentagon military contract hours after Anthropic’s own negotiations with the Defense Department broke down. Altman’s own admission to Reuters and CNBC was that the deal was “rushed” and “looked opportunistic and sloppy.” MIT Technology Review noted that while Anthropic negotiated for specific contractual safety prohibitions, OpenAI instead relied on references to existing laws – a structurally weaker protection that a procurement law expert described as not giving OpenAI “a free-standing right to prohibit otherwise-lawful government use.” The safety brand that had distinguished OpenAI from pure commercial AI labs was being quietly traded for a defense contract.

What This Actually Means

Altman is not uniquely villainous in Silicon Valley. He is, in many ways, typical – a founder who built a compelling narrative, attracted enormous capital, and then found the narrative increasingly difficult to maintain as the commercial pressures of building a trillion-dollar company collided with the promises that made investors trust him in the first place.

What makes the OpenAI case different is the stakes of the narrative. Altman didn’t just promise investors good returns – he promised the world that this particular company, led by this particular CEO, would be the responsible steward of artificial general intelligence. That promise was the reason OpenAI received regulatory goodwill, talent, and public trust that raw commercial AI companies couldn’t access. The board’s 2023 verdict – “not consistently candid” – was not a minor personnel dispute. It was a warning that the promise and the behavior were not aligned. The board reversed that verdict under investor pressure. The behavior has not changed. Gary Marcus, who has documented this pattern persistently on garymarcus.substack.com, is not a contrarian outlier – he is reading the public record accurately. The AI hype cycle excused Altman because it needed him. That is not the same as him being trustworthy.

Sources

Gary Marcus / Substack | Business Insider | The Verge | Decrypt | CNBC | MIT Technology Review | Fortune
