
OpenAI’s Safety Researchers Are the Quiet Casualties of the Pentagon Deal


The names are already fading. Caitlin Kalinowski. Max Schwarzer. The researchers who signed letters, the alignment engineers who publicly backed a rival company. When OpenAI struck its Pentagon deal in late February, the real casualties were not the press headlines or the app store rankings. They were the people inside OpenAI who spent years building the safety systems now quietly subordinated to national security law.

The Deal Did Not Just Change What OpenAI Does – It Changed What Its Safety Work Means

OpenAI’s contract with the Department of Defense, announced February 28, 2026, allows the Pentagon to deploy OpenAI’s AI models on classified networks for "all lawful purposes." That phrase, "all lawful purposes," is the crux of everything. It is not a loophole. It is the architecture of the entire arrangement.

When critics pushed back, OpenAI amended the contract to add explicit prohibitions on domestic surveillance of U.S. persons and on autonomous weapons. Sam Altman admitted the original deal was rushed and looked opportunistic and sloppy. But the more substantive problem remained: OpenAI’s protections are contractual, not operational. As legal expert Jessica Tillipman explained, the deal does not give OpenAI a free-standing right to prohibit otherwise-lawful government use. OpenAI can only prevent the Pentagon from breaking laws that already exist.

That means the safety review process OpenAI’s researchers built (the Preparedness Framework, the Safety and Security Committee, the red-line evaluations) covers only what the law already prohibits. For everything the law permits, or simply has not addressed yet, the Pentagon has full discretion.

Altman Admitted It Plainly: OpenAI Cannot Control How the Military Uses Its AI

The Guardian reported in March 2026 that Altman acknowledged the fundamental constraint directly: "You do not get to make operational decisions." This is not spin or hedging; it is an accurate description of how government contracts work. Once classified deployment begins, OpenAI’s forward-deployed engineers can observe but not override. The safety stack belongs to OpenAI; the operational decisions belong to the Pentagon.

This is exactly what Kalinowski meant when she called it a governance concern first and foremost. In her LinkedIn statement before departing, she wrote that surveillance of Americans without judicial oversight and lethal autonomy without human authorization were lines that deserved more deliberation than they got. She was not objecting to military AI in principle. She was objecting to the speed: commitments were made before the governance architecture was defined.

Max Schwarzer, OpenAI’s VP of Research, left the company the same day the deal was announced, reportedly moving to Anthropic. Leo Gao, an OpenAI alignment researcher, called the safety amendments window dressing. Nearly 900 current and former employees at OpenAI and Google signed an open letter opposing autonomous weapons and surveillance use. These are not fringe voices; they are the people whose job it was to make OpenAI’s safety commitments credible.

What OpenAI Traded Away Without Announcing It

OpenAI’s published Preparedness Framework, updated in April 2025, establishes a rigorous internal process for evaluating frontier model risks before deployment. It covers biological and chemical capabilities, cybersecurity, autonomous AI replication, and nuclear threats. The Safety and Security Committee (SSC), chaired by a Carnegie Mellon professor and including a retired Army general and a former Sony general counsel, has formal authority to delay model releases until safety concerns are addressed.

None of that process applies to what happens after the model is handed to the Pentagon for classified deployment. OpenAI can run the most thorough pre-deployment safety evaluation in the industry, and the moment the model enters a classified DoD system, that evaluation becomes irrelevant to operational use. As MIT Technology Review noted, OpenAI’s approach is ultimately softer on the Pentagon than what Anthropic demanded: Anthropic insisted on free-standing contractual prohibitions, while OpenAI deferred to existing law.

The distinction matters enormously in practice. Anthropic’s position was that these uses are prohibited regardless of whether the law addresses them. OpenAI’s position is that these uses are prohibited only if the law already prohibits them. Any use the law is silent on, such as autonomous target selection algorithms, predictive behavioral profiling, or AI-assisted interrogation analysis, falls outside OpenAI’s safety framework once it is inside a classified military system.

Business Insider’s coverage captured the fallout but understated the real damage: the internal coherence of OpenAI’s safety project is broken. Every published safety commitment OpenAI has made is now conditional on national security carve-outs that OpenAI’s own safety researchers cannot audit, override, or even see.

What This Actually Means

OpenAI’s safety researchers did not lose a policy argument. They lost the institutional premise that made their work meaningful. The Preparedness Framework, the SSC, the alignment research program: all of it was built on the assumption that OpenAI would control deployment decisions. The Pentagon deal broke that assumption cleanly and deliberately.

The people who resigned or spoke out are not being melodramatic. They understand something the press coverage mostly missed: when a classified military contract contains carve-outs that the company’s own safety team cannot review, every safety commitment the company makes going forward has an asterisk attached to it. OpenAI is now an AI lab with published safety standards and a separate, parallel deployment track where those standards do not apply.

Kalinowski was right that this was a governance failure. But it was a governance failure by design: not an accident, not a rush job that could be fixed with amended contract language. The architecture was chosen. The researchers who built the safety systems are the quiet casualties of that choice.

Sources

Business Insider | TechCrunch | MIT Technology Review | The Guardian | CNBC | The New York Times
