The names are already fading. Caitlin Kalinowski. Max Schwarzer. The researchers who signed letters, the alignment engineers who publicly backed a rival company. When OpenAI struck its Pentagon deal in late February, the loudest fallout was not in the press headlines or the app store rankings. The real casualties were the people inside OpenAI who had spent years building the safety systems now quietly subordinated to national security law.
The Deal Did Not Just Change What OpenAI Does – It Changed What Its Safety Work Means
OpenAI’s contract with the Department of Defense, announced February 28, 2026, allows the Pentagon to deploy OpenAI’s AI models on classified networks for “all lawful purposes.” That phrase – “all lawful purposes” – is the crux of everything. It is not a loophole. It is the architecture of the entire arrangement.
When critics pushed back, OpenAI amended the contract to add explicit prohibitions on domestic surveillance of U.S. persons and on autonomous weapons. Sam Altman admitted the original deal was rushed and looked opportunistic and sloppy. But the more substantive problem remained: OpenAI’s protections are contractual, not operational. As legal expert Jessica Tillipman explained, the deal does not give OpenAI a free-standing right to prohibit otherwise-lawful government use. In practice, OpenAI’s prohibitions can only restate what the law already forbids.
Which means the safety review process that OpenAI’s researchers built – the Preparedness Framework, the Safety and Security Committee, the red-line evaluations – can constrain only what the law already prohibits. For everything the law permits, or simply has not addressed yet, the Pentagon has full discretion.
Altman Admitted It Plainly: OpenAI Cannot Control How the Military Uses Its AI
The Guardian reported in March 2026 that Altman acknowledged the fundamental constraint directly: “You do not get to make operational decisions.” This is not spin or hedging – it is an accurate description of how government contracts work. Once classified deployment begins, OpenAI’s forward-deployed engineers can observe but not override. The safety stack belongs to OpenAI; the operational decisions belong to the Pentagon.
This is exactly what Kalinowski meant when she called it a governance concern first and foremost. In her LinkedIn statement before departing, she wrote that surveillance of Americans without judicial oversight and lethal autonomy without human authorization were lines that deserved more deliberation than they got. She was not objecting to military AI in principle. She was objecting to the speed – the fact that commitments were made before the governance architecture was defined.
Max Schwarzer, OpenAI’s VP of Research, left the company the same day the deal was announced, reportedly moving to Anthropic. Leo Gao, an OpenAI alignment researcher, called the safety amendments window dressing. Nearly 900 current and former employees at OpenAI and Google signed an open letter opposing autonomous weapons and surveillance use. These are not fringe voices – they are the people whose job it was to make OpenAI’s safety commitments credible.
What OpenAI Traded Away Without Announcing It
OpenAI’s published Preparedness Framework, updated in April 2025, establishes a rigorous internal process for evaluating frontier model risks before deployment. It covers biological and chemical capabilities, cybersecurity, autonomous AI replication, and nuclear threats. The Safety and Security Committee – chaired by a Carnegie Mellon professor, with a retired Army general and a former Sony general counsel as members – has formal authority to delay model releases until safety concerns are addressed.
None of that process applies to what happens after the model is handed to the Pentagon for classified deployment. OpenAI can run the most thorough pre-deployment safety evaluation in the industry, and the moment the model enters a classified DoD system, that evaluation becomes irrelevant to operational use. As MIT Technology Review noted, OpenAI’s approach is ultimately softer on the Pentagon than the terms Anthropic demanded – because Anthropic insisted on free-standing contractual prohibitions, while OpenAI deferred to existing law.
The distinction matters enormously in practice. Anthropic’s position was: these uses are prohibited regardless of whether the law addresses them. OpenAI’s position is: these uses are prohibited if the law already prohibits them. Any use the law is silent on – autonomous target selection algorithms, predictive behavioral profiling, AI-assisted interrogation analysis – falls outside OpenAI’s safety framework once it is inside a classified military system.
Business Insider’s coverage captured the fallout but understated the real damage: the internal coherence of OpenAI’s safety project is broken. Every published safety commitment OpenAI has made is now conditional on national security carve-outs that OpenAI’s own safety researchers cannot audit, override, or even see.
What This Actually Means
OpenAI’s safety researchers did not lose a policy argument. They lost the institutional premise that made their work meaningful. The Preparedness Framework, the SSC, the alignment research program – all of it was built on the assumption that OpenAI would control deployment decisions. The Pentagon deal broke that assumption cleanly and deliberately.
The people who resigned or spoke out are not being melodramatic. They understand something the press coverage mostly missed: when a classified military contract contains carve-outs that the company’s own safety team cannot review, every safety commitment the company makes going forward has an asterisk attached to it. OpenAI is now an AI lab with published safety standards and a separate, parallel deployment track where those standards do not apply.
Kalinowski was right that this was a governance failure. But it was a governance failure by design – not an accident, not a rush job that could be fixed with amended contract language. The architecture was chosen. The researchers who built the safety systems are the quiet casualties of that choice.
Sources
Business Insider | TechCrunch | MIT Technology Review | The Guardian | CNBC | The New York Times