The resignation of Caitlin Kalinowski, OpenAI’s head of robotics, on March 7, 2026, was not a quiet departure. She did not cite personal reasons or a better opportunity. She stated plainly that the company’s Pentagon deal had been “rushed without the guardrails defined” and that her decision was “about principle, not people.” That language matters. It signals something the official narrative has tried to obscure: OpenAI’s leadership is fractured over defense work, and Sam Altman’s commercial ambitions are overriding internal objections.
The Resignation Exposes a Governance Crisis, Not an Ethics Disagreement
Kalinowski joined OpenAI in November 2024 after leading augmented reality projects at Meta. She ran the robotics and consumer hardware division—a strategic arm of the company. Her exit, as TechCrunch reported, came in direct response to OpenAI’s agreement with the Pentagon to deploy AI models in classified military networks. She objected to two specific risks: surveillance of Americans without judicial oversight and lethal autonomous systems without human authorization. The Indian Express quoted her emphasizing that the deal was announced before those guardrails were clearly defined.
Altman has acknowledged the optics problem. In statements to CNBC, Reuters, and Bloomberg, he admitted the company “shouldn’t have rushed” the deal and that it “looked opportunistic and sloppy.” The timing was damning: OpenAI announced its Pentagon agreement hours after the Trump administration directed federal agencies to stop using Anthropic’s technology. Anthropic had refused the Pentagon’s demand for unrestricted “lawful use” of its Claude model, including domestic mass surveillance and fully autonomous weapons. When those negotiations collapsed, OpenAI stepped in. Forbes covered the story as a commercial coup; the robotics chief saw it as a governance failure.
Altman’s Commercial Calculus Is Overriding Internal Objections
The board dynamics tell the real story. In November 2023, Altman was fired by the board, then reinstated within days after nearly all employees threatened to resign. An independent WilmerHale investigation found the firing resulted from “a breakdown in the relationship and loss of trust” between the board and Altman—not safety concerns. That crisis unified employees behind him. The Pentagon deal has done the opposite. CNN Business reported that many OpenAI staff “really respect” Anthropic for rejecting the Pentagon’s terms and are frustrated that OpenAI accepted a deal Altman had initially claimed would mirror Anthropic’s red lines.
MIT Technology Review characterized OpenAI’s approach as pragmatic and legal but ultimately softer than Anthropic’s moral stance. Jessica Tillipman, a government procurement law expert at George Washington University, noted that OpenAI’s contract “does not give OpenAI an Anthropic-style, free-standing right to prohibit otherwise-lawful government use.” The agreement permits use “for all lawful purposes, consistent with applicable law”—the exact phrase Anthropic refused. Decrypt reported that users are unconvinced by OpenAI’s claimed safety red lines. The backlash was measurable: Gizmodo and The Hill reported that ChatGPT app uninstalls spiked 295% day-over-day while Claude briefly topped the Apple App Store.
The Financial and Political Incentives Behind the Deal
Altman’s political repositioning adds context. The New York Times and ABC News have documented his shift from calling Trump “an unprecedented threat to America” in 2016 to donating $1 million to Trump’s inaugural committee and praising the administration’s $500 billion Stargate AI initiative at the White House. He has said that “watching [Trump] more carefully recently has really changed my perspective on him.” The Pentagon deal fits that pattern: commercial alignment with the administration’s AI priorities, regardless of internal dissent.
OpenAI has since amended the contract to add explicit anti-surveillance language. The Guardian and Los Angeles Times reported that the revised agreement specifies the AI “shall not be intentionally used for domestic surveillance of U.S. persons and nationals” and that the Department of War affirmed OpenAI’s tools will not be used by intelligence agencies such as the NSA. But amendments after the fact do not address Kalinowski’s core objection: the deal was pushed through before those protections were negotiated. The governance process failed. Altman’s commercial instincts won.
What This Actually Means
Kalinowski’s resignation is not an isolated incident. It is evidence that OpenAI’s leadership is divided over the Pentagon deal and that Altman’s commercial ambitions are overriding internal objections. The board did not block the deal. The safety team did not block it. A senior executive felt compelled to leave rather than endorse it. That is a power dynamic, not an ethics debate. The media has framed this as a principled stand against defense work. It is actually a clash between OpenAI’s commercial and government-facing teams over who controls the company’s direction. Altman has made his choice. The robotics chief has made hers.
Sources
TechCrunch | Forbes | CNBC | CNN Business | The Guardian | Los Angeles Times | Reuters | Bloomberg | MIT Technology Review | Indian Express | Decrypt | Gizmodo | The Hill | The New York Times | ABC News