OpenAI CEO Sam Altman finds himself at the center of a firestorm in early 2026, facing mounting public scrutiny and an exodus of senior executives over his leadership and business decisions. The latest crisis stems from a hastily announced $200 million contract with the U.S. Department of Defense, compounded by severe criticism from prominent AI researchers regarding his integrity. As public backlash grows and legal challenges mount, a fundamental question is dominating tech industry discourse: has OpenAI’s stated mission to benefit humanity been entirely subsumed by profit and power?
The Pentagon Deal and the Ethics of Military AI
The catalyst for the most recent wave of outrage was OpenAI’s decision to strike a deal with the Pentagon on February 28, 2026. The timing was heavily scrutinized, as the agreement was announced mere hours after rival AI firm Anthropic publicly rejected similar contract terms and was subsequently blacklisted by the Trump administration as a “supply chain risk.” Anthropic’s CEO, Dario Amodei, had demanded strict contractual guarantees preventing their AI from being used for domestic mass surveillance or integrated into autonomous weapons systems—terms the Pentagon refused. By stepping in immediately where Anthropic drew a hard ethical line, Altman sparked accusations of opportunistic and hypocritical behavior, especially since he had publicly claimed earlier that day that OpenAI shared Anthropic’s “red lines.”
Facing intense backlash, Altman was forced into damage control. During a tense internal all-hands meeting, he reportedly called the public reaction “really painful” and admitted that the deal “looked opportunistic and sloppy.” He subsequently announced that OpenAI was hastily amending the contract to include explicit language prohibiting the intentional use of their models for domestic surveillance of U.S. citizens by intelligence agencies like the NSA. However, for many within the company, the damage was already done.
Executive Exodus and Internal Turmoil
The ethical compromises perceived in the Pentagon deal triggered a high-profile wave of resignations at OpenAI. Most notably, Caitlin Kalinowski, the head of robotics and consumer hardware, resigned in early March. In a pointed public statement, she criticized the deployment of models on classified military networks without clearly defined guardrails, stating that such decisions “deserved more deliberation than they got.” She was joined by VP of Research Max Schwarzer, who, tellingly, left for rival Anthropic.
These resignations followed an earlier wave of senior departures tied to Altman’s strategic shifts. Several top researchers, including VP of Research Jerry Tworek, left the company in January 2026 after Altman issued a “code red” directive reallocating massive computing resources away from long-term, fundamental AI safety and reasoning research toward immediate commercial improvements to ChatGPT. The shift alienated researchers who felt the company was abandoning its scientific roots in favor of rapid monetization.
Gary Marcus and the Allegations of Greed
The internal strife has been amplified by external critics, most notably prominent AI researcher and author Gary Marcus. In a blistering series of publications, Marcus argued that Altman’s “greed and dishonesty are finally catching up to him.” Marcus characterized the Pentagon deal as a coordinated political and financial maneuver, pointing to its timing alongside OpenAI president Greg Brockman’s $25 million donation to a political PAC. Marcus also revived discussions of Altman’s brief firing by the OpenAI board in late 2023 for being “not consistently candid,” arguing that Altman’s questionable character poses a tangible danger to the world given the immense power of the technology he controls. The criticism contributed to a measurable consumer backlash, with reports indicating ChatGPT lost hundreds of thousands of paid subscribers in the immediate aftermath of the military contract announcement.
The Looming Legal Battle with Elon Musk
As if the ethical controversies weren’t enough, Altman is also facing a massive legal reckoning. A federal judge recently cleared the way for a jury trial in Elon Musk’s fraud lawsuit against OpenAI and Altman, scheduled for April 2026. Musk alleges that Altman deceived him and other early backers about the organization’s commitment to its nonprofit mission, essentially using their foundational funding to build a closed, deeply profitable enterprise heavily tethered to Microsoft. The recent October 2025 restructuring—which saw the nonprofit entity take a massive stake in a new Public Benefit Corporation to facilitate billions in new investment—sits at the heart of this legal battle over the company’s soul.
What To Watch
The coming months will test Sam Altman’s ability to maintain control of the AI narrative. With the Elon Musk fraud trial commencing in April and the ongoing challenge of retaining top-tier AI safety talent amid deep ethical concerns, Altman must prove that OpenAI can balance its aggressive commercial ambitions with the stringent safety and ethical guardrails demanded by both its workforce and the public.