When governments want to avoid accountability for decisions, they have historically buried them in bureaucracy – long review chains, opaque criteria, complex procurement rules that make challenge difficult and expensive. DOGE found a faster method. Use ChatGPT to make the decisions, send termination notices from a non-governmental email address, and produce an administrative record so thin that courts cannot determine whether anyone actually reviewed anything. The National Endowment for the Humanities is the proof of concept. The rest of the federal grant system is the target.
The Design Was Accountability Avoidance, Not Efficiency
The facts documented in court proceedings deserve more attention than they have received. As reported by Techdirt and verified in court filings from the Southern District of New York, DOGE officials Nate Cavanaugh and Justin Fox – described as inexperienced outsiders with no prior government service – used ChatGPT to review over 1,400 National Endowment for the Humanities grants. The review consisted of asking the AI whether each grant was related to DEI, with the answer capped at 120 characters. On the basis of those answers, previously approved federal grants were cancelled.
Fox then emailed termination notices to approximately 1,500 grantees from a non-NEH email address – a legally significant detail, because it indicates that DOGE, not NEH leadership, made the termination decisions. The federal agency whose mandate was being used had not authorised the cancellations through its own processes. The New York Times documented how DOGE initially removed identifying information from its published list of cancelled grants, making fact-checking structurally impossible, before reversing that decision after public criticism.
This is not bureaucratic sloppiness. A ChatGPT answer capped at 120 characters cannot distinguish between a grant studying racial disparities in healthcare and a DEI training programme – and the officials running this process knew that. The imprecision was the point. Broad, unauditable AI decisions maximise the number of cancellations while minimising the paper trail that would allow any individual recipient to mount a successful challenge.
The Legal Record Confirms the Challenge Problem
Federal court proceedings bear this analysis out. The Authors Guild obtained a preliminary injunction in July 2025 from Judge Colleen McMahon of the Southern District of New York, who found that grant recipients would suffer irreparable harm if the cancellations proceeded – and that the terminations appeared to violate the First Amendment and the Administrative Procedure Act by targeting grants based on recipients’ perceived viewpoints.
What the litigation also revealed is how difficult it is to challenge decisions made through AI intermediaries. In February 2026, the court granted a motion to compel discovery, overruling all of the government's assertions of privilege and ordering production of approximately 3,700 withheld or redacted documents. The court found that the government's initial administrative record was incomplete and that officials had not conducted good-faith reviews. When the record of a government decision consists primarily of ChatGPT outputs and emails from personal accounts, the normal mechanisms of administrative accountability – appeal, review, challenge – become functionally inoperable.
Each affected grant recipient faces the same problem: to challenge their specific cancellation, they need to understand the reasoning behind it. When the reasoning is a 120-character AI output generated by someone with no government authority to make the decision, there is no reasoning to challenge. The AI layer was not chosen because it was better at evaluating grants than human reviewers. It was chosen because it creates a decision trail nobody can meaningfully interrogate.
Elon Musk’s DOGE and the Structural Incentive
Elon Musk’s DOGE adds a structural dimension that the New York Times documentation of the NEH process only begins to illuminate. DOGE’s published accounting of savings was, as the Times separately reported, riddled with errors – including credits for cancelling contracts already completed years earlier. The accountability structure of DOGE itself mirrors the accountability structure of the ChatGPT grant review: opaque outputs, no auditable methodology, aggressive resistance to disclosure.
This consistency is not accidental. An organisation designed to avoid accountability will naturally gravitate toward tools that maximise output while minimising the paper trail that enables oversight. ChatGPT in this context is not an efficiency tool. It is a decision-laundering tool – one that converts a politically directed outcome (eliminating DEI-adjacent funding) into something that resembles an objective technical process. The Trump administration’s simultaneous decision to blacklist Anthropic and mandate government use of OpenAI’s ChatGPT means the tool doing the laundering is now baked into federal procurement infrastructure.
What This Actually Means
The NEH ChatGPT grant review is not a one-off. It is a template. DOGE has now demonstrated that generative AI can be used to make binding federal funding decisions at scale, that the resulting paper trail is thin enough to survive initial legal challenge, and that the process is fast enough to disburse – or cancel – billions of dollars before courts can fully intervene.
Judge McMahon’s injunction is a speed bump, not a permanent barrier. The litigation will continue for years. In the meantime, the administrative record produced by AI-intermediated decisions remains structurally opaque, and the roughly 1,500 grant recipients who received termination notices from a non-government email address are left navigating a legal system built for human accountability, applied to decisions designed to avoid it. The AI layer was chosen precisely because it makes accountability structurally impossible. The New York Times documented it. The courts are beginning to confirm it. The only question now is which federal programme is next.
Sources
The New York Times | Techdirt | Bloomberg Law | Authors Guild | The New York Times | NPR