Something unprecedented happened in April 2025, and the political press treated it as a footnote to a budget fight. For the first time in American history, a federal agency used generative AI to make binding funding decisions affecting thousands of previously approved grant recipients – and it did so without producing the reasoned administrative record that US law has required of government decisions since 1946. DOGE used ChatGPT. The grants were cancelled. The accountability gap was opened. And now every future administration knows the method exists.
The NEH Case Was the First – It Will Not Be the Last
The New York Times obtained documents showing how DOGE operatives Nate Cavanaugh and Justin Fox requested a list of all National Endowment for the Humanities grants and used ChatGPT to review them. The queries were reportedly under 120 characters. The AI was asked, essentially, whether each grant was related to DEI. Based on those determinations, nearly 1,500 cancellation letters went out, terminating $207 million in approved funding across all 50 states. The letters were written by DOGE staff but signed by NEH leadership, giving them the formal appearance of institutional decisions while the actual reasoning was produced by a language model.
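To make the mechanics concrete, here is a minimal illustrative sketch of what a bulk review loop of the kind the Times describes could look like. It is not DOGE's code: the prompt, the model choice, and the grants.csv input file are invented for illustration, and the OpenAI Python SDK is assumed. The only details drawn from the reporting are the shape of the process – a short per-grant query and a yes/no determination that feeds a termination list.

```python
# Illustrative sketch only: NOT DOGE's actual code or prompt. Assumes the
# OpenAI Python SDK and a hypothetical grants.csv with "id" and "title" columns.
import csv
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

flagged = []
with open("grants.csv", newline="") as f:
    for row in csv.DictReader(f):
        # A terse yes/no classification question, in the spirit of the reported
        # sub-120-character queries; the exact wording DOGE used is not public.
        prompt = f"Is this grant DEI-related? Answer YES or NO: {row['title']}"
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # hypothetical model choice
            messages=[{"role": "user", "content": prompt}],
        )
        answer = response.choices[0].message.content.strip().upper()
        if answer.startswith("YES"):
            flagged.append(row["id"])

# A one-word model output becomes the entire basis for each flagged grant --
# there is no per-decision reasoning for a reviewing court to examine.
print(f"{len(flagged)} grants flagged for termination")
```

The point of the sketch is the asymmetry it makes visible: a loop like this can work through thousands of records in an afternoon, while each flagged record carries no administrative reasoning at all.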
Courts moved quickly. A federal judge in Oregon ruled in August 2025 that the terminations were unlawful. A New York federal court found violations of the First Amendment and the Administrative Procedure Act. Dozens of lawsuits are ongoing. The legal system, functioning as designed, identified what the Administrative Procedure Act has required since 1946: government decisions must be well-reasoned, must engage with evidence, must be explainable.
But here is the structural shift the legal victories did not reverse: DOGE demonstrated that an administration willing to absorb the legal losses can use AI to make thousands of binding decisions faster than the judicial system can respond. By the time the Oregon injunction was issued in August, the NEH was functionally gutted. Grant recipients had lost funding, project timelines had collapsed, and the organisational infrastructure of the nation’s humanities councils had been disrupted beyond what a court order could repair. The template worked, even though the specific deployment failed.
The Administrative Procedure Act Was Not Built for This
The APA requires agencies to demonstrate their decisions are well-reasoned, to engage with significant public comments, and to keep records sufficient to reconstruct the basis for each decision. Bloomberg Law’s analysis identified the central problem with AI-driven regulatory and funding decisions: AI models often fail to disclose or explain their decision-making processes, making conclusions legally vulnerable but not operationally stoppable at scale.
University of Pennsylvania law professor Cary Coglianese described DOGE's AI deregulation plan as naive about both the technology and the regulatory process – because even if the AI flags the right rules, the agency still must explain why each rule is being changed, must address public comments, and must produce documentation that survives judicial review. DOGE's former general counsel James Burnham argued that courts should judge agency actions on the quality of the final product rather than the methodology behind it. The courts disagreed. But a sufficiently sympathetic appellate panel might not.
This is the precedent danger. Not that ChatGPT cancelled NEH grants in 2025. But that the legal architecture preventing that from becoming permanent is thinner than it looks. The APA's reasoned decision-making requirements are not constitutional law – they are statutory law, subject to legislative change. An executive order could redefine what counts as adequate administrative reasoning. A single favourable appellate ruling could establish that AI-assisted review constitutes sufficient process. Neither outcome requires a revolution. They require procedural erosion, which governments are historically good at accomplishing when motivated.
The Method Is Already Spreading
DOGE terminated nearly 30,000 federal grants and contracts across 64 agencies by February 2026, reporting $110 billion in savings. The National Science Foundation cancelled over 1,100 grants. The Department of Education cut $881 million in research contracts. The Department of Transportation planned to use Google Gemini AI to draft federal transportation safety regulations, with its general counsel saying explicitly that the department did not need the perfect rule, just one that was good enough.
Forty-eight lawmakers sent a letter to OMB Director Russell Vought questioning DOGE’s use of AI in these processes. Democrats attempted to subpoena Elon Musk to testify before Congress about DOGE’s access to government data systems. House Republicans blocked that subpoena. The oversight mechanisms are functioning – slowly, partially, against a process that moves at algorithmic speed.
The structural issue is not partisan. A future Democratic administration could build the same apparatus and use it to cancel defence research grants, agricultural subsidy programmes, or tax exemptions for energy companies. The ideology embedded in the AI review prompt changes with the administration. The method – AI makes determinations at scale, humans sign letters, legal challenges arrive months later – is administration-agnostic. That is precisely what makes the NEH precedent so durable.
What This Actually Means
For the first time in US history, a federal agency used generative AI to make binding funding decisions affecting thousands of people who had applied through legitimate processes, waited for review, received approval, and planned around the commitments the government made to them. And it worked – not legally, but operationally. The damage was done before the courts caught up.
What DOGE proved is not that AI should govern. It proved that AI can govern in the narrow technical sense: it can generate decisions, produce documents, and create facts on the ground faster than democratic accountability mechanisms can respond. Human liability disappears into the algorithm. The official who signs the letter did not reason through the decision. The official who directed the process can point to the tool. The company that built the tool has no government accountability at all.
The outsourcing of accountability to algorithms is not a future risk. It is the operating precedent of the current administration, documented in court filings, reported by the New York Times, and now available as a template to every government that wants to cut programmes faster than its citizens can challenge the cuts in court. The question is not whether this will be used again. It is whether any legal architecture will be built strong enough to stop it before it scales.
Sources
The New York Times | Bloomberg Law | NPR | Techdirt | ProPublica | CNN