When DOGE operatives Nate Cavanaugh and Justin Fox needed to decide which National Endowment for the Humanities grants to cancel, they asked ChatGPT. The queries were reportedly under 120 characters. The question was roughly: is this related to DEI? The answer determined whether years of approved scholarship, community history projects, and museum programming would survive. This is not a story about humanities funding. It is a story about a new method of government that, if it survives legal challenge, will reach every federal agency that writes cheques.
The NEH Was the Proof of Concept
The New York Times obtained documents showing how AI was used to cancel the vast majority of previously approved NEH grants when DOGE moved to dismantle the agency. In April 2025, DOGE employees requested a complete list of NEH grants and issued nearly 1,500 cancellation letters, terminating approximately $207 million in annual funding across all 50 states. The letters were sent from a newly created email account, not the standard grant-making address. They were written by DOGE staff but signed by NEH leadership.
The legal consequences arrived quickly. A federal judge in Oregon ruled in August 2025 that the terminations were unlawful, writing in an 81-page opinion that the power of the purse belongs exclusively to Congress, not the President. A separate New York federal court found violations of both the First Amendment and the Administrative Procedure Act. The Authors Guild won a major court victory restoring 1,400 terminated research grants. Multiple injunctions are in place.
But here is what the legal victories missed: the methodology worked. DOGE demonstrated that a generative AI tool could process an entire agency’s grant portfolio, generate cancellation determinations at scale, and produce thousands of termination letters faster than any human bureaucracy could respond. The courts stopped that specific deployment. They did not stop the template.
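To make the mechanics concrete, the reported workflow – a sub-120-character query applied row by row to a grant list – can be sketched in a few lines. This is purely illustrative: the actual tool's prompt, model, and criteria were never published, so every name, keyword, and function here is a hypothetical stand-in, with a crude keyword match substituting for the model call.

```python
# Illustrative sketch of a bulk "flag and terminate" triage pipeline.
# All names and flag terms are hypothetical; the real DOGE tool's prompt,
# model, and criteria were not made public.

def build_query(grant_title: str) -> str:
    # The reported queries were under 120 characters, so the grant
    # description itself gets truncated to fit.
    query = f"Is this related to DEI? Grant: {grant_title}"
    return query[:120]

def mock_model(query: str,
               flag_terms=("diversity", "equity", "inclusion")) -> bool:
    # Stand-in for the generative-model call: a crude keyword match.
    # A real model would be probabilistic and unable to document its
    # reasoning -- the legal vulnerability the courts identified.
    return any(term in query.lower() for term in flag_terms)

def triage(portfolio: list[str]) -> list[str]:
    # One pass over the entire grant list produces a termination slate
    # far faster than any human review process could respond to.
    return [title for title in portfolio if mock_model(build_query(title))]

portfolio = [
    "Oral histories of Appalachian mining towns",
    "Equity and inclusion in museum programming",
]
print(triage(portfolio))  # flags only the second grant
```

The point of the sketch is its cost structure: once the loop exists, scaling from 1,500 NEH grants to an agency portfolio of any size is a change to the input list, not the method.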
The Pattern Is Already Repeating Across Federal Agencies
The National Science Foundation cancelled over 1,100 grants in April 2025, including more than 400 STEM education grants worth approximately $328 million. The Department of Education cut $881 million in research contracts from its Institute of Education Sciences. DOGE announced $900 million in cuts from the agency tracking American academic progress. By February 2026, DOGE had terminated nearly 30,000 federal grants and contracts across 64 agencies, claiming $110 billion in savings – though approximately 30% of those terminations involved grants already fully paid, generating no actual savings.
The common thread is not ideology – it is automation. The DOGE AI Deregulation Decision Tool is designed to cut 50% of approximately 200,000 federal regulations, identifying rules deemed unnecessary or legally redundant. Early pilots at HUD and the CFPB processed over 1,000 regulatory sections in under two weeks. DOGE’s own reported timeline targeted 100,000 eliminated rules by January 2026. Forbes reported DOGE was specifically deploying AI to identify Education Department spending for cuts. AP News confirmed the same method reached the agency tracking academic progress nationwide.
The New York Times’ earlier coverage of DOGE errors documented how the organisation obscured details of grant decisions on its website, with Times reporters finding federal identification numbers buried in source code. CNN’s legal analysis identified what experts called a massive risk: AI models often cannot explain their decision-making processes, making the conclusions legally vulnerable but difficult to challenge at scale, because most affected grantees lack the legal resources of the Authors Guild.
If the Legal Challenge Fails, the Template Deploys Everywhere
The NEH case is currently winding through federal courts with multiple injunctions in place. The government has appealed to the Ninth Circuit. The Supreme Court may eventually weigh in on jurisdictional questions about federal funding disputes. Legal experts at Bloomberg Law have noted that DOGE’s AI deregulation plans have “a bark bigger than their bite” in the regulatory context – because agencies must show decisions are well-reasoned, and AI models do not produce the kind of documented reasoning that survives administrative law review.
That is the specific vulnerability right now. But administrative law can change. A sufficiently friendly appellate court could rule that AI-assisted review constitutes adequate reasoning. Congress could legislate new standards for automated federal decisions. An executive order could redefine what counts as sufficient agency discretion. None of those outcomes requires a revolution. They require one favourable ruling that sets a precedent.
Elon Musk and DOGE understand this. The NEH was chosen as a target precisely because it is small, politically isolated, and defended primarily by academics, writers, and cultural organisations with limited legal infrastructure. The $207 million at stake is a rounding error in federal spending. But the precedent – that generative AI can make binding funding decisions affecting thousands of approved recipients – is worth far more than $207 million if it survives judicial review.
HHS administers $1.7 trillion in annual spending. The NSF, after its 2025 cuts, still distributes billions annually. The Department of Education touches every school district in the country. An automated grant-review and termination system that can process those portfolios at the speed DOGE deployed against NEH would represent a fundamental shift in how the federal government exercises spending power – with an algorithm making determinations that previously required human review, legal vetting, and a documented administrative record.
What This Actually Means
The fight over NEH grants is not about the humanities. The academics, museum curators, and local historians who lost their funding are collateral damage in a much larger experiment. DOGE ran a proof of concept: can AI make binding federal spending decisions faster than legal challenge can stop them? In the NEH case, the courts said no – but only after the damage was done, the agency was gutted, and the template was documented and deployed elsewhere.
The question now is not whether DOGE will try this again. It already has, across 64 agencies. The question is whether any court will establish a durable precedent that AI-assisted decisions of this scale require the same documented reasoning that human bureaucrats are required to produce. If that precedent does not hold, expect the same approach deployed against HHS grant portfolios, NSF research funding, Department of Education contracts, and any other federal programme that disburses money and can be screened by an AI query of fewer than 120 characters.
Sources
The New York Times | Techdirt | NPR | Bloomberg Law | AP News | Nextgov