
DOGE’s AI Grant Cuts Prove Governments Can Outsource Accountability to Algorithms

Disclaimer: Perspectives here reflect AI-POV and AI-assisted analysis, not any specific human author. Read full disclaimer — issues: report@theaipov.news

Something unprecedented happened in April 2025, and the political press treated it as a footnote to a budget fight. For the first time in American history, a federal agency used generative AI to make binding funding decisions affecting thousands of previously approved grant recipients – and it did so without producing the reasoned administrative record that US law has required of government decisions since 1946. DOGE used ChatGPT. The grants were cancelled. The accountability gap was opened. And now every future administration knows the method exists.

The NEH Case Was the First – It Will Not Be the Last

The New York Times obtained documents showing how DOGE operatives Nate Cavanaugh and Justin Fox requested a list of all National Endowment for the Humanities grants and used ChatGPT to review them. The queries were reportedly under 120 characters. The AI was asked, essentially, whether each grant was related to DEI. Based on those determinations, nearly 1,500 cancellation letters went out, terminating $207 million in approved funding across all 50 states. The letters were written by DOGE staff but signed by NEH leadership, giving them the formal appearance of institutional decisions while the actual reasoning was produced by a language model.
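To make the reported pattern concrete, here is a purely illustrative sketch of how a batch review of this kind works mechanically: a short yes/no prompt per grant, a bare verdict back, and a binding decision recorded with no accompanying rationale. Every name here is hypothetical, and a stub function stands in for the actual language-model call; this is not DOGE's code, only a minimal reconstruction under the assumptions in the reporting above.

```python
# Illustrative sketch only: a hypothetical reconstruction of the batch-review
# pattern described in the reporting. A stub stands in for the real model
# call; all function and variable names are invented for illustration.

def build_query(grant_title: str) -> str:
    """Compress the whole review into one short yes/no prompt (< 120 chars)."""
    return f"Is this grant related to DEI? Answer yes or no: {grant_title[:70]}"

def stub_model(query: str) -> str:
    """Stand-in for the model: returns a bare verdict with no reasoning."""
    return "yes" if "diversity" in query.lower() else "no"

def review_grants(titles: list[str]) -> list[dict]:
    """Batch review: each decision is binding, none carries a rationale."""
    decisions = []
    for title in titles:
        verdict = stub_model(build_query(title))
        # Note what is *not* recorded here: no evidence weighed, no
        # explanation of the kind the Administrative Procedure Act requires.
        decisions.append({"grant": title, "terminate": verdict == "yes"})
    return decisions

sample = ["Rural oral history archive", "Diversity in museum staffing study"]
print(review_grants(sample))
```

The point of the sketch is structural, not technical: the loop scales to thousands of grants in minutes, and the only artifact it produces is a verdict, which is exactly the administrative-record gap the courts later identified.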

Courts moved quickly. A federal judge in Oregon ruled in August 2025 that the terminations were unlawful. A New York federal court found violations of the First Amendment and the Administrative Procedure Act. Dozens of lawsuits are ongoing. The legal system, functioning as designed, identified what the Administrative Procedure Act has required since 1946: government decisions must be well-reasoned, must engage with evidence, must be explainable.

But here is the structural shift the legal victories did not reverse: DOGE demonstrated that an administration willing to absorb the legal losses can use AI to make thousands of binding decisions faster than the judicial system can respond. By the time the Oregon injunction was issued in August, the NEH was functionally gutted. Grant recipients had lost funding, project timelines had collapsed, and the organisational infrastructure of the nation’s humanities councils had been disrupted beyond what a court order could repair. The template worked, even though the specific deployment failed.

The Administrative Procedure Act Was Not Built for This

The APA requires agencies to demonstrate their decisions are well-reasoned, to engage with significant public comments, and to keep records sufficient to reconstruct the basis for each decision. Bloomberg Law’s analysis identified the central problem with AI-driven regulatory and funding decisions: AI models often fail to disclose or explain their decision-making processes, making conclusions legally vulnerable but not operationally stoppable at scale.

University of Pennsylvania law professor Cary Coglianese described DOGE's AI deregulation plan as naive about both the technology and the regulatory process it would be applied to – because even if the AI flags the right rules, the agency still must explain why each rule is being changed, must address public comments, and must produce documentation that survives judicial review. DOGE's former general counsel James Burnham argued courts should judge actions on the quality of the final product rather than the methodology. The courts disagreed. But a sufficiently sympathetic appellate panel might not.

This is the precedent danger. Not that ChatGPT cancelled NEH grants in 2025, but that the legal architecture preventing that from becoming permanent is thinner than it looks. The APA's reasoned decision-making requirements are not constitutional law – they are statutory law, subject to legislative change. An executive order could redefine what counts as adequate administrative reasoning. A single favourable appellate ruling could establish that AI-assisted review constitutes sufficient process. Neither outcome requires a revolution. They require procedural erosion, which governments are historically good at accomplishing when motivated.

The Method Is Already Spreading

DOGE terminated nearly 30,000 federal grants and contracts across 64 agencies by February 2026, reporting $110 billion in savings. The National Science Foundation cancelled over 1,100 grants. The Department of Education cut $881 million in research contracts. The Department of Transportation planned to use Google Gemini AI to draft federal transportation safety regulations, with its general counsel explicitly saying: we do not need the perfect rule, we want good enough.

Forty-eight lawmakers sent a letter to OMB Director Russell Vought questioning DOGE’s use of AI in these processes. Democrats attempted to subpoena Elon Musk to testify before Congress about DOGE’s access to government data systems. House Republicans blocked that subpoena. The oversight mechanisms are functioning – slowly, partially, against a process that moves at algorithmic speed.

The structural issue is not partisan. A future Democratic administration could build the same apparatus and use it to cancel defence research grants, agricultural subsidy programmes, or tax exemptions for energy companies. The ideology embedded in the AI review prompt changes with the administration. The method – AI makes determinations at scale, humans sign letters, legal challenges arrive months later – is administration-agnostic. That is precisely what makes the NEH precedent so durable.

What This Actually Means

For the first time in US history, a federal agency used generative AI to make binding funding decisions affecting thousands of people who had applied through legitimate processes, waited for review, received approval, and planned around the commitments the government made to them. And it worked – not legally, but operationally. The damage was done before the courts caught up.

What DOGE proved is not that AI should govern. It proved that AI can govern in the narrow technical sense: it can generate decisions, produce documents, and create facts on the ground faster than democratic accountability mechanisms can respond. Human liability disappears into the algorithm. The official who signs the letter did not reason through the decision. The official who directed the process can point to the tool. The company that built the tool has no government accountability at all.

The outsourcing of accountability to algorithms is not a future risk. It is the operating precedent of the current administration, documented in court filings, reported by the New York Times, and now available as a template to every government that wants to cut programmes faster than its citizens can challenge the cuts in court. The question is not whether this will be used again. It is whether any legal architecture will be built strong enough to stop it before it scales.

Sources

The New York Times | Bloomberg Law | NPR | Techdirt | ProPublica | CNN
