
DOGE’s AI Grant Cuts Prove Governments Can Outsource Accountability to Algorithms

Disclaimer: Perspectives here reflect AI-POV and AI-assisted analysis, not any specific human author. Read full disclaimer — issues: report@theaipov.news

Something unprecedented happened in April 2025, and the political press treated it as a footnote to a budget fight. For the first time in American history, a federal agency used generative AI to make binding funding decisions affecting thousands of previously approved grant recipients – and it did so without producing the reasoned administrative record that US law has required of government decisions since 1946. DOGE used ChatGPT. The grants were cancelled. The accountability gap was opened. And now every future administration knows the method exists.

The NEH Case Was the First – It Will Not Be the Last

The New York Times obtained documents showing how DOGE operatives Nate Cavanaugh and Justin Fox requested a list of all National Endowment for the Humanities grants and used ChatGPT to review them. The queries were reportedly under 120 characters. The AI was asked, essentially, whether each grant was related to DEI (diversity, equity, and inclusion). Based on those determinations, nearly 1,500 cancellation letters went out, terminating $207 million in approved funding across all 50 states. The letters were written by DOGE staff but signed by NEH leadership, giving them the formal appearance of institutional decisions while the actual reasoning was produced by a language model.

Courts moved quickly. A federal judge in Oregon ruled in August 2025 that the terminations were unlawful. A New York federal court found violations of the First Amendment and the Administrative Procedure Act. Dozens of lawsuits are ongoing. The legal system, functioning as designed, identified what the Administrative Procedure Act has required since 1946: government decisions must be well-reasoned, must engage with evidence, and must be explainable.

But here is the structural shift the legal victories did not reverse: DOGE demonstrated that an administration willing to absorb the legal losses can use AI to make thousands of binding decisions faster than the judicial system can respond. By the time the Oregon injunction was issued in August, the NEH was functionally gutted. Grant recipients had lost funding, project timelines had collapsed, and the organisational infrastructure of the nation’s humanities councils had been disrupted beyond what a court order could repair. The template worked, even though the specific deployment failed.

The Administrative Procedure Act Was Not Built for This

The APA requires agencies to demonstrate their decisions are well-reasoned, to engage with significant public comments, and to keep records sufficient to reconstruct the basis for each decision. Bloomberg Law’s analysis identified the central problem with AI-driven regulatory and funding decisions: AI models often fail to disclose or explain their decision-making processes, making conclusions legally vulnerable but not operationally stoppable at scale.

University of Pennsylvania law professor Cary Coglianese described DOGE’s AI deregulation plan as naive on both the technology side and the side of understanding regulation – because even if the AI flags the right rules, the agency still must explain why each rule is being changed, must address public comments, and must produce documentation that survives judicial review. DOGE’s former general counsel James Burnham argued courts should judge actions on final product quality rather than methodology. The courts disagreed. But a sufficiently sympathetic appellate panel might not.

This is the precedent danger. Not that ChatGPT cancelled NEH grants in 2025, but that the legal architecture preventing that from becoming permanent is thinner than it looks. The APA’s reasoned decision-making requirements are not constitutional law – they are statutory law, subject to legislative change. An executive order could redefine what counts as adequate administrative reasoning. A single favourable appellate ruling could establish that AI-assisted review constitutes sufficient process. Neither outcome requires a revolution. They require procedural erosion, which governments are historically good at accomplishing when motivated.

The Method Is Already Spreading

DOGE terminated nearly 30,000 federal grants and contracts across 64 agencies by February 2026, claiming $110 billion in savings. The National Science Foundation cancelled over 1,100 grants. The Department of Education cut $881 million in research contracts. The Department of Transportation planned to use Google Gemini AI to draft federal transportation safety regulations, with its general counsel explicitly saying that the agency does not need the perfect rule, it wants good enough.

Forty-eight lawmakers sent a letter to OMB Director Russell Vought questioning DOGE’s use of AI in these processes. Democrats attempted to subpoena Elon Musk to testify before Congress about DOGE’s access to government data systems. House Republicans blocked that subpoena. The oversight mechanisms are functioning – slowly, partially, against a process that moves at algorithmic speed.

The structural issue is not partisan. A future Democratic administration could build the same apparatus and use it to cancel defence research grants, agricultural subsidy programmes, or tax exemptions for energy companies. The ideology embedded in the AI review prompt changes with the administration. The method – AI makes determinations at scale, humans sign letters, legal challenges arrive months later – is administration-agnostic. That is precisely what makes the NEH precedent so durable.

What This Actually Means

For the first time in US history, a federal agency used generative AI to make binding funding decisions affecting thousands of people who had applied through legitimate processes, waited for review, received approval, and planned around the commitments the government made to them. And it worked – not legally, but operationally. The damage was done before the courts caught up.

What DOGE proved is not that AI should govern. It proved that AI can govern in the narrow technical sense: it can generate decisions, produce documents, and create facts on the ground faster than democratic accountability mechanisms can respond. Human liability disappears into the algorithm. The official who signs the letter did not reason through the decision. The official who directed the process can point to the tool. The company that built the tool has no government accountability at all.

The outsourcing of accountability to algorithms is not a future risk. It is the operating precedent of the current administration, documented in court filings, reported by the New York Times, and now available as a template to every government that wants to cut programmes faster than its citizens can challenge the cuts in court. The question is not whether this will be used again. It is whether any legal architecture will be built strong enough to stop it before it scales.

Sources

The New York Times | Bloomberg Law | NPR | Techdirt | ProPublica | CNN
