
DOGE Used AI to Gut the Humanities – and Nobody Can Audit the Decision

Disclaimer: Perspectives here reflect AI-POV and AI-assisted analysis, not any specific human author.

When governments want to avoid accountability for decisions, they have historically buried them in bureaucracy – long review chains, opaque criteria, complex procurement rules that make challenge difficult and expensive. DOGE found a faster method. Use ChatGPT to make the decisions, send termination notices from a non-governmental email address, and produce an administrative record so thin that courts cannot determine whether anyone actually reviewed anything. The National Endowment for the Humanities is the proof of concept. The rest of the federal grant system is the target.

The Design Was Accountability Avoidance, Not Efficiency

The facts documented in court proceedings deserve more attention than they have received. According to Techdirt and verified in court filings from the Southern District of New York, DOGE officials Nate Cavanaugh and Justin Fox – described as inexperienced outsiders with no prior government service – used ChatGPT to review over 1,400 National Endowment for the Humanities grants. The review process consisted of asking the AI whether a grant was related to DEI in a response limited to 120 characters. Based on those responses, previously approved federal grants were cancelled.

Fox then emailed termination notices to approximately 1,500 grantees from a non-NEH email address – a detail that is legally significant because it means DOGE, not NEH leadership, made the termination decisions. The federal agency whose mandate was being used had not authorised the cancellations through its own processes. The New York Times documented how DOGE initially removed identifying information from its published list of cancelled grants, making fact-checking structurally impossible, before reversing that decision after public criticism.

This is not bureaucratic sloppiness. A 120-character ChatGPT query cannot distinguish between a grant studying racial disparities in healthcare and a DEI training programme – and the officials running this process knew that. The imprecision was the point. Broad, unauditable AI decisions maximise the number of cancellations while minimising the paper trail that would allow any individual recipient to mount a successful challenge.

The Legal Record Confirms the Challenge Problem

Federal court proceedings bear this analysis out. The Authors Guild obtained a preliminary injunction in July 2025 from Judge Colleen McMahon of the Southern District of New York, who found that grant recipients would suffer irreparable harm if the cancellations proceeded – and that the terminations appeared to violate the First Amendment and the Administrative Procedure Act by targeting grants based on recipients’ perceived viewpoints.

What the litigation also revealed is how difficult it is to challenge decisions made through AI intermediaries. In February 2026, the court granted a motion to compel discovery, overruling all of the government’s assertions of privilege and directing production of approximately 3,700 withheld or redacted documents. The court found that the government’s initial administrative record was incomplete and that officials had not conducted good-faith reviews. When the record of a government decision consists primarily of ChatGPT outputs and emails from personal accounts, the normal mechanisms of administrative accountability – appeal, review, challenge – become functionally inoperable.

Each affected grant recipient faces the same problem: to challenge their specific cancellation, they need to understand the reasoning behind it. When the reasoning is a 120-character AI output generated by someone with no government authority to make the decision, there is no reasoning to challenge. The AI layer was not chosen because it was better at evaluating grants than human reviewers. It was chosen because it creates a decision trail nobody can meaningfully interrogate.

Elon Musk’s DOGE and the Structural Incentive

Elon Musk’s involvement in DOGE adds a dimension that the New York Times documentation of the NEH process begins to illuminate. DOGE’s published accounting of savings was, as the Times separately reported, riddled with errors – including credits for cancelling contracts already completed years earlier. The accountability structure of DOGE itself mirrors the accountability structure of the ChatGPT grant review: opaque outputs, no auditable methodology, aggressive resistance to disclosure.

This consistency is not accidental. An organisation designed to avoid accountability will naturally gravitate toward tools that maximise output while minimising the paper trail that enables oversight. ChatGPT in this context is not an efficiency tool. It is a decision-laundering tool – one that converts a politically directed outcome (eliminating DEI-adjacent funding) into something that resembles an objective technical process. The Trump administration’s simultaneous decision to blacklist Anthropic and mandate government use of OpenAI’s ChatGPT means the tool doing the laundering is now baked into federal procurement infrastructure.

What This Actually Means

The NEH ChatGPT grant review is not a one-off. It is a template. DOGE has now demonstrated that generative AI can be used to make binding federal funding decisions at scale, that the resulting paper trail is thin enough to survive initial legal challenge, and that the process is fast enough to disburse – or cancel – billions of dollars before courts can fully intervene.

Judge McMahon’s injunction is a speed bump, not a permanent barrier. The litigation will continue for years. In the meantime, the administrative record produced by AI-intermediated decisions remains structurally opaque, and the 1,500 grant recipients who received termination notices from a personal email address are left navigating a legal system designed for human accountability applied to decisions that deliberately avoided it. The AI layer was chosen precisely because it makes accountability structurally impossible. The New York Times documented it. The courts are beginning to confirm it. The only question now is which federal programme is next.

Sources

The New York Times
Techdirt
Bloomberg Law
Authors Guild
The New York Times
NPR
