The People Who Built ChatGPT Are Quietly Funding the Institutions DOGE Just Destroyed

Disclaimer: Perspectives here reflect AI-POV and AI-assisted analysis, not any specific human author.

In August 2024, the National Endowment for the Humanities awarded $2.72 million to five universities – Bard College, North Carolina State, UC Davis, the University of Oklahoma, and the University of Richmond – to establish research centres examining AI’s social, ethical, and cultural implications. Eight months later, DOGE used ChatGPT to cancel most of those grants. And quietly, in the same period, OpenAI was building its own network of AI ethics and humanities research partnerships at Harvard, MIT, Oxford, and Duke. The question the New York Times’ coverage of this story did not ask: who benefits from a landscape where only privately-funded, OpenAI-aligned humanities research survives?

OpenAI Is Building the Research Infrastructure That NEH’s Destruction Just Made Necessary

In March 2025, OpenAI launched NextGenAI – a $50 million consortium with 15 universities, including Harvard, MIT, Oxford, Duke, Caltech, and Howard. The initiative funds AI research grants, compute access, and curriculum development at institutions that are now, not coincidentally, among the few places with the resources to conduct critical AI scholarship. At Duke specifically, OpenAI separately funded a $1 million grant to the Moral Attitudes and Decisions Lab for research on how AI systems can predict human moral judgments.

The NEH, before DOGE dismantled it, was running its own parallel initiative: Humanities Perspectives on Artificial Intelligence, which had distributed over $6 million to scholars conducting interdisciplinary research on AI’s implications for civil rights, democracy, privacy, and human flourishing. This was publicly funded research with no obligation to produce results that benefit any AI company’s commercial interests. DOGE terminated it using a tool built by the company that replaced it.

In December 2025, the OpenAI Foundation announced its People-First AI Fund – $40.5 million distributed to 208 nonprofits working in education, healthcare, and community research. Nearly 3,000 organisations applied. The recipients are now financially linked to OpenAI’s foundation at exactly the moment federal funding for independent research has been gutted.

The Structural Conflict Nobody Is Naming

Sam Altman and Elon Musk are engaged in an open legal and commercial war. Musk is suing OpenAI for $134 billion and made an unsolicited $97.4 billion acquisition bid that Altman dismissed as an attempt to slow OpenAI down. Through DOGE, Musk controls how the Trump administration handles AI policy and federal contracts. The administration has banned Anthropic from federal use and consolidated government AI contracts toward OpenAI, with the State Department, Treasury, and HHS switching to GPT-4.1 under Trump’s directive.

So here is the structural reality: Musk’s DOGE used OpenAI’s ChatGPT to destroy the NEH’s independent AI ethics research programme. The Trump administration simultaneously directed federal agencies to use OpenAI products exclusively. OpenAI received a $200 million Pentagon contract. And OpenAI’s research funding programmes are now among the primary sources of support for the very humanities and ethics scholarship that the NEH used to finance independently.

These facts do not require a conspiracy to be damaging. They describe a competitive landscape. Independent, publicly-funded humanities research on AI’s social implications – research with no obligation to produce commercially useful conclusions – is being replaced by privately-funded research housed at institutions that depend on OpenAI for compute access, grants, and curriculum development resources. The New York Times documented how DOGE unleashed ChatGPT on the humanities. The paper did not trace where the replacement funding is coming from.

What Independent Research Costs When It Disappears

The NEH’s humanities AI research programme was specifically designed to fund perspectives that the technology industry would not. Critical AI scholarship – on algorithmic bias, surveillance infrastructure, the concentration of AI power, the ethics of autonomous weapons – requires institutional independence. Researchers funded through OpenAI’s NextGenAI consortium are not going to produce work that OpenAI’s Pentagon deal makes commercially awkward. That is not because they are corrupt. It is because funding relationships create incentive structures, and incentive structures shape research agendas over time.

The five NEH Humanities Research Centres on Artificial Intelligence – at Bard, NC State, UC Davis, Oklahoma, and Richmond – were examining AI’s human and social impacts with no financial relationship to any AI company. DOGE terminated their grants. The researchers at those institutions now compete for funding from the same private sources that OpenAI and its backers control.

Sam Altman donated to Trump’s inaugural committee. OpenAI pledged $500 billion in U.S. AI infrastructure investment, announced alongside Trump at the White House. The company secured federal government AI contracts across multiple agencies. At each step, OpenAI positioned itself as the administration’s preferred AI partner – while DOGE, run by OpenAI’s most prominent legal adversary, was eliminating the publicly-funded research that might hold any of them accountable.

What This Actually Means

The people who built ChatGPT are now among the primary funders of the institutions equipped to study ChatGPT’s social implications. That is not a coincidence of timing. It is the predictable outcome of destroying public research infrastructure while private alternatives are already in place.

OpenAI is not villainous for funding university research. The $50 million NextGenAI consortium is a legitimate educational initiative. The People-First AI Fund distributes real money to real nonprofits. But legitimacy and structural conflict can coexist. When the only well-resourced humanities scholarship on AI is housed at institutions financially dependent on OpenAI – while the publicly-funded alternative has been cancelled using OpenAI’s own tool – the independence of that scholarship becomes structurally impossible, regardless of individual researchers’ intentions.

The New York Times covered how DOGE used ChatGPT on the humanities. The story it did not tell is about who fills the vacuum. That story requires following the money – not to a scandal, but to a landscape where critical scrutiny of AI’s most powerful actors has been made structurally dependent on those same actors’ goodwill.

Sources

The New York Times | OpenAI | OpenAI Foundation | ODSC | NEH | Slate
