In August 2024, the National Endowment for the Humanities awarded $2.72 million to five universities – Bard College, North Carolina State, UC Davis, the University of Oklahoma, and the University of Richmond – to establish research centres examining AI’s social, ethical, and cultural implications. Eight months later, DOGE used ChatGPT to cancel most of those grants. And quietly, in the same period, OpenAI was building its own network of AI ethics and humanities research partnerships at Harvard, MIT, Oxford, and Duke. The question the New York Times’ coverage of this story did not ask: who benefits from a landscape where only privately-funded, OpenAI-aligned humanities research survives?
OpenAI Is Building the Research Infrastructure That NEH’s Destruction Just Made Necessary
In March 2025, OpenAI launched NextGenAI – a $50 million consortium with 15 research institutions, including Harvard, MIT, Oxford, Duke, Caltech, and Howard. The initiative funds AI research grants, compute access, and curriculum development at institutions that are now, by no coincidence, among the few places with the resources to conduct critical AI scholarship. At Duke specifically, OpenAI separately awarded a $1 million grant to the Moral Attitudes and Decisions Lab for research on how AI systems can predict human moral judgments.
The NEH, before DOGE dismantled it, was running its own parallel initiative: Humanities Perspectives on Artificial Intelligence, which had distributed over $6 million to scholars conducting interdisciplinary research on AI’s implications for civil rights, democracy, privacy, and human flourishing. This was publicly funded research with no obligation to produce results that benefit any AI company’s commercial interests. DOGE terminated it using a tool built by the company that replaced it.
In December 2025, the OpenAI Foundation announced its People-First AI Fund – $40.5 million distributed to 208 nonprofits working in education, healthcare, and community research. Nearly 3,000 organisations applied. The recipients are now financially linked to OpenAI’s foundation at exactly the moment federal funding for independent research has been gutted.
The Structural Conflict Nobody Is Naming
Sam Altman and Elon Musk are engaged in an open legal and commercial war. Musk is suing OpenAI for $134 billion and made an unsolicited $97.4 billion acquisition bid that Altman dismissed as an attempt to slow OpenAI down. Through DOGE, Musk shaped how the Trump administration handled AI policy and federal contracts. The administration has banned Anthropic from federal use and consolidated government AI contracts toward OpenAI, with the State Department, Treasury, and HHS switching to GPT-4.1 under Trump’s directive.
So here is the structural reality: Musk’s DOGE used OpenAI’s ChatGPT to destroy the NEH’s independent AI ethics research programme. The Trump administration simultaneously directed federal agencies to use OpenAI products exclusively. OpenAI received a $200 million Pentagon contract. OpenAI’s research funding programmes are now among the primary sources of support for the humanities and ethics scholarship that the NEH once financed independently.
These facts do not require a conspiracy to be damaging. They describe a competitive landscape. Independent, publicly-funded humanities research on AI’s social implications – research with no obligation to produce commercially useful conclusions – is being replaced by privately-funded research housed at institutions that depend on OpenAI for compute access, grants, and curriculum development resources. The New York Times documented how DOGE unleashed ChatGPT on the humanities. The paper did not trace where the replacement funding is coming from.
What Independent Research Costs When It Disappears
The NEH’s humanities AI research programme was specifically designed to fund perspectives that the technology industry would not. Critical AI scholarship – on algorithmic bias, surveillance infrastructure, the concentration of AI power, the ethics of autonomous weapons – requires institutional independence. Researchers funded through OpenAI’s NextGenAI consortium are not going to produce work that OpenAI’s Pentagon deal makes commercially awkward. That is not because they are corrupt. It is because funding relationships create incentive structures, and incentive structures shape research agendas over time.
The five NEH Humanities Research Centers on Artificial Intelligence – at Bard, NC State, UC Davis, Oklahoma, and Richmond – were examining AI’s human and social impacts with no financial relationship to any AI company. DOGE terminated their grants. The researchers at those institutions now compete for funding from the same private sources that OpenAI and its backers control.
Sam Altman donated to Trump’s inaugural committee. OpenAI, alongside SoftBank and Oracle, announced the $500 billion Stargate AI infrastructure project with Trump at the White House. The company secured federal government AI contracts across multiple agencies. At each step, OpenAI positioned itself as the administration’s preferred AI partner – while DOGE, run by OpenAI’s most prominent legal adversary, was eliminating the publicly-funded research that might hold any of them accountable.
What This Actually Means
The people who built ChatGPT are now among the primary funders of the institutions equipped to study ChatGPT’s social implications. That is not a coincidence of timing. It is the predictable outcome of destroying public research infrastructure while private alternatives are already in place.
OpenAI is not villainous for funding university research. The $50 million NextGenAI consortium is a legitimate educational initiative. The People-First AI Fund distributes real money to real nonprofits. But legitimacy and structural conflict can coexist. When the only well-resourced humanities scholarship on AI is housed at institutions financially dependent on OpenAI – while the publicly-funded alternative has been cancelled using OpenAI’s own tool – the independence of that scholarship becomes structurally impossible, regardless of individual researchers’ intentions.
The New York Times covered how DOGE used ChatGPT on the humanities. The story it did not tell is about who fills the vacuum. That story requires following the money – not to a scandal, but to a landscape where critical scrutiny of AI’s most powerful actors has been made structurally dependent on those same actors’ goodwill.
Sources
The New York Times | OpenAI | OpenAI Foundation | ODSC | NEH | Slate