
Copyright lawsuits against OpenAI are really about who owns the language we use

Disclaimer: Perspectives here reflect AI-POV and AI-assisted analysis, not any specific human author.

The fight over whether ChatGPT can recite a dictionary definition is not a narrow licensing spat. It is a referendum on whether the language we all use every day—the definitions, the encyclopedic facts, the phrasing that reference publishers have curated for decades—belongs to anyone at all, or to the companies that swept it into training data without asking. When Merriam-Webster and Encyclopedia Britannica sued OpenAI in March 2026 in New York federal court, they did not only allege that nearly 100,000 of their articles had been copied to train ChatGPT. They exposed how AI builders have treated the written commons as free fuel while locking their own outputs behind terms that forbid anyone from doing the same to them.

According to the complaint filed in the Southern District of New York (Case 1:26-cv-02097) on 13 March 2026, OpenAI used Merriam-Webster and Britannica content to train its language models without permission or payment. The plaintiffs argue that ChatGPT produces verbatim or near-verbatim reproductions of definitions and encyclopedia entries, and that the system cannibalises traffic to their sites by answering queries that would otherwise send users to the publishers. As TechCrunch reported, the dispute centres on almost 100,000 articles the plaintiffs say were used for training. OpenAI has responded that its models are trained on publicly available data and that their use is grounded in fair use—a defence that is under growing pressure in courts elsewhere.

The written commons were treated as free fuel

Britannica had attempted to negotiate licensing with OpenAI as early as November 2024, according to reporting on the case. Those overtures were rejected while OpenAI signed licensing deals with other publishers, creating a pattern where some rightsholders are paid and others are not. That asymmetry is at the heart of the "written commons" argument: the same language and reference material that schools, writers, and the public have relied on are now embedded inside a commercial product, with no cut for the institutions that compiled and maintained it. The complaint also includes trademark claims, accusing OpenAI of falsely attributing errors or incomplete answers to the publishers when the model hallucinates.

Fair use is no longer a safe haven for training

Legal precedent is shifting. In February 2025, a Delaware court in Thomson Reuters v. Ross Intelligence reversed an earlier ruling and held that using copyrighted material to train an AI system can constitute direct copyright infringement, rejecting the defendant's fair use defence. Ropes & Gray and other analysts have noted that this casts doubt on whether fair use will reliably shield AI companies from liability for training on copyrighted works. At the same time, the UK government was due to deliver an economic impact assessment by 18 March 2026 on proposed copyright changes that could allow AI firms to use protected work without permission unless owners opt out—a move that drew protests from thousands of authors who published a symbolic "Don't Steal This Book" in March 2026, as reported by The Guardian.

Expert commentary has sharpened. The Copyright Alliance and others have criticised some 2026 rulings that favoured AI companies for applying "woefully superficial" fair-use analysis, concluding that use is transformative simply because generative AI is new technology rather than applying the legal standard from Campbell v. Acuff-Rose. IP Watchdog reported in February 2026 that litigation is increasingly paving the way to licensing: large publishers such as News Corp have secured deals with OpenAI worth hundreds of millions of dollars, while smaller and reference publishers often lack the leverage to negotiate. The Merriam-Webster and Britannica suit fits that pattern: reference works are part of the shared linguistic and factual infrastructure, yet they were used without a licence. The Bartz v. Anthropic settlement in September 2025—roughly $1.5 billion after a court held that training on pirated books was not fair use—shows that courts are willing to attach serious financial consequences to how training data is sourced.

What This Actually Means

The Merriam-Webster and Britannica case is not just about two reference brands. It is about who gets to monetise the shared infrastructure of language and fact. If courts side with OpenAI on a broad fair-use theory, reference publishers and other small rightsholders will have little leverage; if they side with the plaintiffs, the cost and structure of AI training will change. Either way, the suit makes visible what was long implicit: the industry built on "publicly available data" has been feeding on works that were public in the sense of being readable, not in the sense of being free for commercial ingestion. The question of who owns the language we use is now squarely in front of the courts.

What is the lawsuit about?

Encyclopedia Britannica, Inc. and Merriam-Webster, Inc. sued OpenAI and related entities in the U.S. District Court for the Southern District of New York on 13 March 2026. The 44-page complaint alleges copyright infringement and trademark violations. The plaintiffs claim that OpenAI used close to 100,000 of their online articles to train ChatGPT without authorisation or payment, and that the model outputs verbatim or near-verbatim copies of their content. They also allege that ChatGPT diverts users who would otherwise visit the publishers' sites, and that OpenAI has misattributed inaccurate or incomplete outputs to them. OpenAI disputes the claims and asserts that training on publicly available data is protected by fair use.

Who are the plaintiffs?

Encyclopedia Britannica, Inc. publishes the Encyclopaedia Britannica and related reference products. Merriam-Webster, Inc. is the oldest dictionary publisher in the United States and publishes Merriam-Webster dictionaries. Both are represented by Susman Godfrey L.L.P. in the case. According to court filings, Merriam-Webster's corporate parent is Aletheia Holdings, LP; the same parent is identified for Encyclopedia Britannica, Inc. The case is docketed as 1:26-cv-02097 and has been filed as related to a larger multi-district litigation (1:25-md-03143) concerning OpenAI and copyright.

Sources

TechCrunch; Pacer Monitor – Encyclopedia Britannica et al v. OpenAI; Ropes & Gray – AI training and copyright; The Guardian – Authors protest AI use of works; IP Watchdog – AI copyright and licensing
