Legal Intelligence for the AI Era
Newsletter | Est. 2025
case-law · intellectual-property · Breaking · Featured

Encyclopædia Britannica v. OpenAI: When AI Hallucinations Become Trademark Violations

Decision & Law Editorial Team
March 13, 2026
13 min read
3200 words
copyright · trademark · ai-hallucinations · lanham-act · fair-use · llm-training-data
Pre-Trial

Encyclopædia Britannica, Inc. v. OpenAI, Inc.

1:26-cv-2097
S.D.N.Y.
March 13, 2026
AI Tool: GPT-4 (OpenAI)

Key Issue

Copyright + Lanham Act liability for AI hallucinations misattributing content to trusted knowledge brands

Key Takeaways for Practitioners

  • First major lawsuit to combine copyright infringement with a Lanham Act "false designation of origin" theory based on AI hallucinations.

  • Britannica and Merriam-Webster allege OpenAI trained on ~100,000 of their articles without authorization, constituting commercial substitution rather than transformative fair use.

  • The novel Lanham Act theory: when ChatGPT generates false information and attributes it to Britannica, this constitutes reputational harm independent of literal copying.

  • The case has been consolidated into MDL 3143 in S.D.N.Y. — the growing AI copyright MDL that already includes the New York Times and Authors Guild cases.

  • After Thomson Reuters v. Ross, the AI training data market is legally cognizable — fair use arguments face an uphill battle.

  • The hallucination-as-trademark-harm theory is genuinely novel and could extend AI liability far beyond training data into inference-time outputs.

The Lawsuit That Adds a New Dimension to AI Copyright Litigation

On March 13, 2026, Encyclopædia Britannica and Merriam-Webster filed a federal complaint against OpenAI in the Southern District of New York that does something no prior AI copyright case has done: it combines mass copyright infringement claims with an innovative trademark theory built entirely on AI hallucinations.

The copyright claims follow the now-familiar pattern established in Thomson Reuters v. Ross Intelligence — unauthorized use of proprietary content to train a competing AI product, commercial substitution rather than transformative use. But the Lanham Act theory is genuinely novel: when ChatGPT generates false information and attributes it to Britannica or Merriam-Webster, plaintiffs argue that constitutes false designation of origin causing reputational harm independent of any literal copying.

If this theory succeeds, it would extend AI liability from training-time conduct into inference-time outputs — a fundamental expansion of AI developer responsibility.


Background: What Britannica and Merriam-Webster Allege

The Plaintiffs

Encyclopædia Britannica has produced verified reference content for over 250 years. Merriam-Webster has published authoritative American English dictionaries for over 180 years. Both institutions' value propositions rest on a single promise: accuracy through human editorial oversight.

That promise is precisely what the plaintiffs allege OpenAI has destroyed.

The Copyright Claims

OpenAI allegedly used approximately 100,000 Britannica and Merriam-Webster articles and definitions to train and operate GPT-4 and related models without authorization, license, or compensation. The articles were used both for initial training and in Retrieval-Augmented Generation (RAG) systems, where the model retrieves and references specific Britannica content in real time when responding to user queries.

The RAG use is particularly significant. Unlike training data that gets embedded diffusely into model weights, RAG involves the model explicitly accessing and drawing upon Britannica content to generate responses — a much closer analog to copying than statistical learning from a corpus.
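
The distinction the complaint draws can be sketched as a minimal retrieval-augmented pipeline. This is illustrative only: the corpus, the keyword-overlap scoring, and the attribution format are assumptions for the sketch, not a description of OpenAI's actual system.

```python
# Minimal sketch of a retrieval-augmented generation (RAG) loop.
# Illustrative only: the toy corpus, scoring, and attribution format
# are assumptions, not a description of any real system.

def retrieve(query: str, corpus: dict[str, str], k: int = 1) -> list[tuple[str, str]]:
    """Rank sources by naive keyword overlap with the query."""
    q_terms = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda item: len(q_terms & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer_with_attribution(query: str, corpus: dict[str, str]) -> str:
    """Compose a response that explicitly cites the retrieved source --
    the step that distinguishes RAG from diffuse statistical training."""
    source, passage = retrieve(query, corpus)[0]
    return f'According to {source}: "{passage}"'

corpus = {
    "Britannica": "The printing press was invented by Johannes Gutenberg around 1440.",
    "Merriam-Webster": "lexicon: the vocabulary of a language, speaker, or subject.",
}

print(answer_with_attribution("who invented the printing press", corpus))
# Cites Britannica by name alongside the retrieved passage.
```

The legally salient step is the last one: the system fetches an identifiable passage from an identifiable source at inference time, which is why plaintiffs treat RAG as closer to copying than to statistical learning.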

The plaintiffs argue this is commercial substitution: users who would have consulted Britannica or Merriam-Webster directly now ask ChatGPT, which provides AI-generated answers derived from (or falsely attributed to) those same sources. The market harm is direct and measurable.


ISSUE 1: Does Using Britannica's Content to Train and Operate GPT-4 Constitute Fair Use?

Rule

Fair use under 17 U.S.C. § 107 requires balancing four factors, with the first (purpose and character) and fourth (market effect) weighing most heavily. Authors Guild v. Google, Inc., 804 F.3d 202 (2d Cir. 2015). The Supreme Court's decision in Andy Warhol Foundation for the Visual Arts, Inc. v. Goldsmith, 598 U.S. 508 (2023), tightened the transformativeness analysis: uses that serve "the same or highly similar purposes" as the original, in a commercial context, disfavor fair use.

Following Thomson Reuters v. Ross Intelligence (D. Del. Feb. 11, 2025), the AI training data market is a cognizable derivative market that copyright holders can protect.

Application

Factor One — Purpose and Character:

OpenAI's use is commercial. The transformativeness question turns on whether ChatGPT serves a "further purpose or different character" from Britannica's reference content. The answer is almost certainly no: both provide authoritative factual answers to information queries. ChatGPT competes directly with Britannica's core function.

The RAG use is even less transformative than the training use: the model is actively retrieving Britannica content to answer questions that Britannica itself exists to answer.

Factor Four — Market Effect:

The AI training data market is legally cognizable after Ross. Britannica has an established licensing program and would license AI training use at market rates. OpenAI's use without payment harms both the direct licensing market and the substitution market — users consulting ChatGPT instead of Britannica.

Conclusion — Issue 1

The copyright claims are strong after Ross. The commercial, non-transformative character of the use and the direct market substitution effect align squarely with the framework that prevailed in Ross. Fair use is unlikely to succeed.


ISSUE 2: Do AI Hallucinations Constitute Lanham Act Violations?

This is the genuinely novel question the case presents.

Rule

Section 43(a) of the Lanham Act, 15 U.S.C. § 1125(a), prohibits "false designation of origin" — any false or misleading representation of fact that causes confusion about the source, sponsorship, or approval of goods or services. Trademark dilution under 15 U.S.C. § 1125(c) protects famous marks from uses that blur or tarnish their distinctiveness, regardless of consumer confusion.

Application

The hallucination theory works as follows:

  1. ChatGPT generates false factual information — incorrect definitions, inaccurate encyclopedia entries, fabricated historical claims.
  2. ChatGPT attributes this false information to Britannica or Merriam-Webster, either explicitly ("According to Britannica...") or implicitly through RAG citations.
  3. Users rely on these false attributions, associating the inaccurate content with Britannica's or Merriam-Webster's brands.
  4. This constitutes false designation of origin: OpenAI is marketing AI-generated content as originating from, or being approved by, trusted reference institutions.
  5. Reputational harm results: users who receive false information attributed to Britannica lose trust in the brand, independent of whether they ever consult Britannica directly.
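
The core of steps 1 through 3 is a claim paired with a citation that its purported source never made. A toy sketch of that failure mode (all names, data, and the substring check are hypothetical, chosen only to illustrate the pattern):

```python
# Toy illustration of the misattribution pattern the complaint alleges:
# a generated claim carries a source citation even though the cited
# source never said it. Names and data are hypothetical.

SOURCE_TEXT = {
    "Britannica": "The Eiffel Tower was completed in 1889.",
}

def generated_output() -> tuple[str, str]:
    """A hallucinated claim paired with a confident (false) attribution."""
    return ("The Eiffel Tower was completed in 1901.", "Britannica")

def is_misattributed(claim: str, cited_source: str) -> bool:
    """Flag a citation whose source text does not contain the claim."""
    return claim not in SOURCE_TEXT.get(cited_source, "")

claim, source = generated_output()
print(is_misattributed(claim, source))  # True: Britannica never said this
```

The plaintiffs' theory targets exactly this gap: the attribution is accurate in form ("According to Britannica...") but false in fact, and the reputational harm attaches to the cited brand rather than to the system that generated the claim.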

The tarnishment theory under § 1125(c) is arguably stronger: using a famous mark in connection with false or low-quality content — even without consumer confusion — can constitute actionable dilution if it harms the mark's reputation for accuracy and reliability.

The Novelty and Its Limits

No court has squarely held that AI hallucinations misattributing content to a trademark holder constitute Lanham Act violations. The theory faces significant challenges:

The Lanham Act requires that the false designation occur "in commerce" in connection with "goods or services." OpenAI's commercial AI service satisfies this. But courts have traditionally applied § 43(a) to false statements about one's own products or services, not to statements generated by an AI system that the defendant did not specifically intend to make.

The causation question is also difficult: OpenAI did not program ChatGPT to falsely attribute content to Britannica. The hallucinations are emergent behaviors of the language model, not deliberate misrepresentations. Whether emergent AI behavior can constitute "false designation of origin" under § 43(a) is genuinely unresolved.

Conclusion — Issue 2

The copyright claims are the stronger ground. The Lanham Act theory is novel, intellectually compelling, and faces significant doctrinal obstacles — but it is not frivolous, and if it succeeds, the implications for AI liability extend far beyond this case.


Why This Case Matters Beyond Its Facts

What Britannica v. OpenAI Means for AI Developers and Content Owners

For AI developers: The RAG use theory is the most immediate threat. Companies using RAG architectures to retrieve and reference proprietary content in real time, rather than embedding it statistically in model weights, face the most direct copyright exposure. Review your RAG architecture and content sourcing.

For content owners: The hallucination-as-trademark-harm theory opens a new avenue for institutions whose brands are associated with accuracy. Medical publishers, scientific journals, and legal databases whose content is routinely misattributed by AI systems should assess Lanham Act exposure — even if they have not licensed their content for AI training.

For the MDL: The consolidation into MDL 3143 means Britannica will develop alongside the New York Times and Authors Guild cases. Coordinated discovery may reveal evidence about OpenAI's training data practices that affects all three.

For the licensing market: After Ross established the training data market as legally cognizable, each major copyright plaintiff that files suit strengthens the argument that licensing is the standard — not the exception. OpenAI's legal exposure is cumulative.


Legal Citation

Encyclopædia Britannica, Inc. v. OpenAI, Inc., No. 1:26-cv-2097 (S.D.N.Y. filed Mar. 13, 2026) (Complaint)

Case Name: Encyclopædia Britannica, Inc. v. OpenAI, Inc.
Case Number: 1:26-cv-2097
Court: S.D.N.Y.
Date Filed: March 13, 2026
Document: Complaint

This analysis is based on publicly available court filings. It does not constitute legal advice.
