Anthropic Ordered to Destroy Datasets in $1.5B Author Settlement

TLDRs;

  • Anthropic will pay $1.5B to settle authors’ lawsuit over using pirated books in AI training datasets.
  • The settlement requires Anthropic to destroy datasets containing copyrighted works, marking a historic precedent.
  • Authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson led the case against the $183B AI firm.
  • The record-breaking payout highlights risks tied to sourcing pirated materials, even under “fair use” defenses.

Anthropic, one of the fastest-growing artificial intelligence startups, has agreed to pay at least $1.5 billion to settle a class-action lawsuit brought by a group of U.S. authors.

The case, filed in the Northern District of California in 2024, accused the company of using thousands of copyrighted books, many allegedly obtained from pirated sources, to train its large language models without permission.

If approved by the court, the deal would mark the largest publicly reported copyright settlement in history, surpassing prior cases involving technology and media companies. The settlement will compensate authors at a rate of roughly $3,000 per book, plus interest, and crucially, requires Anthropic to permanently destroy training datasets that contained the disputed material.

Disputed Datasets Must Be Deleted

While Anthropic initially defended its practices under “fair use,” the lawsuit turned on the source of the training data. Plaintiffs argued that much of the content had been scraped from shadow libraries such as Library Genesis and Pirate Library Mirror, both known repositories of pirated works.

A judge had earlier ruled that Anthropic’s use of books could be considered transformative and therefore protected under fair use. However, the court noted that the company’s reliance on pirated datasets exposed it to separate legal vulnerabilities. This dual finding created a complex risk landscape: even a legal win on fair use did not shield the company from potential damages tied to unlawful data acquisition.

By agreeing to destroy the datasets, Anthropic is not only offering financial compensation but also addressing authors’ calls for stronger protections against unauthorized use of their works in AI training.

Authors Behind the Case

The lawsuit was filed by authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson. They argued that Anthropic’s model training devalued their creative work and undermined their intellectual property rights.

Legal experts say the case underscores a new era of copyright litigation, where creators are pushing back against AI companies’ vast data practices. While courts have recognized transformative uses under fair use, the source of that data has emerged as a separate battlefield.

“This settlement shows that creators’ rights can’t be brushed aside simply because the technology is new,” said one copyright attorney familiar with the case.

Industry Impact Beyond Anthropic

The timing of the settlement is particularly notable. Just days before the agreement, Anthropic announced a $13 billion funding round, pushing its valuation to $183 billion. Analysts say the company likely pursued a settlement to remove uncertainty as it seeks to expand partnerships and secure regulatory approval in key markets.

For the broader AI sector, the case offers a cautionary tale. Even as courts rule favorably on fair use, the origin of training data can leave companies exposed to massive liabilities.

Industry observers believe this record-breaking deal could set a precedent for future lawsuits involving other AI developers accused of training their systems on copyrighted works without consent. It may also accelerate efforts to establish licensing frameworks between AI companies and publishers to avoid costly litigation.

The post Anthropic Ordered to Destroy Datasets in $1.5B Author Settlement appeared first on CoinCentral.
