AI research company Anthropic is facing a new copyright lawsuit filed on Wednesday in the US District Court for the Northern District of California by a group of major music publishers, including Universal Music Group, Concord and ABKCO. The complaint alleges that the company improperly used copyrighted musical works to train its Claude chatbot without authorization.
According to the publishers, Anthropic unlawfully reproduced and incorporated into its training data more than 700 individual compositions, including the lyrics and sheet music for songs such as “Wild Horses” by the Rolling Stones, Neil Diamond’s “Sweet Caroline,” and Elton John’s “Bennie and the Jets.” The filing further claims that the company’s use of protected material extends beyond those examples and affects thousands of additional works owned or administered by the plaintiffs.
In a public statement accompanying the filing, the publishers said they are pursuing infringement claims covering more than 20,000 songs in total and are seeking statutory damages that could exceed $3 billion. They described the action as potentially one of the largest non-class-action copyright cases ever brought in the United States.
The lawsuit states that collections of allegedly pirated books used in the training process contained copyrighted sheet music and lyrics associated with at least 714 songs controlled by the publishers. The plaintiffs argue that the reproduction and use of these materials violated their exclusive rights under U.S. copyright law.
The case has been filed under the title Concord Music Group Inc. v. Anthropic PBC, No. 3:26-cv-00880, in the US District Court for the Northern District of California. Legal representation for the publishers includes attorneys from Oppenheim + Zebrak, Coblentz Patch Duffy & Bass, and Cowan, Liebowitz & Latman.
This newly filed lawsuit follows and expands upon a separate case initiated by the same group of music publishers against Anthropic in 2023, which also centered on allegations that the company used their copyrighted material to train its Claude large language model to generate responses to user prompts. Anthropic has publicly rejected the claims made in both actions and maintains that its training practices comply with applicable law.
The dispute emerges against the backdrop of a major legal development involving Anthropic last year, when the company reached a $1.5 billion settlement in a separate copyright case brought by a group of book authors. In that matter, US District Judge William Alsup concluded that the doctrine of fair use protected Anthropic from infringement liability arising specifically from the use of copyrighted books for artificial intelligence training. At the same time, the court indicated that the company could nonetheless face separate exposure for the alleged unauthorized copying of the underlying books themselves, noting that such conduct could, in theory, give rise to damages reaching as high as $1 trillion.
The growing body of litigation reflects a broader legal and regulatory confrontation between creators and artificial intelligence developers. Across the technology sector, major AI companies are increasingly being targeted by copyright holders, including writers, music publishers, and visual artists, who claim that their works were used without permission to train generative models. In response, many technology firms have consistently argued that the ingestion of copyrighted material for training purposes falls within the scope of fair use under US copyright law, setting the stage for a series of landmark court decisions that are expected to shape the future of AI development and intellectual property rights.