

Anthropic AI Wanted to Settle Pirated Books Case for $1.5 Billion. A Judge Thinks We Can Do Better

This case is quickly shaping up to be a landmark in AI history.

Mihai Andrei
September 9, 2025 @ 7:14 pm


A gavel. Image via Unsplash.

In a stunning courtroom decision, U.S. District Judge William Alsup rejected a record-breaking $1.5 billion settlement between the AI company Anthropic and hundreds of thousands of authors, blasting the deal as a half-baked plan that was being forced “down the throat of authors.”

Just days ago, this settlement was being hailed as a monumental victory. It was the first major resolution in a wave of lawsuits filed by creators against the tech giants building generative AI. Anthropic, the maker of the chatbot Claude and a rival to OpenAI, had agreed to pay this staggering sum to resolve claims that it built its multibillion-dollar business on a foundation of stolen books.

Lawyers for the authors were also triumphant. They called it “the first of its kind in the AI era” and a message to all AI companies that they could not simply take copyrighted works without paying. It seemed poised to become a new precedent, a potential template for resolving similar blockbuster lawsuits against Meta, Microsoft, and Google. Then, Judge Alsup stepped in.

A Landmark Deal Hits a Wall

The conflict started, as it so often does, with the messy things tech companies do at the edge of what’s legal.

Anthropic, like its competitors, needed to feed its large language model (LLM), Claude, an unimaginable amount of text. The more data the AI ingests, the more fluently and coherently it can generate human-like text. The entire LLM industry is built on this appetite for text.

To build this digital brain, companies scoured the internet, scraping data from every source they could get their hands on. But the internet’s vast library includes countless copyrighted books, many of which were available on “pirate” websites. Most people would try to access these books legally or compensate the authors somehow, but tech companies aren’t like most people.

Last year, a group of prominent authors, including best-selling thriller writer Andrea Bartz, decided to fight back. They filed a class-action lawsuit, accusing Anthropic of mass-scale copyright infringement. Their argument was simple: Anthropic had used their life’s work without permission or payment, feeding it as raw fuel into its commercial AI engine. Court documents suggested the staggering scale of the operation. The claimants alleged Anthropic had access to a library of over seven million pirated books. With statutory damages reaching up to $150,000 per infringed work, the AI company faced a potentially ruinous financial liability.

AI and the Law

Judge Alsup turned out to be a key person in this case, and quite possibly, in the history of AI.

The judge’s position was nuanced: he argued that using books to train an AI is “exceedingly transformative” and could, in principle, be considered fair use under US copyright law. The AI industry hailed it as a huge victory. But the same judge argued that those books needed to be obtained legally. He ruled that Anthropic must stand trial for using pirated copies to build its training library. The company could not, in his view, use the fruits of a poisoned tree.

Faced with that liability, Anthropic did what big tech companies often do: it settled. Less than a week ago, on September 5th, it announced a $1.5 billion deal. Under the proposed terms, nearly 500,000 authors stood to receive about $3,000 per book that was ingested by Claude. That is roughly four times the statutory minimum of $750 per work, but nowhere near the $150,000 maximum.
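For a rough sense of how those figures fit together, here is a back-of-the-envelope check in Python. It assumes the roughly 500,000 works cited in coverage of the deal and the standard US statutory damages range ($750 minimum, $150,000 maximum for willful infringement); the actual per-work payout would depend on the final list of covered books.

# Back-of-the-envelope check of the proposed settlement math (illustrative only).
settlement_total = 1_500_000_000   # the proposed $1.5 billion fund
covered_works = 500_000            # approximate number of books in the class

statutory_min = 750                # minimum statutory damages per infringed work
statutory_max = 150_000            # maximum statutory damages (willful infringement)

per_work = settlement_total / covered_works
print(f"Per-work payout: ${per_work:,.0f}")                               # about $3,000
print(f"Multiple of the statutory minimum: {per_work / statutory_min:.0f}x")  # about 4x
print(f"Fraction of the statutory maximum: {per_work / statutory_max:.0%}")   # about 2%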

Justin Nelson, a lawyer for the authors, declared that the deal would “provide meaningful compensation” and send “a powerful message to AI companies.” Anthropic, which has long marketed itself as the more ethical AI player, said the settlement would resolve the claims and allow it to continue its mission of developing safe AI. It looked like a win-win.

Then, Alsup stepped in again.

“Down the Throat of Authors”

Class action lawsuits are complex. The lawyers need to forge a consensus among all the claimants, and it’s not clear whether all, or even most, of the authors were happy with the deal. But Alsup wasn’t. He didn’t just question the settlement; he dismantled it.

He told the assembled lawyers he felt “misled,” declaring the agreement “nowhere close to complete.” His primary concern was for the authors themselves. He worried that in the rush to secure a massive headline number and hefty legal fees, the individual writers would be left behind. “I have an uneasy feeling about hangers on with all this money on the table,” Alsup said from the bench.

Too often, he argued, class members “get the shaft” after the money is agreed upon and the lawyers lose interest in the messy details of getting it to the right people.

Judge Alsup pointed to a slew of holes in the proposal. The lawyers had come to him asking for approval but couldn’t even provide a final list of the nearly half-million books involved in the case. They didn’t have a finalized list of the authors involved. They hadn’t designed the claim form that authors would use to get their money, nor had they outlined the exact process for notifying potentially hundreds of thousands of writers that they were part of this historic deal.

Precedents and Concerns

In his order, Alsup said he was “disappointed that counsel have left important questions to be answered in the future.” He demanded that the lawyers give authors “very good notice” and design a clear claim form that gave every single copyright holder for a specific work the explicit choice to opt in or opt out. This includes authors, co-authors, and publishers. If even one owner of a book’s copyright opted out, that book would be excluded from the deal.

But the judge also argued that the deal didn’t adequately protect Anthropic itself from future lawsuits. The incomplete framework left open the possibility that other authors could sue again, defeating the whole purpose of a global settlement. The judge has now postponed his approval, giving the lawyers a tight deadline of September 15 to submit the final list of works and until October 10 to present the claim form and notification plan. The deal isn’t dead, but it’s on life support.

The Anthropic settlement was being closely watched by every tech company, law firm, and creative guild in the country. It’s set to become a precedent that will affect the AI industry for years to come. The settlement was seen as a potential off-ramp from years of costly and uncertain litigation. Now, that path looks far more complicated.

The judge’s skepticism highlights a fundamental question: can a single, sweeping deal truly provide justice for hundreds of thousands of individual creators? Or does it inevitably prioritize the interests of the lawyers and the settling corporation over the very people whose work was taken?

What Comes Next?

The Association of American Publishers supported the deal and scoffed at the judge’s objections, saying he demonstrated a “lack of understanding of how the publishing industry works”. They argued the judge was asking for an “unworkable” claims process. But it’s less clear how individual authors feel. For some, the judge’s scrutiny may come as a welcome development, ensuring that their rights are not simply signed away in a backroom deal.

Ultimately, this judicial roadblock forces everyone back to the drawing board. AI companies facing similar lawsuits now know that a quick, massive settlement might not be enough to get a judge’s blessing. The courts will be looking under the hood, demanding meticulous detail and ironclad protections for the class members. It raises the bar for what constitutes a “fair” deal in the age of AI.

Lastly, this still doesn’t address the core question of how authors should be compensated when AI companies use their work without approval. Alsup argued that the source of the material matters, but for authors, it makes little difference whether an AI absorbed their work from a legal or a pirated source: their books are still being used to feed an algorithm. For now, we don’t seem to have a workable, fair framework to deal with this.
