Federal Judge William Alsup ruled that Anthropic’s use of published books to train its AI models without obtaining author permission is legal. This is the first court decision to support the argument that fair use can shield AI companies from liability when using copyrighted content to develop large language models (LLMs).
Alsup’s Ruling May Shape Future AI Copyright Battles, Tilting Courts Toward Tech Firms
The ruling is a setback for authors, artists, and publishers who have filed numerous lawsuits against companies like OpenAI, Meta, Midjourney, and Google. While Alsup’s decision doesn’t set a binding precedent for other courts, it could influence future cases and signals a potential judicial shift in favor of tech companies over content creators.
These lawsuits often hinge on how a judge interprets fair use—a complex exception in copyright law that was codified in the Copyright Act of 1976 and hasn’t been significantly revised since, long before the internet or the rise of generative AI.
Fair use decisions consider factors like the purpose of the use (e.g., parody or education), whether it’s for commercial benefit (you can write “Star Wars” fan fiction, but not sell it), and how much the new work transforms the original.
Companies like Meta have also defended their use of copyrighted material in AI training by invoking fair use, but until this week’s ruling, it was unclear how courts might respond.
Authors Allege Anthropic Built a Massive Book Database Using Pirated Copies
In Bartz v. Anthropic, the plaintiff authors also raised concerns about how Anthropic obtained and stored their books. The lawsuit alleges the company aimed to build a “central library” containing “all the books in the world” for permanent retention—and that millions of those books were downloaded illegally from pirate websites.
While the judge found that using the books for AI training qualified as fair use, the court will still hold a trial to examine the legality of how Anthropic built and maintains this “central library.”
“We will proceed to trial over the pirated copies used to build Anthropic’s central library and any resulting damages,” Judge Alsup wrote in his ruling. “The fact that Anthropic later purchased a book it had previously downloaded illegally doesn’t erase the original act of infringement, though it may influence the amount of statutory damages awarded.”
Anthropic is introducing Claude 3.7 Sonnet, a next-generation AI model designed to “think” about questions for as long as users prefer.
Described as the industry’s first “hybrid AI reasoning model,” Claude 3.7 Sonnet can provide both instant responses and more in-depth, deliberative answers. Users can enable its reasoning mode and control how long the model deliberates before answering.
This model aligns with Anthropic’s goal of simplifying AI interactions. Many current AI chatbots require users to choose between multiple models with varying costs and capabilities. Anthropic aims to streamline this by offering a single model that handles both quick and complex reasoning tasks.
Claude 3.7 Sonnet is launching on Monday for all users and developers. However, only subscribers to Anthropic’s premium Claude plans will gain access to its reasoning features. Free users will receive a standard version without advanced reasoning, though Anthropic claims it still surpasses the previous flagship model, Claude 3.5 Sonnet. (The company notably skipped a version number.)
Pricing and Comparison
Pricing for Claude 3.7 Sonnet is set at $3 per million input tokens—equivalent to around 750,000 words, more than the entire Lord of the Rings trilogy—and $15 per million output tokens. While this makes it pricier than OpenAI’s o3-mini ($1.10 per million input tokens/$4.40 per million output tokens) and DeepSeek’s R1 (55 cents per million input tokens/$2.19 per million output tokens), those models specialize in reasoning alone, whereas Claude 3.7 Sonnet integrates both real-time and extended reasoning capabilities.
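The quoted per-token rates translate into per-request costs as a simple weighted sum. A minimal sketch (the example token counts are illustrative, not from the article):

```python
# Back-of-the-envelope cost check for the quoted Claude 3.7 Sonnet rates:
# $3 per million input tokens, $15 per million output tokens.

INPUT_RATE = 3.00 / 1_000_000    # dollars per input token
OUTPUT_RATE = 15.00 / 1_000_000  # dollars per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single request at the quoted rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a 10,000-token prompt with a 2,000-token answer:
print(round(request_cost(10_000, 2_000), 2))  # → 0.06
```

Note that output tokens cost five times as much as input tokens, so long deliberative answers dominate the bill.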
(Image: Anthropic’s new thinking modes. Image credits: Anthropic)
Claude 3.7 Sonnet is Anthropic’s first AI model designed for “reasoning,” a technique increasingly adopted by AI labs as traditional performance improvements slow down.
Models like o3-mini, R1, Google’s Gemini 2.0 Flash Thinking, and xAI’s Grok 3 (Think) take more time and computing power before generating responses. By breaking down problems into smaller steps, these models typically enhance accuracy. While they don’t think or reason like humans, their approach is inspired by deductive processes.
Future Automation of AI Reasoning
Anthropic aims for future versions of Claude to determine on their own how long to “think” about questions, eliminating the need for users to make that choice manually, according to Dianne Penn, the company’s product and research lead, in an interview with TechCrunch.
In a blog post shared with TechCrunch, Anthropic compared this approach to human cognition: “Just as people don’t have separate brains for immediate answers versus deep thinking, we believe reasoning should be a seamless capability within a frontier model rather than a feature confined to a separate system.”
To enhance transparency, Claude 3.7 Sonnet includes a “visible scratch pad” that reveals its internal planning process. Penn noted that while users will be able to see most of the AI’s reasoning, certain parts may be redacted for trust and safety reasons.
(Image: Claude’s thinking process in the Claude app. Image credits: Anthropic)
Anthropic has fine-tuned Claude’s reasoning modes for practical applications, such as solving complex coding challenges and handling autonomous tasks. Developers using Anthropic’s API can adjust the model’s “thinking budget,” balancing speed and cost against answer quality.
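As a rough illustration of what adjusting the “thinking budget” might look like, here is a hedged sketch of assembling request parameters for Anthropic’s Messages API. The parameter names (`thinking`, `budget_tokens`) and model identifier are assumptions drawn from public SDK documentation, not details confirmed by this article:

```python
# Hypothetical sketch: setting a "thinking budget" for a request.
# Field names and model id are assumptions, not from the article.

def build_request(prompt: str, budget_tokens: int) -> dict:
    """Assemble request kwargs. A larger budget_tokens trades latency
    and cost for more deliberate reasoning on hard problems."""
    return {
        "model": "claude-3-7-sonnet-latest",  # assumed model id
        "max_tokens": budget_tokens + 1024,   # must exceed the thinking budget
        "thinking": {"type": "enabled", "budget_tokens": budget_tokens},
        "messages": [{"role": "user", "content": prompt}],
    }

kwargs = build_request("Find the bug in this function...", 16_000)
# These kwargs would then be passed to a client call such as
# anthropic.Anthropic().messages.create(**kwargs)
```

The point of the knob is the speed/cost/quality trade-off the article describes: a small budget behaves like an instant-answer model, a large one buys more deliberation.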
In real-world coding evaluations, Claude 3.7 Sonnet demonstrated strong performance. On SWE-Bench, a benchmark for coding tasks, it achieved 62.3% accuracy, outperforming OpenAI’s o3-mini, which scored 49.3%. In TAU-Bench, a test assessing AI interaction with simulated users and external APIs in a retail environment, Claude 3.7 Sonnet scored 81.2%, surpassing OpenAI’s o1 model at 73.5%.
Improved Response Flexibility
Anthropic also claims that Claude 3.7 Sonnet is less likely to refuse valid prompts than previous versions. The model is designed to better distinguish between harmful and benign requests, reducing unnecessary refusals by 45% compared to Claude 3.5 Sonnet. This shift comes as some AI labs reconsider their approach to content restrictions.
Alongside Claude 3.7 Sonnet, Anthropic is introducing Claude Code, an agentic coding tool launching as a research preview. This tool allows developers to execute tasks directly from their terminal. In a demo, Anthropic employees showcased how a simple command like “Explain this project structure” enables Claude Code to analyze a codebase. Developers can modify code using plain English, while the tool explains its edits, tests for errors, and even pushes updates to GitHub.
Claude Code will initially be available to a limited number of users on a first-come, first-served basis, according to an Anthropic spokesperson.
Anthropic is launching Claude 3.7 Sonnet at a time when AI labs are rapidly releasing new models. The company has traditionally taken a cautious, safety-focused approach, but with this release, it aims to set the pace. However, competition looms—OpenAI’s CEO, Sam Altman, has hinted that OpenAI may introduce its own hybrid AI model within months.