16 Million Queries Used to Clone Claude's Capabilities
Anthropic disclosed on February 24, 2026 that it identified "industrial-scale" distillation campaigns by three Chinese AI companies — DeepSeek, Moonshot AI, and MiniMax — that collectively generated over 16 million exchanges with Claude through approximately 24,000 fraudulent accounts. The campaigns violated Anthropic's terms of service and regional access restrictions, systematically extracting Claude's reasoning capabilities to improve rival models.
Of the three firms, MiniMax drove the most traffic, accounting for over 13 million of the 16 million exchanges. The campaigns followed a similar playbook — using fraudulent accounts and proxy services to access Claude at scale while evading detection.
Campaign Breakdown
| Attribute | DeepSeek | Moonshot AI | MiniMax |
|---|---|---|---|
| Exchanges | ~150,000 | ~3.4 million | ~13+ million |
| Focus Areas | Reasoning, grading rubrics, censorship-safe alternatives | Agentic reasoning, tool use, coding, computer vision | Broad capability extraction |
| Notable Targeting | Politically sensitive query handling | Computer-use agent development | Highest volume distillation |
Across all three campaigns, approximately 24,000 fraudulent accounts were used in total.
What Is Distillation?
Model distillation is a technique in which a smaller AI model is trained to mimic the behavior of a larger, more capable model by learning from the larger model's outputs. Through distillation, a less-resourced team can effectively clone the capabilities of a frontier model without the compute, data, and research investment required to develop those capabilities independently.
In a distillation attack, the attacker:
- Generates massive volumes of prompts targeting specific capabilities
- Collects the target model's responses across thousands of task variations
- Uses those input-output pairs as training data for their own model
- Achieves comparable performance at a fraction of the development cost
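The first three steps above can be sketched as a minimal training-data collection loop. Everything here is an illustrative assumption: `query_target_model` is a hypothetical stand-in for a provider API call, and the templates and JSONL pair format are generic fine-tuning conventions, not any specific lab's pipeline.

```python
import json
import random

def query_target_model(prompt: str) -> str:
    # Hypothetical placeholder for a call to the target model's API.
    # A real attack would issue millions of such calls; we just echo
    # a canned string for illustration.
    return f"[model response to: {prompt}]"

# Step 1: generate many prompt variations targeting one capability area.
TEMPLATES = [
    "Explain step by step how to {task}.",
    "Write Python code that {task}.",
    "What tools would you use to {task}?",
]
TASKS = ["sort a list", "parse a CSV file", "call a REST API"]

def generate_prompts(n: int) -> list[str]:
    return [
        random.choice(TEMPLATES).format(task=random.choice(TASKS))
        for _ in range(n)
    ]

# Steps 2-3: collect responses and keep the input-output pairs in the
# JSONL shape commonly used as supervised fine-tuning data.
def build_distillation_set(n: int) -> list[dict]:
    pairs = []
    for prompt in generate_prompts(n):
        pairs.append({"prompt": prompt,
                      "completion": query_target_model(prompt)})
    return pairs

for pair in build_distillation_set(3):
    print(json.dumps(pair))
```

Step 4 is then ordinary fine-tuning on these pairs, which is why the attack scales with query volume: more pairs cover more task variations.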
How Each Campaign Operated
DeepSeek: Reasoning and Censorship Evasion
DeepSeek's campaign targeted Claude's reasoning capabilities and rubric-based grading tasks, generating approximately 150,000 exchanges. Notably, DeepSeek also sought Claude's help in generating censorship-safe alternatives to politically sensitive queries — including questions about dissidents, party leaders, and authoritarianism — suggesting an effort to train models that could handle sensitive topics without triggering Chinese content filters.
Moonshot AI: Agentic and Coding Capabilities
Moonshot AI focused on Claude's agentic reasoning and tool use, coding capabilities, computer-use agent development, and computer vision across over 3.4 million exchanges. This campaign targeted the specific capabilities that differentiate Claude in the agentic AI space — the ability to reason about and use external tools, write code, and interact with computer interfaces.
MiniMax: Broad-Spectrum Extraction
MiniMax conducted the largest campaign by far, driving over 13 million exchanges with Claude. The breadth of MiniMax's prompts suggests a strategy of comprehensive capability extraction rather than targeting specific features — effectively attempting to distill Claude's general intelligence at scale.
Detection and Attribution
Anthropic detected the campaigns through anomalous usage patterns that distinguished the distillation traffic from legitimate use:
- Volume: The sheer number of exchanges from coordinated account clusters
- Structure: Prompt patterns designed for systematic capability extraction rather than normal conversation
- Focus: Concentrated targeting of specific capability areas across related accounts
- Evasion: Use of proxy services and fraudulent accounts to circumvent geographic and rate-limit restrictions
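A defender-side sketch of the first three signals might score each account on raw volume, prompt-template repetition, and topical concentration. The thresholds, field names, and scoring logic below are illustrative assumptions, not Anthropic's actual detection system.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class AccountActivity:
    account_id: str
    num_exchanges: int           # total queries from this account
    prompt_templates: list[str]  # normalized prompt shapes observed
    topic_tags: list[str]        # coarse capability areas queried

# Illustrative thresholds; a real system would tune these empirically.
VOLUME_THRESHOLD = 10_000
TEMPLATE_REPEAT_RATIO = 0.8   # share of traffic using one prompt shape
TOPIC_CONCENTRATION = 0.9     # share of traffic in one capability area

def _top_share(items: list[str]) -> float:
    # Fraction of items accounted for by the single most common value.
    return Counter(items).most_common(1)[0][1] / len(items)

def flag_suspicious(acct: AccountActivity) -> list[str]:
    """Return which distillation signals an account trips."""
    signals = []
    if acct.num_exchanges > VOLUME_THRESHOLD:
        signals.append("volume")
    if acct.prompt_templates and _top_share(acct.prompt_templates) >= TEMPLATE_REPEAT_RATIO:
        signals.append("structure")
    if acct.topic_tags and _top_share(acct.topic_tags) >= TOPIC_CONCENTRATION:
        signals.append("focus")
    return signals

acct = AccountActivity("acct-001", 50_000,
                       ["T1"] * 95 + ["T2"] * 5, ["coding"] * 100)
print(flag_suspicious(acct))  # prints ['volume', 'structure', 'focus']
```

In practice such per-account scores would be aggregated across clusters of related accounts, since splitting traffic over ~24,000 accounts is precisely how per-account limits are evaded.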
The three campaigns followed nearly identical operational playbooks, despite originating from different companies — suggesting either shared tactics or a common understanding of distillation attack methodologies in the Chinese AI ecosystem.
Impact Assessment
| Impact Area | Description |
|---|---|
| IP theft | Systematic extraction of Claude's trained capabilities worth billions in R&D investment |
| Competitive advantage | Enables Chinese labs to close the capability gap without equivalent research spending |
| Export control implications | Distillation attacks circumvent the intent of U.S. AI chip export controls |
| Terms of service | 24,000 fraudulent accounts created in violation of Anthropic's ToS and regional restrictions |
| Industry precedent | First major public disclosure of coordinated distillation attacks by named companies |
| Policy debate | Disclosure lands as the U.S. debates AI chip export policy, strengthening the case for restrictions |
Industry Response
The disclosure arrives at a politically charged moment — the U.S. government is actively debating whether to tighten export controls on AI chips and models to China. Anthropic's public naming of DeepSeek, Moonshot AI, and MiniMax provides concrete evidence for policymakers arguing that Chinese AI companies are systematically extracting capabilities from U.S. frontier models rather than developing them independently.
Google has also reported similar distillation-style attacks on its Gemini models, with attackers using hundreds of thousands of prompts to extract capabilities. The Anthropic disclosure suggests this is an industry-wide problem affecting all major frontier AI providers.
Key Takeaways
- 16 million exchanges across 24,000 fraudulent accounts used by three Chinese AI companies to extract Claude's capabilities
- MiniMax was the largest attacker with 13+ million exchanges; Moonshot AI generated 3.4 million; DeepSeek conducted 150,000 targeted exchanges
- DeepSeek targeted censorship evasion — sought Claude's help generating alternatives to politically sensitive queries
- Moonshot AI targeted agentic capabilities — focused on tool use, coding, and computer-use agent development
- First public attribution of industrial-scale distillation attacks against a named frontier AI provider
- Policy implications — Disclosure strengthens the case for U.S. AI export controls as Chinese labs exploit API access to clone capabilities
Sources
- CNBC — Anthropic Accuses DeepSeek, Moonshot and MiniMax of Distillation Attacks on Claude
- The Hacker News — Anthropic Says Chinese AI Firms Used 16 Million Claude Queries to Copy Model
- Bloomberg — Anthropic Accuses DeepSeek, MiniMax, Moonshot of Illicit AI Model Distillation
- TechCrunch — Anthropic Accuses Chinese AI Labs of Mining Claude as U.S. Debates AI Chip Exports
- Anthropic — Detecting and Preventing Distillation Attacks