US artificial intelligence company Anthropic announced on Monday that it had discovered what it described as industrial-scale intellectual property theft by three Chinese AI companies that illegally extracted functionality from its Claude chatbot. OpenAI made a similar claim last month.
According to Anthropic, DeepSeek, Moonshot AI, and MiniMax used a technique called “distillation,” in which the output of a more powerful AI system is used to rapidly improve the performance of a less capable one.
“These campaigns are increasing in intensity and sophistication,” the company said in a statement. “There is little room for maneuver.”
Distillation is a common technique in AI development, often used by companies to create cheaper, smaller versions of their own models.
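The core idea can be illustrated with a minimal sketch of classic knowledge distillation, in which a student model is trained to match a teacher model's softened output distribution. This is a generic, illustrative formulation in plain Python; the function names are hypothetical and do not reflect any company's actual code or training pipeline.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to a probability distribution at a given temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between the teacher's softened distribution and the student's.

    A temperature above 1 flattens the teacher's distribution, exposing the
    relative probabilities it assigns to non-top answers -- the signal the
    student learns from.
    """
    p = softmax(teacher_logits, temperature)  # teacher targets
    q = softmax(student_logits, temperature)  # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student whose logits match the teacher's incurs zero loss;
# a mismatched student incurs a positive loss it can minimize by training.
teacher = [4.0, 1.0, 0.2]
print(distillation_loss([4.0, 1.0, 0.2], teacher))  # 0.0
print(distillation_loss([0.2, 1.0, 4.0], teacher) > 0)  # True
```

In the scenario Anthropic describes, the "teacher" signal would come not from direct access to model internals but from large volumes of prompt-and-response pairs collected through a chatbot's public interface.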
The practice made headlines a year ago when a low-cost generative AI model released by DeepSeek matched the performance of ChatGPT and other top US chatbots, upending assumptions of US superiority in the field.
Anthropic said the companies achieved this through approximately 16 million interactions with its Claude models, carried out via some 24,000 fake accounts.
These allowed the three companies to siphon off capabilities they had not developed themselves at a fraction of the cost, circumventing strict U.S. technology export controls aimed at maintaining American dominance in AI.
The company argued that the practice poses a national security risk, saying models built through illicit distillation are unlikely to retain the safety guardrails designed to prevent abuse, such as restrictions on supporting the development of biological weapons or enabling cyberattacks.
OpenAI, the developer of ChatGPT and Anthropic’s biggest rival, made similar accusations to U.S. lawmakers earlier this month, saying Chinese companies were engaged in “ongoing efforts to free ride on capabilities developed by OpenAI and other U.S. frontier laboratories.”
Anthropic said MiniMax ran the largest of the three operations, with more than 13 million interactions. Each campaign focused on coding, agentic reasoning, and tool use, areas in which Claude is considered a leader.
To circumvent Anthropic’s ban on commercial access from China, the labs allegedly routed their traffic through proxy services that managed vast networks of fraudulent accounts.
Anthropic called for a coordinated response between industry and government to address an issue it said no single company could tackle alone.
