Anthropic claims its Chinese competitors are stealing from it to gain an edge in the global AI race.
Anthropic said Monday that three of China’s largest AI labs, DeepSeek, MiniMax and Moonshot AI, are “illegally” using Claude “to improve their own models” through a process called distillation.
“These campaigns are increasing in intensity and sophistication,” Anthropic said as part of a lengthy statement on Monday. “The room for action is narrow, and the threat extends beyond a single company or region. Addressing this requires swift and coordinated action among industry players, policymakers, and the global AI community.”
Anthropic said the distillation activity was an “industrial-scale campaign” involving approximately 24,000 fraudulent Claude accounts and more than 16 million exchanges “in violation of our terms of service and local access restrictions.”
Distillation is the process of training a less powerful model on the outputs of a more powerful one. It is a legitimate technique that many U.S. companies use to train models they release themselves, but large U.S. firms increasingly say their Chinese competitors are exploiting it to copy their work.
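For readers curious about the mechanics, below is a minimal sketch of distillation in its legitimate form, written in PyTorch. The toy “teacher” and “student” networks, their sizes, and the temperature value are illustrative assumptions, not details from Anthropic's report: the idea is simply that a small student is trained to match the softened output distribution of a larger, frozen teacher.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Hypothetical toy models: a stronger "teacher" and a smaller "student".
    teacher = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))
    student = nn.Sequential(nn.Linear(128, 32), nn.ReLU(), nn.Linear(32, 10))

    def distillation_loss(student_logits, teacher_logits, temperature=2.0):
        # Soften both output distributions with a temperature, then minimize
        # the KL divergence so the student mimics the teacher's behavior.
        soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
        log_probs = F.log_softmax(student_logits / temperature, dim=-1)
        return F.kl_div(log_probs, soft_targets, reduction="batchmean") * temperature ** 2

    optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
    inputs = torch.randn(64, 128)  # stand-in for a batch of real inputs

    with torch.no_grad():  # the teacher is only queried, never updated
        teacher_logits = teacher(inputs)

    optimizer.zero_grad()
    loss = distillation_loss(student(inputs), teacher_logits)
    loss.backward()
    optimizer.step()

In the abuse Anthropic describes, the “teacher” outputs are not a company's own model but responses harvested from a competitor's API.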
OpenAI said in January 2025 that DeepSeek may have “improperly” used OpenAI output to train its models. Earlier this month, Google revealed that it had seen an increase in model extraction attempts, or “distillation attacks.”
“Competitors can take advantage of this to obtain powerful capabilities from other labs in a fraction of the time and at a fraction of the cost of developing them themselves,” Anthropic said Monday.
Anthropic detailed the lengths it says DeepSeek, MiniMax, and Moonshot AI went to in order to game its systems. Claude is not commercially available in China, but Anthropic said the rival labs found workarounds.
Among its notable findings, Anthropic said DeepSeek aimed to create a “censorship-safe alternative for policy-sensitive queries.” The company also said it was able to detect MiniMax campaigns while they were “still active” and take a closer look at what its competitors were doing.
“When we released a new model during MiniMax’s active campaign, they pivoted within 24 hours and redirected nearly half of their traffic to capture the capabilities of their latest system,” Anthropic said.
Representatives for DeepSeek, MiniMax and Moonshot AI did not immediately respond to Business Insider’s requests for comment.
Beyond the alleged fraud, Anthropic said improper distillation poses a safety risk: distilled models may lack adequate safeguards against helping someone develop biological weapons.
In response to such distillation campaigns, Anthropic said it has deployed a “behavioral fingerprinting system,” shared indicators with other AI companies on what to look out for, and continues to develop additional countermeasures.
Anthropic CEO Dario Amodei recently wrote that leading models are approaching the point where, without proper safeguards, they could help someone produce biological weapons.
Amodei is also an outspoken supporter of U.S. export controls, a divisive topic among the CEOs of major technology companies. Nvidia CEO Jensen Huang has repeatedly said that restricting U.S. companies, including his own, from selling advanced chips to China will not stifle China’s AI advances.
“The distillation attack therefore strengthens the rationale for export controls. Restricting access to chips limits both direct model training and the scale of illegal distillation,” Anthropic said.
Anthropic itself faces allegations that it used copyrighted material to train its models. In January, the Washington Post reported new details about the company’s effort, called Project Panama, which it described as a “destructive scan of every book in the world.” Last year, Anthropic paid $1.5 billion to settle a class-action lawsuit brought by authors and publishers whose books it used. As part of the settlement, the company admitted no wrongdoing.
