Anthropic claims illegal use of Claude by Chinese AI firms

Anthropic claims that DeepSeek and two other Chinese AI companies are using its Claude model in fraudulent ways.

Much has been said about what is and isn’t allowed when it comes to training AI models, but recent actions by a small number of China-based AI companies have drawn the ire of Anthropic.

“We identified an industrial-scale campaign by three AI labs, DeepSeek, Moonshot, and MiniMax, that fraudulently extracted Claude’s functionality to refine their own models. Through approximately 24,000 fraudulent accounts, these labs generated more than 16 million interactions with Claude, in violation of our Terms of Service and local access restrictions,” Anthropic explained in a blog post.

AI companies have taken particular issue with a training technique called distillation, which their Chinese competitors are said to have used. Distillation itself is not illegal; it is a common practice in which a smaller, less capable model is trained on the outputs of a more powerful one.
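
To make the technique concrete, here is a minimal, hypothetical sketch of what distillation looks like in code. The toy PyTorch teacher and student networks, the random inputs, and the hyperparameters are illustrative assumptions, not any lab's actual pipeline:

```python
# Illustrative knowledge-distillation sketch: a small "student" network is
# trained to match the softened output distribution of a larger, frozen
# "teacher" network. Models and data here are toy stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical stand-in models; real distillation targets LLMs, not tiny MLPs.
teacher = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 10))
student = nn.Sequential(nn.Linear(16, 10))  # far fewer parameters

teacher.eval()  # the teacher is only queried, never updated
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens the teacher's distribution

for step in range(100):
    x = torch.randn(32, 16)  # placeholder inputs; real runs use prompts/data
    with torch.no_grad():
        teacher_logits = teacher(x)
    student_logits = student(x)
    # KL divergence between the softened teacher and student distributions.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature**2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In disputes like this one, the "teacher" is a frontier LLM queried at scale through an API, which is why terms of service, rather than the technique itself, are at issue.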

In these instances, however, Anthropic believes its Chinese competitors abused the system, using distillation as a shortcut around work that would otherwise have taken them far longer to do themselves.

“Distillation is a widely used and legitimate training method. For example, frontier AI labs regularly distill their own models to create smaller, cheaper versions for their customers. But distillation can also be used for illicit purposes. Competitors can use distillation to obtain powerful capabilities from other labs in a fraction of the time, and at a fraction of the cost, of developing them independently,” the company explained.

Anthropic also claims that such training techniques could lead to potentially dangerous uses of AI, pointing in particular to possible military applications.

“Anthropic and other US companies are building systems that prevent state and non-state actors from using AI to develop biological weapons or carry out malicious cyber activities, for example. Models built through illegal distillation are unlikely to retain these safeguards, and dangerous capabilities can proliferate with many protections completely stripped away,” the post stressed.

“Foreign labs that distill US models could supply these unprotected capabilities to military, intelligence, and surveillance systems, allowing authoritarian governments to deploy cutting-edge AI for offensive cyber operations, disinformation campaigns, and mass surveillance,” the post continued.

As of this writing, none of the three Chinese AI companies has commented on the matter, but given the seriousness of Anthropic's allegations, regulators and even the US government could intervene.

The United States and China have been at odds over AI for several years, particularly over access to the GPUs and other silicon needed to develop advanced models. If Chinese AI companies are indeed engaged in illegal distillation, further action may follow.

Anthropic has taken steps to counter this latest discovery, but it remains to be seen how effective they will be given the evolving sophistication of such training techniques.
