Anthropic disclosed on Monday that three Chinese companies attempted to use its Claude chatbot to extract capabilities for improving their own models. In a company blog post, Anthropic named the firms as DeepSeek, Moonshot and MiniMax and said their activity produced more than 16 million interactions with Claude.
According to Anthropic, the interactions were generated through roughly 24,000 fake accounts. The company said those accounts and the resulting usage were contrary to Anthropic's terms of service and to regional access restrictions the company enforces.
Anthropic described the technique the groups used as "distillation." In the firm's description, distillation uses an older, more established and more powerful AI model to assess the quality of responses generated by a newer model; those evaluations then serve as a training signal, effectively transferring the older model's knowledge to the newer one.
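The evaluation-guided transfer described above can be sketched with toy stand-ins. Everything below is an illustrative assumption, not Anthropic's or the named firms' implementation: `teacher_score` is a crude heuristic standing in for a powerful model's quality judgment, and the "student" is just a weighted choice over canned answer templates.

```python
import random

random.seed(0)

# Toy "teacher": scores a candidate answer (higher = better).
# A powerful model would play this role in the reported campaigns;
# this keyword heuristic is purely an illustrative stand-in.
def teacher_score(answer: str) -> float:
    keywords = ["because", "therefore", "first"]
    return float(sum(1 for w in keywords if w in answer.lower()))

# Toy "student": samples an answer template according to learned weights.
templates = [
    "It works.",
    "It works because of X.",
    "First, consider X; therefore it works because of Y.",
]
weights = [1.0, 1.0, 1.0]

def sample_answer():
    idx = random.choices(range(len(templates)), weights=weights)[0]
    return idx, templates[idx]

# Distillation loop: the teacher's evaluations steer the student,
# reinforcing whichever outputs the stronger model rates highly.
for _ in range(200):
    idx, answer = sample_answer()
    weights[idx] += 0.1 * teacher_score(answer)

best = max(range(len(templates)), key=lambda i: weights[i])
```

After the loop, the student's weights concentrate on the template the teacher scores highest, which is the sense in which the teacher's judgments, rather than its training data, carry its knowledge across.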
Anthropic warned that the campaigns it observed are intensifying in both volume and sophistication. The company said: "These campaigns are growing in intensity and sophistication. The window to act is narrow, and the threat extends beyond any single company or region." That statement frames the activity as a broader concern rather than an isolated misuse of the service.
Anthropic reiterated the scale of the activity and characterized it as both a violation of its policies and a circumvention of regional controls intended to limit access. Beyond describing the distillation technique, however, the company did not disclose operational details about how the roughly 24,000 fake accounts were created or how the more than 16 million interactions were orchestrated.
DeepSeek AI and MiniMax did not immediately respond to requests for comment. Anthropic's public disclosure makes clear it regards the pattern of behavior as a growing, cross-border challenge for AI providers.
Key points
- Three Chinese firms - DeepSeek, Moonshot and MiniMax - generated over 16 million interactions with Claude using roughly 24,000 fake accounts.
- The groups used a method called distillation, in which an established, more powerful model evaluates a newer model's outputs, transferring the established model's knowledge to the newer one.
- Anthropic says the activity violated its terms of service and regional access restrictions and warns the campaigns are becoming more intense and sophisticated.
Risks and uncertainties
- The potential for broader misuse - Anthropic characterizes the threat as extending beyond any single company or region, indicating uncertainty about the scale and reach of similar campaigns.
- The increasing sophistication of these campaigns could make detection and enforcement more difficult for AI providers and regulators.
- Limited public detail - Anthropic described the technique and scope but did not disclose operational specifics about account creation or orchestration, leaving some questions about the exact mechanics of the activity.