Stock Markets February 23, 2026

Anthropic Says Three Chinese Firms Used Claude to Extract Capabilities for Their Own Models

Company reports millions of interactions via thousands of fake accounts and warns of growing, sophisticated campaigns

By Marcus Reed

Anthropic reported that three Chinese companies - DeepSeek, Moonshot and MiniMax - created more than 16 million interactions with the Claude chatbot using about 24,000 fake accounts to improperly obtain capabilities for their own AI models. The firm says the groups relied on a technique called distillation and that the activity violated its terms of service and regional access restrictions.

Key Points

  • Three Chinese firms - DeepSeek, Moonshot and MiniMax - generated over 16 million interactions with Claude using roughly 24,000 fake accounts.
  • They employed a technique called distillation, in which an older, more powerful model evaluates a newer model's outputs, using those evaluations to transfer its learnings to the newer model.
  • Anthropic says the actions violated its terms of service and regional access restrictions and warns the campaigns are increasing in intensity and sophistication.

Anthropic disclosed on Monday that three Chinese companies attempted to leverage its Claude chatbot in an effort to extract capabilities that could be used to improve their own models. In a company blog post, Anthropic named the firms as DeepSeek, Moonshot and MiniMax and said their activity produced more than 16 million interactions with Claude.

According to Anthropic, the interactions were generated through roughly 24,000 fake accounts. The company said those accounts and the resulting usage were contrary to Anthropic's terms of service and to regional access restrictions the company enforces.

Anthropic described the technique the groups used as "distillation." In the firm's description, distillation involves having an older, more established and more powerful AI model assess the quality of responses generated by a newer model. The process effectively transfers learnings from the older model to the newer one by using the older model's evaluations as a guide.
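The process Anthropic describes can be caricatured in a few lines of toy code. This is purely illustrative, with hypothetical stand-ins for both models; it is not a depiction of how any of the named firms actually operated.

```python
# Toy sketch of distillation as described in the article: an established
# "teacher" model scores responses drafted by a newer "student" model, and
# the highest-rated response becomes a training target for the student.
# All functions and heuristics here are hypothetical placeholders.

def teacher_score(prompt: str, response: str) -> float:
    """Stand-in for the established model's quality judgment."""
    # Toy heuristic: longer responses that echo the prompt's topic score higher.
    on_topic = prompt.split()[0].lower() in response.lower()
    return len(response) + (10.0 if on_topic else 0.0)

def student_candidates(prompt: str) -> list[str]:
    """Stand-in for the newer model sampling several draft responses."""
    return [
        f"{prompt} answered briefly",
        f"{prompt} answered in more detail than before",
    ]

def distill_example(prompt: str) -> tuple[str, str]:
    """Keep the candidate the teacher rates highest as a (prompt, target) pair."""
    best = max(student_candidates(prompt), key=lambda r: teacher_score(prompt, r))
    return (prompt, best)

prompt, target = distill_example("Explain gravity")
```

At scale, repeating this loop over millions of prompts lets the teacher's judgments steer the student's training data, which is why Anthropic frames high-volume automated querying as a transfer of capabilities.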

Anthropic warned that the campaigns it observed are intensifying in both volume and sophistication. The company said: "These campaigns are growing in intensity and sophistication. The window to act is narrow, and the threat extends beyond any single company or region." That statement frames the activity as a broader concern rather than an isolated misuse of the service.

The three named companies were reported to have created a substantial volume of interactions and to have relied on a large number of fabricated accounts to do so. Anthropic characterized the behavior as a violation of its policies and as a circumvention of regional controls intended to limit access.

Anthropic noted the scale of the activity - more than 16 million interactions - and the method used to obtain model insights, but did not provide additional operational details about how the fake accounts were created or how the interactions were orchestrated beyond describing the distillation technique.

Requests for comment sent to DeepSeek AI and MiniMax were not immediately answered. Anthropic's public disclosure makes clear it regards the pattern of behavior as a growing, cross-border challenge for AI providers.


Risks and uncertainties

  • The potential for broader misuse - Anthropic characterizes the threat as extending beyond any single company or region, indicating uncertainty about the scale and reach of similar campaigns.
  • The increasing sophistication of these campaigns could make detection and enforcement more difficult for AI providers and regulators.
  • Limited public detail - Anthropic described the technique and scope but did not disclose operational specifics about account creation or orchestration, leaving some questions about the exact mechanics of the activity.
