Anthropic, the creator of the chatbot Claude, published a detailed report on Monday, February 23, 2026, highlighting the intensifying “AI Cold War.” The report alleges that three major Chinese AI companies (DeepSeek, Moonshot, and MiniMax) engaged in a massive, coordinated campaign to harvest Claude’s proprietary capabilities.
The scale of this operation is staggering. Anthropic claims these companies generated more than 16 million interactions with Claude. To execute this, the firms allegedly bypassed regional restrictions and safety protocols.
They created approximately 24,000 fraudulent accounts to hide their activity. This breach violates Anthropic’s terms of service and raises significant questions about global AI ethics.
What is “AI Distillation”? Understanding the Heist
At the heart of this controversy lies a sophisticated technique known as “AI Distillation.” While researchers usually use distillation to make models more efficient, Anthropic argues these firms used it for intellectual property theft.
In an ethical setting, a “Teacher” model (a powerful AI) helps a “Student” model (a smaller AI) learn how to process information. However, the Chinese firms allegedly treated Claude as an unwitting “Teacher.”
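The textbook version of this teacher–student transfer can be sketched in a few lines: the student learns to match the teacher’s *softened* probability distribution rather than just its final answer. The toy logits and temperature below are illustrative only and are not drawn from any real model:

```python
# Textbook distillation in miniature: the student is trained to match
# the teacher's softened output distribution, not just its top answer.
# Pure-Python sketch with made-up numbers; real systems use ML frameworks.

import math

def softmax(logits, temperature=1.0):
    """Convert raw scores into probabilities; higher temperature
    flattens the distribution, exposing the teacher's 'dark knowledge'."""
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy between the teacher's softened distribution and the
    student's; a smaller value means the student mimics the teacher better."""
    p = softmax(teacher_logits, temperature)  # teacher "soft labels"
    q = softmax(student_logits, temperature)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

teacher = [4.0, 1.0, 0.5]       # teacher is confident in class 0
good_student = [3.9, 1.1, 0.4]  # nearly matches the teacher
bad_student = [0.5, 4.0, 1.0]   # disagrees with the teacher

print(distillation_loss(teacher, good_student)
      < distillation_loss(teacher, bad_student))  # True
```

Minimizing this loss across many examples is what “transfers” the teacher’s reasoning patterns into the smaller model.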
The firms fed their own models’ outputs into Claude. They then asked Claude to grade, refine, or explain those results. This process effectively “transferred” advanced reasoning into their own systems. Consequently, they improved their models at a fraction of the cost and time it took Anthropic to build Claude from scratch.
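In generic terms, the workflow the report describes resembles a distillation data pipeline: submit a draft answer, collect the stronger model’s graded or refined output, and store the pair as supervised training data. The sketch below is purely hypothetical; `query_teacher` is a stub standing in for any teacher model, not a real API:

```python
# Hypothetical sketch of a generic distillation data pipeline.
# query_teacher is a stand-in for calling any stronger "teacher" model;
# it is stubbed out here purely for illustration.

def query_teacher(prompt: str, draft: str) -> str:
    """Stub: a real pipeline would send the prompt and the student's
    draft to the teacher model, asking it to grade or refine the draft."""
    return f"Refined answer to: {prompt}"

def build_distillation_set(samples):
    """Turn (prompt, student_draft) pairs into supervised
    (prompt, teacher_output) training examples."""
    dataset = []
    for prompt, draft in samples:
        refined = query_teacher(prompt, draft)
        dataset.append({"prompt": prompt, "target": refined})
    return dataset

samples = [("What is 2 + 2?", "It is 5."),
           ("Name a prime number.", "Four.")]
training_data = build_distillation_set(samples)
print(len(training_data))  # 2
```

Run at the scale Anthropic alleges, millions of such interactions, a pipeline like this yields a large fine-tuning corpus at a fraction of the cost of original model development.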
The Trio of Accused: Heavyweights of China’s AI Sector
The report names three major players in China’s burgeoning AI sector:
- DeepSeek AI: This firm builds high-performance open-source models that rival Western counterparts in coding.
- Moonshot AI: This Beijing-based “unicorn” focuses on long-context processing, a signature feature of Claude.
- MiniMax: This company specializes in social AI and character-driven interactions for the Asian market.
Anthropic’s report suggests that these companies sought to close the “capability gap” between Eastern and Western AI. By using Claude’s refined responses, they bypassed the expensive “trial and error” phase of model training.
A Sophisticated and Growing Threat
Anthropic did more than report a breach; they issued a warning to the entire industry. The blog post emphasized that these campaigns grow in intensity and sophistication every day.
“The window to act is narrow,” the post stated. “The threat extends beyond any single company or region.” This sentiment echoes growing concerns in Washington and Brussels about “data sovereignty.” If a competitor can siphon intelligence through a user interface, traditional trade secrets offer little protection.
The use of 24,000 fake accounts points to a highly automated infrastructure. These accounts likely used sophisticated VPNs to hide their origins in China, a region where Anthropic does not officially offer Claude.
The Ethical and Legal Grey Zone
This controversy sits in a difficult legal area. While courts have often treated the scraping of public web data as lawful, using a rival’s API outputs to train a direct competitor clearly violates “Terms of Service” (ToS) agreements.
However, U.S. companies struggle to enforce their ToS against firms based in China. This incident will likely increase pressure on the U.S. Department of Commerce. Lawmakers may implement stricter “Know Your Customer” (KYC) requirements for AI cloud providers to prevent unauthorized access.
Industry Response and Strategic Silence
As of Monday afternoon, the accused companies have remained silent. DeepSeek AI and MiniMax did not immediately respond to requests for comment.
Industry analysts suggest that this silence is strategic. If these firms admit to using distillation, Western cloud services like AWS or Google Cloud might “blacklist” them. Many international firms still rely on these services for global reach and compute power.
The Future of AI Protection
This event serves as a wake-up call for AI developers. In the coming months, we expect a new era of “Anti-Distillation” technology to emerge.
Potential Defense Strategies:
- Watermarking Responses: Developers can embed invisible patterns in AI text to identify if other models used it for training.
- Behavioral Analysis: Security teams can use AI to watch for “non-human” questioning patterns that suggest model probing.
- Regional Lockouts: Companies may implement aggressive geolocation and identity verification for all users.
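To illustrate the second idea, behavioral analysis, a deliberately crude heuristic might flag sessions whose request timing is too regular, or whose prompts are too template-like, to be human. All features and thresholds below are hypothetical; a production system would rely on learned models:

```python
# Illustrative heuristic for spotting "non-human" questioning patterns.
# Thresholds and features are hypothetical, chosen only to show the idea.

from statistics import pstdev

def looks_automated(timestamps, prompts,
                    min_interval_stdev=0.5,
                    max_template_ratio=0.5):
    """Flag a session as automated if request timing is unnaturally
    regular or too many prompts share the same opening words."""
    # Feature 1: humans act at irregular intervals; scripts often don't.
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    too_regular = len(gaps) > 1 and pstdev(gaps) < min_interval_stdev

    # Feature 2: scripted probing tends to reuse one prompt template.
    openings = [" ".join(p.split()[:3]) for p in prompts]
    most_common = max(openings.count(o) for o in set(openings))
    too_templated = most_common / len(prompts) > max_template_ratio

    return too_regular or too_templated

# A perfectly periodic session reusing one template is flagged:
print(looks_automated([0, 1, 2, 3],
                      ["Grade this answer: A", "Grade this answer: B",
                       "Grade this answer: C", "Grade this answer: D"]))  # True
```

Real defenses would combine many such signals with account metadata, but even this toy version shows why 24,000 coordinated accounts leave a statistical fingerprint.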
Conclusion: A New Era of Competition
The “Claude Heist” reminds us that in the age of generative AI, data is the new oil, but reasoning is the new gold. Anthropic’s decision to go public marks a shift from quiet security to “naming and shaming” as a deterrent.
As the AI race accelerates, the lines between competition and theft will continue to blur. For now, Anthropic has drawn a line in the sand. They have signaled that they will no longer allow rivals to export their hard-earned intelligence through the back door.
