
The team behind Claude is speaking out.
Anthropic has revealed what it describes as a large-scale and coordinated attempt by several Chinese AI firms to copy its flagship model, Claude, through systematic misuse of its paid API. In a detailed blog post released Monday, the company outlined evidence that three companies — DeepSeek, Moonshot AI, and MiniMax — allegedly carried out massive “distillation attacks” designed to replicate Claude’s reasoning and coding capabilities.
According to Anthropic, the activity involved the creation of more than 24,000 fraudulent accounts and generated approximately 16 million interactions targeting Claude’s strongest capabilities. MiniMax accounted for more than 13 million exchanges, Moonshot AI for roughly 3.4 million, and DeepSeek for around 150,000.
Anthropic says its internal investigation identified patterns consistent with coordinated scraping efforts, including synchronized traffic bursts and activity linked to specific research groups. The company emphasized that the attacks were not casual misuse but sustained attempts to extract high-value outputs at scale.
We’ve identified industrial-scale distillation attacks on our models by DeepSeek, Moonshot AI, and MiniMax.
These labs created over 24,000 fraudulent accounts and generated over 16 million exchanges with Claude, extracting its capabilities to train and improve their own models.
— Anthropic (@AnthropicAI) February 23, 2026
Drawing a Line Between Public Data and Paid Access
The accusations quickly sparked debate across the AI community.
Critics, including Elon Musk, questioned Anthropic’s stance by pointing to past industry practices around scraping publicly available content for AI training. Musk referenced prior controversies and legal settlements involving book data collection, arguing that the broader AI ecosystem has grappled with murky data boundaries.
Anthropic responded by drawing a sharp distinction: training on publicly accessible web data, it argues, is fundamentally different from breaching paid API terms through fake accounts and automated extraction. The company framed the issue as one of contractual violation and cybersecurity, not open-web training norms.
The clash underscores escalating U.S.–China tensions in artificial intelligence, where technical competition increasingly overlaps with national security concerns. Anthropic called for industry-wide defensive measures to protect model integrity and prevent unauthorized distillation attacks.
Anthropic is guilty of stealing training data at massive scale and has had to pay multi-billion dollar settlements for their theft. This is just a fact. https://t.co/EEtdsJQ1Op
— Elon Musk (@elonmusk) February 23, 2026
Pentagon Meeting Signals Bigger Questions
The timing of the disclosure is notable.
U.S. Defense Secretary Pete Hegseth is scheduled to meet Tuesday with Anthropic CEO Dario Amodei. The meeting comes amid ongoing debate over the military’s adoption of advanced AI systems.
Anthropic stands apart from peers such as OpenAI, Google, and xAI in its cautious posture toward certain military deployments. While the Pentagon has awarded each of these companies contracts worth up to $200 million, Anthropic has publicly voiced ethical concerns about fully autonomous weapons and AI-enabled mass surveillance.
In a recent essay, Amodei warned about the risks of powerful AI systems being used to monitor populations or make high-stakes battlefield decisions without sufficient oversight. He has argued for what he calls a “realistic, pragmatic” approach to managing AI risks, even as adoption accelerates.
Meanwhile, Hegseth has signaled a preference for AI systems “without ideological constraints” in lawful military applications. The Pentagon’s evolving internal AI network — including systems like Grok from xAI and custom deployments of ChatGPT — reflects a growing embrace of AI tools in defense environments.
A Defining Moment for AI Governance
The dispute places Anthropic at the center of two major global conversations: protecting proprietary AI systems from foreign copying efforts and defining ethical guardrails for military AI use.
Industry analysts note that AI is already embedded in logistics, intelligence analysis, and back-office defense functions. However, battlefield and lethal-force applications raise higher-stakes questions that governments and companies alike are still working to answer.
Anthropic says it remains committed to advancing AI responsibly while protecting its technology from misuse. The company is calling for stronger safeguards, clearer enforcement mechanisms, and broader collaboration across the industry to defend against coordinated model extraction efforts.
As AI systems grow more capable — and more geopolitically significant — the tension between innovation, security, and ethics is only intensifying.
