
Anthropic says DeepSeek, Moonshot AI, and MiniMax created over 24,000 fake accounts to extract data from Claude

2026/02/24 03:21
3 min read

Anthropic says three Chinese AI firms built more than 24,000 fake accounts to pull data from its Claude system. The company says the goal was to boost their own models fast.

The firms named were DeepSeek, Moonshot AI, and MiniMax. Anthropic said those accounts sent over 16 million prompts into Claude to gather responses and patterns that could be reused for training.

Anthropic, led by CEO Dario Amodei, shared the details in a blog post on Monday. The company said the activity was a form of distillation, a process that uses the outputs of one model to train another.
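The distillation process described above can be illustrated with a toy sketch. This is not Anthropic's or any lab's actual pipeline; the teacher, student, and prompts below are all hypothetical stand-ins. In practice the "teacher" would be a large model queried at scale and the "student" a neural network fine-tuned on the harvested pairs.

```python
# Toy sketch of model distillation: a "student" is trained to reproduce
# a "teacher" model's outputs. All names here are illustrative stand-ins.

def teacher(prompt: str) -> str:
    # Stand-in for a large model's API (queried via many accounts at scale).
    return prompt.upper()

# Step 1: harvest prompt/response pairs from the teacher.
prompts = ["hello", "world", "claude"]
dataset = [(p, teacher(p)) for p in prompts]

# Step 2: "train" the student on the harvested pairs. A real student would
# be fine-tuned on millions of such pairs; this one simply memorizes them.
student = dict(dataset)

# Step 3: the student now mimics the teacher on seen inputs.
print(student["hello"])  # HELLO
```

The key point is that the student never needs the teacher's weights or training data, only enough input/output pairs to imitate its behavior.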

Anthropic said DeepSeek ran about 150,000 interactions with Claude, Moonshot AI logged more than 3.4 million prompts, and MiniMax reached over 13 million. Anthropic said the scale shows a clear intent to extract value at speed.

OpenAI flags similar behavior in Washington

Earlier this month, OpenAI, led by Sam Altman, sent a memo to House lawmakers accusing DeepSeek of using the same distillation tactic to copy its systems. The company told lawmakers that DeepSeek tried to mimic its products through large prompt volumes.

Anthropic said distillation itself has valid uses. Companies use it to build smaller versions of their own models. Anthropic also said the same method can create rival systems in a fraction of the time and at a fraction of the cost.

Synthetic data now plays a large role in training big foundation models. Developers use it because high-quality real data is limited. Many labs are also building agentic systems that can take action for users. In a July technical report, Moonshot said it used synthetic data to train its Kimi K2 model.

Anthropic said the activity raises national security concerns. The company stated that foreign labs that distill American models can feed those capabilities into military, intelligence, and surveillance systems.

Markets react as Anthropic launches new security tool

Anthropic also rolled out a new security tool for Claude on Friday in a limited research preview. The tool scans software code for weaknesses and suggests fixes. Anthropic plans to hold an enterprise briefing on Tuesday with more product announcements.

Markets reacted fast. Cybersecurity stocks fell for a second day on Monday as investors worried that new AI tools could replace older security services.

CrowdStrike dropped about 9 percent. Zscaler also fell about 9 percent. Netskope slid nearly 10 percent. SailPoint declined 6 percent. Okta, SentinelOne, and Fortinet each lost more than 4 percent. Palo Alto Networks was down 2 percent.

Cloudflare fell 7 percent after recent gains tied to Moltbot interest. The iShares Cybersecurity and Tech ETF fell almost 4 percent. The Global X Cybersecurity ETF hit its lowest level since November 2023.

The pressure extends beyond security stocks. AI tools that build apps and websites from simple prompts have shaken software companies this year.

Salesforce has lost about one-third of its value. ServiceNow has fallen more than 34 percent. Microsoft has dropped roughly 20 percent.

Bank of America said the Anthropic tool mainly threatens code scanning platforms such as GitLab and JFrog. GitLab fell 8 percent on Friday. JFrog dropped 25 percent the same day.

