NewsWorld
Anthropic Says Chinese AI Companies Improved Models By ‘Illicitly’ Copying Its Capabilities

Gizmodo · Feb 24, 2026 · Collected from RSS

Summary

Anthropic accuses the China-based AI companies DeepSeek, Moonshot, and MiniMax of "distillation attacks" that violated its terms of service to extract Claude's capabilities — and DeepSeek is expected to release a new flagship model any day now.

Full Article

Did you know that there’s a way of using outputs from LLMs that may involve no hacking—essentially just taking large quantities of text and repurposing it as training data—that upsets AI companies a great deal? In a blog post on Monday, Anthropic said that the China-based AI companies DeepSeek, Moonshot, and MiniMax broke Anthropic’s rules in order to “illicitly extract” the capabilities of its signature AI model, Claude. Distillation is a normal practice used by AI companies in which a “teacher” model is prompted with specifically tailored inputs, and the answers provided allow a “student” model to rapidly improve. For example, Anthropic writes, “frontier AI labs routinely distill their own models to create smaller, cheaper versions for their customers.” To distinguish the actions Anthropic is complaining about from uses of distillation perceived as legitimate, it refers to them as “distillation attacks.”

Are distillation attacks criminal offenses in the eyes of Anthropic? No such thing seems to be alleged here, but these acts were carried out, Anthropic says, “in violation of our terms of service and regional access restrictions.” Anthropic, which is itself dealing with the threat of being labeled a “supply chain risk” by the Pentagon, strikes a patriotic note in the post. Circumventing regional use restrictions and breaking rules allows “foreign labs, including those subject to the control of the Chinese Communist Party, to close the competitive advantage that export controls are designed to preserve through other means,” it claims.

Among the three China-based companies mentioned, Shanghai-based MiniMax, creator of the viral character chat app Talkie, offended Anthropic the most with the scale of its distillation effort: over 13 million alleged exchanges. That’s compared to Moonshot with over 3.4 million, and the most famous company named in the post, DeepSeek, with only an estimated 150,000.
OpenAI, Anthropic’s main competitor, is also mad about distillation from at least one Chinese AI company, having sent a memo to the House of Representatives earlier this month accusing DeepSeek of “ongoing efforts to free-ride on the capabilities developed by OpenAI and other U.S. frontier labs.” DeepSeek is expected to release its latest flagship model, DeepSeek V4, any day now, and CNBC has warned that this release could cause chaos on Wall Street, at a time when there’s already enough AI-related chaos on Wall Street to go around.
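The teacher/student dynamic described above can be illustrated with a toy sketch. Everything below is an assumption for illustration only — two small linear classifiers in plain NumPy stand in for the "teacher" and "student," and nothing here reflects how Anthropic, DeepSeek, or any named company actually builds or trains models:

```python
import numpy as np

def softmax(z, temperature=1.0):
    """Convert logits to probabilities; higher temperature softens them."""
    z = z / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)

# Hypothetical "teacher": a fixed linear map standing in for a large model.
X = rng.normal(size=(256, 8))            # 256 prompts, as feature vectors
W_teacher = rng.normal(size=(8, 4))      # 4 output classes
teacher_probs = softmax(X @ W_teacher, temperature=2.0)  # soft labels

# "Student": a model trained only on the teacher's outputs, never on
# ground-truth labels -- this is the essence of distillation.
W_student = np.zeros((8, 4))
lr = 0.5
for _ in range(300):
    student_probs = softmax(X @ W_student, temperature=2.0)
    # Gradient of cross-entropy between teacher and student distributions.
    grad = X.T @ (student_probs - teacher_probs) / len(X)
    W_student -= lr * grad

# After training, the student's top predictions closely track the teacher's.
agreement = np.mean(
    teacher_probs.argmax(axis=1) == softmax(X @ W_student).argmax(axis=1)
)
```

The point of the sketch is that the student needs nothing but the teacher's responses to a batch of inputs, which is why terms-of-service rules and fake-account detection, rather than any technical barrier, are what companies lean on to prevent it.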


Related Articles

Engadget · about 9 hours ago
Anthropic accuses three Chinese AI labs of abusing Claude to improve their own models

Anthropic is issuing a call to action against AI "distillation attacks," after accusing three AI companies of misusing its Claude chatbot. On its website, Anthropic claimed that DeepSeek, Moonshot and MiniMax have been conducting "industrial-scale campaigns…to illicitly extract Claude’s capabilities to improve their own models." Distillation in the AI world refers to when less capable models lean on the responses of more powerful ones to train themselves. While distillation isn't a bad thing across the board, Anthropic said that these types of attacks can be used in a more nefarious way. According to Anthropic, these three Chinese AI firms were responsible for more than "16 million exchanges with Claude through approximately 24,000 fraudulent accounts." From Anthropic's perspective, these competing companies were using Claude as a shortcut to develop more advanced AI models, which could also lead to circumventing certain safeguards. Anthropic said in its post that it was able to link each of these distillation attack campaigns to the specific companies with "high confidence" thanks to IP address correlation, metadata requests and infrastructure indicators, along with corroboration from others in the AI industry who have noticed similar behaviors. Early last year, OpenAI made similar claims of rival firms distilling its models and banned suspected accounts in response. As for Anthropic, the company behind Claude said it would upgrade its system to make distillation attacks harder to do and easier to identify. While Anthropic is pointing fingers at these other firms, it's also facing a lawsuit from music publishers who accused the AI company of using illegal copies of songs to train its Claude chatbot.

The Verge · about 10 hours ago
Anthropic accuses DeepSeek and other Chinese firms of using Claude to train their AI

Anthropic claims DeepSeek and two other Chinese AI companies misused its Claude AI model in an attempt to improve their own products. In an announcement on Monday, Anthropic says the "industrial-scale campaigns" involved the creation of around 24,000 fraudulent accounts and more than 16 million exchanges with Claude, as reported earlier by The Wall Street Journal. The three companies - DeepSeek, MiniMax, and Moonshot - are accused of "distilling" Claude, or training a smaller AI model based on a more advanced one. Though Anthropic says that distillation is a "legitimate training method," it adds that it can "also be used for illicit purpose … Read the full story at The Verge.

TechCrunch · about 10 hours ago
Anthropic accuses Chinese AI labs of mining Claude as US debates AI chip exports

Anthropic accuses DeepSeek, Moonshot, and MiniMax of using 24,000 fake accounts to distill Claude’s AI capabilities, as U.S. officials debate export controls aimed at slowing China’s AI progress.

NPR News · 6 days ago
Do the people building the AI chatbot Claude understand what they've created?

Anthropic is one of the world's most powerful AI firms. New Yorker writer Gideon Lewis-Kraus explains how they're trying to make chatbot Claude more ethical, and the implications of AI's widening use.

TechCrunch · 11 days ago
Anthropic’s Super Bowl ads mocking AI helped push Claude’s app into the top 10

The numbers suggest that Anthropic's Super Bowl commercials, combined with Anthropic's recent release of its new Opus 4.6 model, worked to drive attention to Claude's app and its key differentiator from ChatGPT.
