The AI Cold War? US tech companies accuse China’s AI firms of stealing billions in research

Euronews · Feb 26, 2026 · Collected from RSS

Summary

So-called distillation attacks gather responses from AI models to teach smaller ones.

Full Article

As the United States and China race to develop artificial intelligence (AI), American firm Anthropic is the latest company to sound the alarm that Chinese AI companies have been stealing the technology that could decide who wins. DeepSeek, Moonshot AI and MiniMax secretly generated over 16 million conversations with Anthropic's AI chatbot Claude, using more than 24,000 fake accounts, to harvest its intelligence and train their own competing models, the company alleges. OpenAI and Google have also levelled similar accusations at Chinese firms this month, raising fears that China is short-circuiting years of costly AI research.

What is AI distillation?

Model extraction attacks (MEA), otherwise known as "distillation", are a technique in which someone with access to a powerful AI model uses it to train a cheaper, faster rival. The method feeds the larger model thousands of questions, collects its answers, and uses those responses to teach a new model to think in the same way. This develops the smaller AI faster and "at a fraction of the cost" of doing the original work, Anthropic alleges.

Distillation is a "legitimate" practice when frontier AI labs distil their own models to "create smaller, cheaper versions for their customers," the US company said. Smaller models answer queries much faster and require less computing power and energy to run than the larger model, Google said.

Meanwhile, models developed through distillation attacks pose significant national security risks because they "lack necessary safeguards" to limit their potential danger, according to Anthropic. The company stated that distilled models will not have the safeguards to prevent state and non-state actors from using AI to develop bioweapons or conduct cyberattacks.
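The distillation pipeline the article describes can be sketched in a few lines. This is a hypothetical toy illustration, not any real model API: `teacher_model`, `build_distillation_dataset` and `StudentModel` are all stand-in names, and the "student" simply memorises the teacher's answers to show the data flow (prompts in, teacher answers out, answers reused as training data).

```python
# Toy sketch of model distillation as described in the article:
# 1) feed the large "teacher" model many prompts,
# 2) collect its answers,
# 3) train a smaller "student" model on the (prompt, answer) pairs.
# All names here are hypothetical stand-ins, not a real API.

def teacher_model(prompt: str) -> str:
    """Stand-in for a large frontier model's chat endpoint."""
    return f"Detailed answer to: {prompt}"

def build_distillation_dataset(prompts):
    """Steps 1-2: query the teacher and collect its responses."""
    return [(p, teacher_model(p)) for p in prompts]

class StudentModel:
    """Stand-in for a smaller model trained on the teacher's outputs."""
    def __init__(self):
        self.memory = {}

    def train(self, dataset):
        # Step 3: teach the student to reproduce the teacher's answers.
        for prompt, answer in dataset:
            self.memory[prompt] = answer

    def generate(self, prompt: str) -> str:
        return self.memory.get(prompt, "unknown")

prompts = ["What is photosynthesis?", "Explain TCP handshakes."]
dataset = build_distillation_dataset(prompts)
student = StudentModel()
student.train(dataset)
print(student.generate("What is photosynthesis?"))
# → Detailed answer to: What is photosynthesis?
```

In a real distillation run the student would be fine-tuned with gradient descent rather than memorising responses, but the economics the article highlights are the same: the expensive step (the teacher's original training) is skipped, and only the cheap query-and-imitate loop remains.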
There are no risks to ordinary AI consumers in a distillation attack, Google added, because the attacks do not "threaten the confidentiality, availability or integrity of AI services". Meanwhile, OpenAI told US lawmakers in February it had caught DeepSeek trying to secretly copy its most powerful AI models, and warned that the Chinese company was developing new methods to disguise what it was doing.

What do hackers teach their models?

Because Claude is banned in China, the Chinese AI companies allegedly routed traffic through proxy addresses that managed a vast "hydra network", a large group of fake accounts that spread their activity across platforms, to gain access to Anthropic's model. Once in, they generated large volumes of prompts, either to collect high-quality responses for model training or to generate tens of thousands of tasks for reinforcement learning, in which an agent learns to make decisions from feedback.

The DeepSeek accounts that accessed Claude asked the model to articulate how it rationalised an answer to a prompt and write it out step by step, which the company said "generated chain-of-thought training data at scale". The DeepSeek accounts also used Claude to "generate censorship-safe alternatives to politically sensitive queries", such as questions about opponents of the current Communist Party, Anthropic alleges. The US company theorised that those questions trained DeepSeek's models "to steer conversations away from censored topics", which could support a recent study that found Chinese AI models likely censor the same topics as their media.

MiniMax AI and Moonshot AI ran larger distillation campaigns than DeepSeek, but Anthropic did not offer examples of the types of information these two companies collected in their prompts. Google said that its AI chatbot Gemini is consistently misused for coding and scripting tasks, or for gathering intelligence such as sensitive account credentials and email addresses.
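The "chain-of-thought training data at scale" claim can be illustrated with a short sketch. This is a hypothetical reconstruction under the article's description, not Anthropic's actual findings: `wrap_with_cot` and `collect_cot_records` are invented names, and the model call is a stand-in. The idea is simply that each ordinary question is rewritten so the model also writes out its step-by-step reasoning, and the full transcript is stored as training data.

```python
# Hypothetical sketch of harvesting chain-of-thought training data,
# as the article alleges DeepSeek-linked accounts did with Claude.
# Function names and the stand-in model are illustrative only.

def wrap_with_cot(question: str) -> str:
    """Rewrite a prompt so the model spells out its reasoning."""
    return question + "\nShow your reasoning step by step, then give the answer."

def collect_cot_records(questions, ask_model):
    """Send wrapped prompts; keep (question, full transcript) pairs."""
    return [(q, ask_model(wrap_with_cot(q))) for q in questions]

def fake_model(prompt: str) -> str:
    """Toy stand-in for a large model's API."""
    return f"[model transcript for] {prompt}"

records = collect_cot_records(["Why is the sky blue?"], fake_model)
print(records[0][0])
# → Why is the sky blue?
```

Each stored transcript pairs a question with worked-out reasoning, which is exactly the kind of supervision a smaller model needs to imitate the larger one's reasoning style rather than just its final answers.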
Anthropic says it has built detection measures to identify these campaigns as they happen, but notes that no AI company can solve the problem by itself.


