NewsWorld
Pentagon pressures Anthropic in escalating AI showdown


DW News · Feb 27, 2026 · Collected from RSS

Summary

The row between Anthropic, a much-vaunted AI startup, and the Trump administration is escalating sharply. The Pentagon wants complete access to the company's models, despite safety and ethical concerns.

Full Article

The US government says it will pull AI startup Anthropic from Pentagon supply chains and rip up any agreements it has with the company as the row between the two escalates sharply. US Under Secretary of Defense Emil Michael strongly rebuked Anthropic CEO Dario Amodei on Thursday, calling him a "liar" with a "God-complex." He said Amodei "wants nothing more than to try to personally control the US military and is ok putting our nation's safety at risk."

The row follows a meeting earlier this week between Amodei and US Defense Secretary Pete Hegseth, where, according to sources familiar with the meeting, Hegseth told Anthropic it had until Friday (February 27) to give the US military full access to its Claude model. If access is not given, Hegseth threatened to cut Anthropic from government supply chains, or possibly compel it to prioritize government orders, according to the sources.

Anthropic has so far refused to give Washington complete access to its models for classified military use, including for potentially lethal missions carried out without human control and for domestic mass surveillance. In a statement on Thursday, Amodei said: "Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner. However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values."

While Anthropic's Claude model is currently used by the Pentagon, Amodei says capabilities that cross the company's red lines, mass surveillance and fully autonomous weapons, "need to be deployed with proper guardrails, which don't exist today."

It is the latest example of Washington's strong-arm tactics in the corporate sector, and it shows how control over AI models is becoming a new battleground for the Trump administration.
It is also likely to prompt a legal battle between the government and Anthropic.

[Image: US Defense Secretary Pete Hegseth has reportedly issued an ultimatum to Anthropic. Credit: Joe Raedle/Getty Images]

What exactly has Hegseth threatened, and why?

Hegseth called Anthropic CEO Dario Amodei to Washington for a meeting on Tuesday. An Anthropic spokesperson confirmed the meeting took place and told DW: "We continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government's national security mission in line with what our models can reliably and responsibly do."

However, sources familiar with the talks said Hegseth made two direct threats to Amodei if Anthropic did not comply. One was to cut the company out of the Pentagon's supply chain; the other was to invoke the Defense Production Act, a Cold War-era measure that gives the US president the power to control domestic industry in the supposed interest of national defense.

Hegseth wants the Pentagon to have unrestricted access to Anthropic's generative AI chatbot Claude, but Anthropic, which has long billed itself as a safety-oriented AI company, has been consistently opposed. The company is firmly against its Claude technology being used in operations where final military targeting decisions are taken without human intervention, or for mass surveillance within the United States.

"To our knowledge, these two exceptions have not been a barrier to accelerating the adoption and use of our models within our armed forces to date," Amodei said in his February 26 statement.

"Anthropic views these things as being not in humanity's long-term best interest, at least at the current level of technology and safety guardrails that exist, whereas the Pentagon is pushing to have any lawful use that it wants," Geoffrey Gertz, senior fellow at the Center for a New American Security, told DW.
He says talk of invoking the Defense Production Act would be an attempt to exert control over an AI company in an "unprecedented" way, and he is concerned that it could thwart Anthropic's development. "There's a big worry that the government ends up taking actions that hurt Anthropic's ability to continue to be at the forefront of responsible AI," he said. "Actions that are trying to curtail Anthropic's potential markets, I think, could be very harmful and really backfire on what the administration is trying to do on AI policy."

What has been the relationship between Anthropic and the US military?

Since November 2024, Anthropic has been providing the Claude model to US intelligence and defense agencies. According to the Wall Street Journal, the US military used Claude during the 2026 raid on Venezuela that resulted in the capture of Nicolas Maduro. Neither Anthropic nor the US Department of Defense commented on the claims, and it is not clear precisely how the AI system was used in the raid.

[Image: Anthropic's Claude model was used in the capture of Nicolas Maduro, according to a WSJ report. Credit: X account of Rapid Response 47/AFP]

The threat by Hegseth to remove Anthropic from Pentagon supply chains would have a financial impact on the company. In July 2025, the US Department of Defense awarded Anthropic a $200 million contract to "prototype frontier AI capabilities that advance US national security." Anthropic hailed the arrangement, with Thiyagu Ramasamy, the company's head of public sector, saying it opened "a new chapter in Anthropic's commitment to supporting US national security." However, at the time, it also emphasized its commitment to "responsible AI deployment." "At the heart of this work lies our conviction that the most powerful technologies carry the greatest responsibility," it said in a statement.
"We're building AI systems to be reliable, interpretable, and steerable precisely because we recognize that in government contexts, where decisions affect millions and stakes couldn't be higher, these qualities are essential."

Is Anthropic as safety-oriented as it says?

Anthropic was founded in 2021 by seven former employees of OpenAI. According to CEO Dario Amodei, it was built "on a simple principle: AI should be a force for human progress, not peril." However, despite the row with the Pentagon, there are signs that Anthropic is reconsidering that commitment in pursuit of commercial ambitions.

[Image: Anthropic CEO Dario Amodei has touted the company's safety-first credentials. Credit: Markus Schreiber/AP Photo/dpa/picture alliance]

On Tuesday (February 24), the same day as the Hegseth meeting, the company announced it was softening its core safety policy to remain competitive with other leading AI models. "The policy environment has shifted toward prioritizing AI competitiveness and economic growth, while safety-oriented discussions have yet to gain meaningful traction at the federal level," Anthropic said in a blog post announcing the changes.

Anthropic faces intense competition from AI rivals such as OpenAI and Google, and is making the policy pivot in response to what it sees as a lack of AI regulation at the federal level. The Trump administration has resisted AI regulation at both the state and federal levels. The Anthropic spokesperson told DW that the policy shift was unrelated to the Pentagon negotiations.

What ethical questions are at stake?

If Anthropic submits to Hegseth's demands, or if the Defense Department were to take control of Anthropic by invoking the Defense Production Act, it would inevitably lead to accusations that the company's AI was no longer being used with a safety-first mindset.
[Image: Claude is Anthropic's main language model. Credit: Andrej Sokolow/dpa/picture alliance]

The issue also shines a light on the Trump administration's willingness to intervene directly in corporate decision-making and in sectors it deems of critical importance. In August 2025, the Trump administration announced it had made an $8.9 billion investment in Intel, part of a series of moves to directly intervene in US chipmaking. It has also intervened directly in the rare-earth sector, making major investments in firms such as Vulcan Elements, MP Materials and USA Rare Earth.

Gertz says corporations and CEOs are "navigating a new landscape" when it comes to the Trump administration's more interventionist policy. "This is an outlier," he said. "This is a big shift from a traditional US approach of more hands-off, let the private sector develop."

Edited by: Ashutosh Pandey

Editor's note: This article, originally published on February 25, has been updated to reflect the escalation of the dispute between Anthropic and the Trump administration, with a statement from Anthropic CEO Dario Amodei and a comment by US Under Secretary of Defense Emil Michael.



Read Original at DW News

Related Articles

France 24 · about 1 hour ago
Anthropic refuses to bend to Pentagon on AI safeguards

A public showdown between the Trump administration and Anthropic is hitting an impasse as military officials demand the artificial intelligence company bend its ethical policies by Friday or risk damaging its business. Anthropic CEO Dario Amodei drew a sharp red line 24 hours before the deadline, declaring his company “cannot in good conscience accede” to the Pentagon’s final demand to allow unrestricted use of its technology.

Engadget · about 6 hours ago
Anthropic refuses to bow to Pentagon despite Hegseth's threats

Despite an ultimatum from Defense Secretary Pete Hegseth, Anthropic said that it can't "in good conscience" comply with a Pentagon edict to remove guardrails on its AI, CEO Dario Amodei wrote in a blog post. The Department of Defense had threatened to cancel a $200 million contract and label Anthropic a "supply chain risk" if it didn't agree to remove safeguards over mass surveillance and autonomous weapons. "Our strong preference is to continue to serve the Department and our warfighters — with our two requested safeguards in place," Amodei said. "We remain ready to continue our work to support the national security of the United States." In response, US Under Secretary of Defense Emil Michael accused Amodei in a post on X of wanting "nothing more than to try to personally control the US military and is OK putting our nation's safety at risk." The standoff began when the Pentagon demanded that Anthropic make its Claude AI product available for "all lawful purposes" — including mass surveillance and the development of fully autonomous weapons that can kill without human supervision. Anthropic refused to offer its tech for those uses, even with a "safety stack" built into the model. Yesterday, Axios reported that Hegseth gave Anthropic a deadline of 5:01 PM on Friday to agree to the Pentagon's terms. At the same time, the DoD requested an assessment of its reliance on Claude, an initial step toward potentially labeling Anthropic a "supply chain risk" — a designation usually reserved for firms from adversaries like China and "never before applied to an American company," Anthropic wrote. Amodei declined to change his stance and stated that if the Pentagon chose to offboard Anthropic, "we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations or other critical missions." Grok is one of the other providers the DoD is reportedly considering, along with Google's Gemini and OpenAI. It may not be th

Hacker News · about 12 hours ago
Google Workers Seek 'Red Lines' on Military A.I., Echoing Anthropic

Article URL: https://www.nytimes.com/2026/02/26/technology/google-deepmind-letter-pentagon.html
Comments URL: https://news.ycombinator.com/item?id=47175931
Points: 56 · Comments: 15

NPR News · about 13 hours ago
Deadline looms as Anthropic rejects Pentagon demands it remove AI safeguards

The Defense Department has been feuding with Anthropic over military uses of its artificial intelligence tools. At stake are hundreds of millions of dollars in contracts and access to some of the most advanced AI on the planet.

DW News · about 13 hours ago
AI firm Anthropic rejects unrestricted US military use

Anthropic fears the unrestricted military use of its AI systems by the US government may harm democracy. Military officials have threatened to invoke Cold War-era legislation to force Anthropic to comply.

Gizmodo · about 15 hours ago
Anthropic Tells Pete Hegseth to Take a Hike

The Pentagon wants Anthropic to drop safeguards against mass domestic surveillance and autonomous weapons.