NewsWorld

Anthropic vs. the Pentagon: What’s actually at stake?

TechCrunch · Feb 27, 2026 · Collected from RSS

Summary

Anthropic and the Pentagon are clashing over AI use in autonomous weapons and surveillance, raising high-stakes questions about national security, corporate control, and who sets the rules for military AI.

Full Article

The past two weeks have been defined by a clash between Anthropic CEO Dario Amodei and Defense Secretary Pete Hegseth over the military’s use of AI. Anthropic refuses to allow its AI models to be used for mass surveillance of Americans or for fully autonomous weapons that conduct strikes without human input. Secretary Hegseth, meanwhile, has argued that the Department of Defense shouldn’t be limited by a vendor’s rules and that any “lawful use” of the technology should be permitted. On Thursday, Amodei publicly signaled that Anthropic isn’t backing down, despite threats that his company could be designated a supply chain risk as a result.

But with the news cycle moving fast, it’s worth revisiting exactly what’s at stake in the fight. At its core, this is a dispute over who controls powerful AI systems: the companies that build them, or the government that wants to deploy them.

What is Anthropic worried about?

As noted above, Anthropic doesn’t want its AI models used for mass surveillance of Americans or for autonomous weapons with no human in the loop for targeting and firing decisions. Traditional defense contractors typically have little say in how their products are used, but Anthropic has argued from its inception that AI technology poses unique risks and therefore requires unique safeguards. From the company’s perspective, the question is how to maintain those safeguards when the technology is in military hands.

The U.S. military already relies on highly automated systems, some of which are lethal. The decision to use lethal force has historically been left to humans, but there are few legal restrictions on military use of autonomous weapons. The DoD doesn’t categorically ban fully autonomous weapons systems: under a 2023 DoD directive, AI systems can select and engage targets without human intervention, as long as they meet certain standards and pass review by senior defense officials.
That’s precisely what makes Anthropic nervous. Military technology is secretive by nature, so if the U.S. military were taking steps to automate lethal decision-making, we might not know about it until it was operational. And if those systems used Anthropic’s models, doing so could count as “lawful use.”

Anthropic’s position isn’t that such uses should be permanently off the table; it’s that its models aren’t yet capable enough to support them safely. Imagine an autonomous system misidentifying a target, escalating a conflict without human authorization, or making a split-second lethal decision that no one can reverse. Put a less-capable AI in charge of weapons, and you get a very fast, very confident machine that’s bad at making high-stakes calls.

AI also has the power to supercharge lawful surveillance of American citizens to a concerning degree. Under current U.S. law, surveillance of American citizens is already possible through the collection of texts, emails, and other communications. AI changes the equation by enabling automated large-scale pattern detection, entity resolution across datasets, predictive risk scoring, and continuous behavioral analysis.

What does the Pentagon want?

The Pentagon argues that it should be able to deploy Anthropic’s technology for any lawful use it deems necessary, rather than be limited by Anthropic’s internal policies on things like autonomous weapons or surveillance. Sean Parnell, the Pentagon’s chief spokesperson, said in a Thursday X post that the department has no interest in conducting mass domestic surveillance or deploying autonomous weapons. “Here’s what we’re asking: Allow the Pentagon to use Anthropic’s model for all lawful purposes,” Parnell said.
“This is a simple, common-sense request that will prevent Anthropic from jeopardizing critical military operations and potentially putting our warfighters at risk. We will not let ANY company dictate the terms regarding how we make operational decisions.” He added that Anthropic has until 5:01 PM ET on Friday to decide. “Otherwise, we will terminate our partnership with Anthropic and deem them a supply chain risk for DOW,” he said.

Despite the department’s stated position that it simply shouldn’t be limited by a corporation’s usage policies, Secretary Hegseth’s concerns about Anthropic have at times seemed connected to cultural grievance. Speaking at SpaceX and xAI offices in January, Hegseth railed against “woke AI” in remarks that some saw as a preview of his feud with Anthropic. “Department of War AI will not be woke,” Hegseth said. “We’re building war-ready weapons and systems, not chatbots for an Ivy League faculty lounge.”

So what now?

The Pentagon has threatened to either declare Anthropic a “supply chain risk” (a label that would effectively blacklist Anthropic from doing business with the government) or invoke the Defense Production Act (DPA) to force the company to tailor its model to the military’s needs. Hegseth has given Anthropic until 5:01 PM ET on Friday to respond, but with the deadline approaching, it’s anyone’s guess whether the Pentagon will make good on its threat.

This is not a fight either party can easily walk away from. Sachin Seth, a VC at Trousdale Ventures who focuses on defense tech, says a supply chain risk label could mean “lights out” for Anthropic. At the same time, he said, dropping Anthropic from the DOD could itself become a national security issue. “[The Department] would have to wait six to 12 months for either OpenAI or xAI to catch up,” Seth told TechCrunch.
“That leaves a window of up to a year where they might be working from not the best model, but the second- or third-best.” xAI is gearing up to become classified-ready and replace Anthropic, and given owner Elon Musk’s rhetoric on the matter, it’s fair to say the company would have no problem giving the DOD total control over its technology. Recent reports indicate that OpenAI may stick to the same red lines as Anthropic.


Read Original at TechCrunch

Related Articles

Engadget · about 1 hour ago
Google and OpenAI employees sign open letter in ‘solidarity’ with Anthropic

Hundreds of employees at Google and OpenAI have signed an open letter urging their companies to stand with Anthropic in its standoff with the Pentagon over military applications for AI tools like Claude. The letter, titled “We Will Not Be Divided,” calls on the leadership of both companies to “put aside their differences and stand together to continue to refuse the Department of War’s current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.” These are two lines that Anthropic CEO Dario Amodei has said should not be crossed by his or any other AI company. As of publication, the letter has over 450 signatures, almost 400 of which come from Google employees and the rest from OpenAI. Roughly 50 percent of participants have chosen to attach their names to the cause, with the rest remaining anonymous; all are verified as current employees of these companies. The letter’s original organizers aren’t Google or OpenAI employees; they say they are unaffiliated with any AI company, political party, or advocacy group. The open letter is the latest development in the saga between Anthropic and US Defense Secretary Pete Hegseth, who threatened to label the company a “supply chain risk” if it did not agree to withdraw certain guardrails for classified work. The Pentagon has also been in talks with Google and OpenAI about using their models for classified work, with xAI coming on board earlier this week. The letter argues the government is “trying to divide each company with fear that the other will give in.” OpenAI CEO Sam Altman told his employees on Friday that the ChatGPT maker will draw the same red lines as Anthropic, according to an internal memo seen by Axios. He told CNBC the same day that he doesn’t “personally think the Pentagon should be threatening DPA against these companies.”

The Hill · about 3 hours ago
Hundreds of Google, OpenAI employees back Anthropic in Pentagon fight

Hundreds of employees at Google and OpenAI are backing artificial intelligence company Anthropic, which faces a Friday evening deadline to give the Pentagon permission to use its AI system as it wishes or face repercussions from the department. Employees who signed a letter alleged the Pentagon was trying to “get them to agree to...

The Hill · about 3 hours ago
House Democrat: 'Good for Anthropic' in rejecting Pentagon demands

Rep. Ro Khanna (D-Calif.) praised the AI company Anthropic for rejecting the Pentagon’s demands, due by Friday evening, over how its technology is used. The company and the U.S. government have been in a battle for weeks over Anthropic’s AI policy, which blocks its AI model Claude from being used to conduct mass surveillance or develop...

The Verge · about 4 hours ago
AI vs. the Pentagon: killer robots, mass surveillance, and red lines

Can AI firms set limits on how and where the military uses their models? Anthropic is in heated negotiations with the Pentagon after refusing to comply with new military contract terms that would require it to loosen the guardrails on its AI models, allowing for “any lawful use,” even mass surveillance of Americans and fully autonomous lethal weapons. Pentagon CTO Emil Michael is pushing for Anthropic to be designated a “supply chain risk” if it doesn’t comply, a label usually reserved for national security threats. Anthropic’s rivals OpenAI and xAI have reportedly agreed to the new terms, but even after a White House meeting with Defense Secretary Pete Hegseth, Anthropic CEO Dario Amodei is still refusing to cross his company’s red line, stating that “threats do not change our position: we cannot in good conscience accede to their request.” Follow along here for the latest updates on the clash between AI companies and the Pentagon:

We don’t have to have unsupervised killer robots
Anthropic refuses Pentagon’s new terms, standing firm on lethal autonomous weapons and mass surveillance
Pete Hegseth’s Pentagon AI bro squad includes a former Uber executive and a private equity billionaire
Inside Anthropic’s existential negotiations with the Pentagon

The Hill · about 4 hours ago
Altman says OpenAI agrees with Anthropic’s red lines in Pentagon dispute

OpenAI CEO Sam Altman said Friday that he agrees with Anthropic’s red lines in its increasingly contentious negotiations with the Pentagon over the terms of use for the company’s AI models. As the feud between Anthropic and the Department of Defense (DOD) has reached a boiling point, the AI firm has refused to budge on...

TechCrunch · about 5 hours ago
Employees at Google and OpenAI support Anthropic’s Pentagon stand in open letter

While Anthropic has an existing partnership with the Pentagon, the AI company has remained firm that its technology not be used for mass domestic surveillance or fully autonomous weaponry.