
In February 2026, a high-stakes standoff erupted between the U.S. Defense Department and AI company Anthropic over military uses of its Claude AI model. The conflict centered on Anthropic's refusal to remove safeguards preventing its technology from being used for mass surveillance of Americans or in fully autonomous weapons — a stance that threatened a $200 million contract and raised fundamental questions about AI ethics in warfare.
12 events · 8 days · 21 source articles
Anthropic's Claude AI model was reportedly deployed during a U.S. special operations raid that resulted in the capture of Venezuelan President Nicolás Maduro. This military operation brought tensions between Anthropic and the Pentagon into public view, as it raised questions about how Claude was being used under the existing $200 million contract signed the previous summer.
News outlets reported that Anthropic had increasingly found itself at odds with the Pentagon over how its AI model Claude could be deployed in military operations. The dispute centered on Anthropic's restrictions against using its technology for autonomous weapons and government surveillance, which the company maintained despite pressure from the Defense Department.
Media coverage highlighted that Anthropic's carve-outs against autonomous weapons and mass surveillance could cost it a major military contract. The company was identified as the last of its peers not to supply its technology to a new U.S. military internal network, setting it apart from competitors like OpenAI and xAI.
Defense Secretary Pete Hegseth summoned Anthropic CEO Dario Amodei to the Pentagon for a Tuesday meeting to discuss military use of Claude. Hegseth threatened to designate Anthropic a 'supply chain risk' — a label typically reserved for foreign adversaries and never before applied to an American company. The high-pressure meeting was intended to resolve the dispute over contract terms.
A Pentagon official confirmed to The Hill that the meeting between Defense Secretary Hegseth and CEO Amodei would take place on Tuesday. The confirmation underscored the seriousness of the dispute as the company continued discussions with the department around the terms of use for Claude.
The U.S. government escalated its threats, warning it would end military contracts with Anthropic unless the company opened its AI technology for unrestricted military use. Defense officials also indicated they could invoke the Defense Production Act, a Cold War-era law, to give the military more sweeping authority to use Anthropic's products even without the company's approval.
Defense Secretary Hegseth issued an ultimatum to Anthropic with a deadline of 5:01 p.m. ET on Friday, February 27, to agree to the removal of all safeguards. The Pentagon delivered its 'last and final offer' on Wednesday night, demanding the AI firm allow unrestricted military use of Claude. OpenAI and xAI had reportedly already agreed to similar terms.
CEO Dario Amodei issued a detailed public statement titled 'Statement from Dario Amodei on Our Discussions with the Department of War,' emphasizing Anthropic's commitment to defending the United States while maintaining ethical boundaries. He noted that Anthropic was the first frontier AI company to deploy models in classified networks and at National Laboratories, but argued that some AI uses 'undermine, rather than defend, democratic values.'
Less than 24 hours before the Friday deadline, Amodei announced that Anthropic 'cannot in good conscience accede' to the Pentagon's demands. The company stated that the new contract language 'made virtually no progress on preventing Claude's use for mass surveillance of Americans or in fully autonomous weapons.' Anthropic maintained its two red lines: no mass domestic surveillance and no lethal autonomous weapons without human oversight.
Pentagon spokesman Sean Parnell responded on social media, stating that the Defense Department 'has no interest in' violating laws but wants to use Anthropic's AI technology in all legal ways. He emphasized that the military would not let the company dictate limits on lawful military applications, setting the stage for a potential Friday showdown.
As the Friday afternoon deadline approached, both sides remained at an impasse. At stake were hundreds of millions of dollars in contracts and access to some of the most advanced AI tools on the planet. The Pentagon could potentially designate Anthropic as a supply chain risk or invoke the Defense Production Act, while Anthropic faced the possibility of losing its military contracts entirely.
Following Anthropic's public stand, Google and DeepMind employees began seeking to establish their own 'red lines' on military AI use, indicating that the dispute could have broader implications across the tech industry. The move suggested growing concerns among AI workers about unrestricted military applications of their technology.