
A high-stakes conflict erupted between AI company Anthropic and the U.S. Defense Department over whether the military could use Anthropic's Claude AI model without restrictions. The dispute centered on two key safeguards: prohibitions on mass surveillance of Americans and fully autonomous weapons. This timeline tracks the escalating confrontation from the Pentagon's ultimatum through Trump's executive action.
12 events · 1 day · 30 source articles
Defense Secretary Pete Hegseth summoned Anthropic CEO Dario Amodei to the Pentagon, demanding the company remove restrictions on military use of its Claude AI model. Hegseth set a compliance deadline of 5:01 PM ET on Friday, February 27, threatening to designate Anthropic a 'supply chain risk' or invoke the Defense Production Act if the company refused. The Pentagon wanted unrestricted 'lawful use' of the technology, while Anthropic maintained guardrails against mass surveillance and fully autonomous weapons.
The conflict between Anthropic and the Pentagon became public knowledge as media outlets began reporting on the high-stakes standoff. Bloomberg and Foreign Policy characterized it as a 'game of chicken' between the world's biggest military and one of the most safety-conscious AI companies. OpenAI and xAI had reportedly already agreed to the Pentagon's new contract terms, increasing pressure on Anthropic.
Dario Amodei published a detailed statement emphasizing Anthropic's extensive support for U.S. national security, noting the company was 'the first frontier AI company to deploy our models in the US government's classified networks.' He stressed his belief in 'the existential importance of using AI to defend the United States and other democracies,' while maintaining that certain applications could 'undermine, rather than defend, democratic values.'
The Defense Department delivered what it called its 'last and final offer' to Anthropic on Wednesday night. In his statement, Amodei rejected the offer, saying the new contract language 'made virtually no progress on preventing Claude's use for mass surveillance of Americans or in fully autonomous weapons.' Pentagon spokesman Sean Parnell stated the military 'has no interest in' letting companies dictate limits on technology use.
With less than 24 hours before the Friday deadline, Anthropic CEO Dario Amodei publicly stated the company 'cannot in good conscience accede' to the Pentagon's demands. He maintained two firm 'red lines': no mass surveillance of Americans and no lethal autonomous weapons without human oversight. The company emphasized it was not walking away from negotiations entirely but could not accept the proposed terms. At stake was a $200 million contract and potential designation as a supply chain risk.
As the deadline loomed, major news outlets provided deeper analysis of what was at stake. NPR and DW News reported that the conflict went to the heart of how AI should be used in warfare, with hundreds of millions of dollars in contracts and access to advanced AI tools hanging in the balance. The dispute raised fundamental questions about whether private companies could set ethical boundaries on military technology use.
Google and DeepMind employees began circulating internal communications calling on their leadership to adopt similar ethical guardrails as Anthropic. This signaled growing worker activism across the AI industry regarding military applications of artificial intelligence technology, echoing earlier employee protests over defense contracts.
U.S. Under Secretary of Defense Emil Michael sharply escalated the confrontation, calling Dario Amodei a 'liar' with a 'God-complex' in posts on social media. Michael accused Amodei of wanting 'nothing more than to try to personally control the US military' and being 'ok putting our nation's safety at risk.' The Pentagon announced plans to pull Anthropic from defense supply chains and terminate existing agreements.
Employees at Amazon, Google, Microsoft, and OpenAI began pressing their executives to adopt tough AI guardrails and stand with Anthropic against Pentagon demands. Over 300 Google employees and over 60 OpenAI employees signed an open letter titled 'We Will Not Be Divided,' urging their companies to refuse the Department of Defense's demands for unrestricted use of AI models for mass surveillance and autonomous killing.
OpenAI CEO Sam Altman said he agreed with Anthropic's red lines on mass surveillance and autonomous weapons, despite earlier reports that OpenAI had already accepted the Pentagon's terms. His statement complicated the narrative that Anthropic stood alone and suggested potential alignment among AI companies on ethical boundaries.
Rep. Ro Khanna (D-Calif.) publicly praised Anthropic for rejecting the Pentagon's demands, saying 'Good for Anthropic.' This marked the first significant congressional voice weighing in on the dispute, indicating the controversy had political dimensions beyond the executive branch and defense establishment.
President Trump posted on Truth Social directing every federal agency to 'immediately cease' using Anthropic's products, calling the company 'Leftwing nut jobs' who made a 'DISASTROUS MISTAKE.' Trump stated that Anthropic was trying to 'STRONG-ARM the Department of War' and force the military to obey their terms of service 'instead of our Constitution.' Federal agencies were given six months to phase out Anthropic's tools completely.