
This timeline tracks the rapid escalation of a conflict between AI companies and the U.S. Department of Defense over the military use of artificial intelligence. What began with failed negotiations between Anthropic and the Pentagon over AI safety guardrails exploded into a legal battle, executive resignations, and revelations about AI's active use in warfare, highlighting tensions between Silicon Valley and national security interests.
8 events · 7 days · 20 source articles
Talks between the Department of Defense and Anthropic collapsed after the AI company refused to remove guardrails preventing its Claude technology from being used for mass domestic surveillance and fully autonomous weapons. Anthropic CEO Dario Amodei had warned Defense Secretary Pete Hegseth about risks of untested AI in autonomous warfare. The Pentagon demanded unrestricted military use for 'any lawful purpose.'
Following the failed negotiations, the Department of Defense formally labeled Anthropic a supply chain risk—a designation typically reserved for foreign adversaries like Chinese and Russian vendors. This unprecedented move prohibited the company from obtaining U.S. government contracts and effectively blacklisted it among defense contractors, threatening Anthropic's wider business operations.
In the wake of Anthropic's designation, OpenAI quickly announced its own deal with the Pentagon to make its AI systems available inside secure Defense Department computing systems. The announcement appeared rushed and sparked immediate public backlash, with users uninstalling ChatGPT and pushing Anthropic's Claude to the top of App Store charts in protest.
Caitlin Kalinowski, who led robotics hardware at OpenAI, announced her resignation citing concerns that the Pentagon partnership was rushed without proper guardrails. She stated that 'surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.' OpenAI confirmed there were no plans to replace her position.
Anthropic filed two lawsuits in California federal court against the Department of Defense and the Trump administration, arguing the designation was 'unprecedented and unlawful' and violated the company's free speech and due process rights. The suits claimed the government was retaliating against Anthropic for its protected stance on AI safety and accused officials of 'seeking to destroy the economic value created by one of the world's fastest-growing private companies.'
The Pentagon-Anthropic clash reignited fears about government surveillance capabilities, as experts warned that AI technology paired with the Trump administration's sweeping data collection efforts posed new threats to individual privacy. The controversy raised questions about whether other startups would be deterred from pursuing defense work and highlighted the absence of clear guardrails for military AI applications.
The Washington Post reported that despite the ongoing dispute and blacklisting, the U.S. military had used Anthropic's Claude AI tool to strike around 1,000 targets in the first 24 hours of operations against Iran. Claude reportedly helped with war planning by optimizing target selection, analyzing intelligence data, and providing precise location coordinates through satellite image assessment as part of the Pentagon's Maven Smart System.
As the controversy continued, news outlets turned to analyzing how artificial intelligence was being deployed in the Iran conflict and what it implied for future warfare. The use of AI to shorten the 'kill chain' and provide real-time targeting raised urgent questions about the regulation of military AI and the balance between technological capability and human oversight.