
In early March 2026, a dispute between AI company Anthropic and the Pentagon over military use of artificial intelligence escalated into a major legal and political confrontation. The conflict centered on guardrails for AI use in autonomous weapons and domestic surveillance. The Pentagon designated Anthropic a supply chain risk, Anthropic responded with federal lawsuits, and rival OpenAI quickly secured its own Pentagon deal amid executive resignations.
9 events · 8 days · 22 source articles
Talks between Anthropic and the Department of Defense fell through after the company refused to remove restrictions on its Claude AI system. Anthropic had set two firm red lines: no mass surveillance of Americans and no fully autonomous weapons without human oversight. Defense Secretary Pete Hegseth demanded unrestricted access for 'any lawful purpose,' which Anthropic rejected.
The Department of Defense formally labeled Anthropic a supply chain risk, a designation typically reserved for foreign adversaries like Chinese and Russian vendors. This designation would prohibit the company from obtaining U.S. government contracts and effectively blacklist it among defense contractors. The move came after Anthropic CEO Dario Amodei warned about the risks of using untested AI in autonomous warfare.
OpenAI quickly announced its own deal with the Pentagon to make its AI models available inside secure Defense Department computing environments. The announcement prompted immediate backlash from users, with many uninstalling ChatGPT and pushing Anthropic's Claude to the top of the App Store charts. The rushed nature of the announcement raised concerns about whether proper guardrails had been established.
Caitlin Kalinowski, who led robotics hardware at OpenAI, resigned over the company's Pentagon deal, stating that 'surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.' She criticized the announcement as rushed without proper guardrails defined, calling it a 'governance concern.' OpenAI stated there were no plans to replace her.
Anthropic filed two federal lawsuits in California district court against the Department of Defense and the Trump administration, arguing the designation was 'unprecedented and unlawful' and violated the company's free speech and due process rights. The company accused the government of illegally retaliating against it for adhering to AI safety principles, and asked a judge to undo the designation and block federal agencies from enforcing it.
The Washington Post reported that the U.S. military used Anthropic's Claude AI to help strike around 1,000 targets in the first 24 hours of operations against Iran, even amid the blacklisting dispute. Claude assisted in war-planning by optimizing target selection, analyzing intelligence data, and providing precise location coordinates through satellite image assessment as part of the Pentagon's Maven Smart System.
The Anthropic-Pentagon clash intensified fears about government surveillance capabilities, with experts warning that AI paired with the Trump administration's sweeping data collection efforts posed new threats to individual privacy. The dispute highlighted broader concerns about acceptable use cases for military AI and the role of tech companies in setting ethical boundaries.
As the conflict continued, Georgetown University's Center for Security and Emerging Technology and other experts examined the expanding use of artificial intelligence in warfare, particularly in the Iran conflict. The use of AI to shorten the 'kill chain' and provide real-time targeting options raised urgent questions about the need for military AI regulation.
The U.S. war with Iran began affecting AI infrastructure across the Middle East, raising questions about large-scale investments by American tech firms in the region. Industry leaders had committed billions of dollars to delivering chips and building data centers in Gulf nations, but the conflict put these investments at risk as tensions spilled across the region.