
In late February 2026, a high-stakes confrontation between the Pentagon and AI company Anthropic over military AI safeguards escalated into a broader crisis over AI ethics, government power, and corporate responsibility. The dispute produced a federal ban on Anthropic's technology, sparked a competitive scramble among AI companies, and triggered unprecedented public backlash against OpenAI's subsequent Pentagon deal.
13 events · 2 days · 30 source articles
President Trump posted on Truth Social ordering all federal agencies to stop using Anthropic's technology, allowing a six-month phase-out period. The ban came after Anthropic refused to allow its AI models to be used for mass domestic surveillance or fully autonomous weapons. Defense Secretary Pete Hegseth designated Anthropic as a supply-chain risk.
Just hours after Trump's ban announcement, the US military launched Operation Epic Fury against Iranian targets, using Anthropic's Claude AI tools for intelligence assessments, target identification, and battle scenario simulations. The operation employed B-2 stealth bombers, Tomahawk cruise missiles, and F-35 fighters. The irony of using Anthropic's technology immediately after banning it drew widespread attention.
OpenAI CEO Sam Altman announced his company had reached an agreement with the Pentagon to deploy AI models in classified military environments. Altman admitted the deal was "definitely rushed" and that "the optics don't look good," but claimed it included the same red lines on mass surveillance and autonomous weapons that Anthropic had demanded. The timing and circumstances sparked immediate skepticism.
Sam Altman held a public question-and-answer session on X to address concerns about OpenAI's Pentagon deal. He deflected questions about mass surveillance and automated killing, saying it was not his role to set national policy and expressing belief in "the democratic process." The session failed to quell criticism and highlighted OpenAI's difficulty navigating its new role as national security infrastructure.
Anthropic's Claude chatbot overtook ChatGPT to claim the number one spot in Apple's US App Store free app rankings, rising from sixth place on Wednesday to first by Saturday evening. According to Sensor Tower data, Claude had been outside the top 100 at the end of January. Daily signups broke all-time records every day during this period as users expressed support for Anthropic's ethical stance.
The Verge and MIT Technology Review published detailed analyses showing that OpenAI's claimed safeguards were largely illusory. The company had effectively caved to Pentagon demands while using careful language to obscure this fact. The reports revealed that OpenAI's "compromise" was exactly what Anthropic had feared—allowing the military broad latitude while providing only rhetorical protections.
Anthropic announced new memory import tools allowing users to easily transfer their conversation history and context from ChatGPT, Gemini, or Copilot to Claude. The company also brought memory features to Claude's free tier. The strategic timing capitalized on growing calls for a ChatGPT boycott, making it seamless for users to abandon OpenAI without losing their personalized AI experience.
Treasury Secretary Scott Bessent announced the department was ending all use of Anthropic products, including Claude, following Trump's directive. Bessent stated, "Under President Trump no private company will ever dictate the terms of our national security." The move demonstrated the administration's willingness to enforce the ban aggressively across government agencies.
Hundreds of tech workers from major companies including OpenAI, IBM, Slack, Cursor, and Salesforce Ventures signed an open letter urging the Department of Defense to withdraw its designation of Anthropic as a "supply chain risk." The letter also called on Congress to examine whether using such extraordinary authorities against an American technology company was appropriate.
Data from Sensor Tower showed ChatGPT app uninstalls rose 295% day-over-day on Saturday, March 2, as public backlash intensified against OpenAI's Pentagon deal. Users organized boycotts on social media, with even celebrities like Katy Perry reportedly backing the Claude alternative. The unprecedented response demonstrated how quickly consumer sentiment could shift based on ethical concerns.
As criticism mounted, OpenAI leadership held internal discussions with employees, who had been left in limbo awaiting clarity on the company's position. Sam Altman and other executives defended the deal while acknowledging its rushed nature. The company faced accusations that it was unequipped to manage its new responsibilities as a piece of national security infrastructure while maintaining its consumer reputation.
Under intense pressure, OpenAI and the Pentagon announced amendments to their agreement adding explicit language prohibiting domestic surveillance of U.S. persons and nationals. The additions referenced the Fourth Amendment, the National Security Act, and the Foreign Intelligence Surveillance Act (FISA), and clarified that the limitations prohibit "deliberate tracking, surveillance, or monitoring." Altman published the internal memo on X to demonstrate transparency.
Multiple outlets reported that the Pentagon-Anthropic rupture was "stunning" Silicon Valley and forcing a broader reckoning about how AI companies should work with government. The crisis exposed that no one had a good plan for managing the intersection of commercial AI development, national security needs, democratic accountability, and ethical safeguards. The incident set a precedent that would shape future AI-government relationships.