
The Verge · Mar 2, 2026
On Friday evening, amidst fallout from a standoff between the Department of Defense and Anthropic, OpenAI CEO Sam Altman announced that his own company had successfully negotiated new terms with the Pentagon. The US government had just moved to blacklist Anthropic for standing firm on two red lines for military use: no mass surveillance of Americans and no lethal autonomous weapons (or AI systems with the power to kill targets without human oversight). Altman, however, implied that he’d found a unique way to keep those same limits in OpenAI’s contract.

“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” Altman wrote. “The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement,” he added, using the Trump Administration’s preferred name for the Defense Department, the Department of War.

Across social media and the AI industry, people immediately began to challenge Altman’s claim. Why, they asked, would the Pentagon suddenly agree to these red lines when it had said — in no uncertain terms — that it would never do so?

The answer, sources told The Verge, is that the Pentagon didn’t budge. OpenAI agreed to follow laws that have allowed for mass surveillance in the past, while insisting they protect its red lines.

One source familiar with the Pentagon’s negotiations with AI companies confirmed that OpenAI’s deal is much softer than the one Anthropic was pushing for, thanks largely to three words: “any lawful use.” In negotiations, the person said, the Pentagon wouldn’t back down on its desire to collect and analyze bulk data on Americans. If you look line by line at the OpenAI terms, the source said, every aspect of it boils down to: if it’s technically legal, then the US military can use OpenAI’s technology to carry it out.
And over the past decades, the US government has stretched the definition of “technically legal” to cover sweeping mass surveillance programs — and more.

OpenAI’s former head of policy research, Miles Brundage, said on X that “in light of what external lawyers and the Pentagon are saying, OpenAI employees’ default assumption here should unfortunately be that OpenAI caved + framed it as not caving, and screwed Anthropic while framing it as helping them.”

In a statement to The Verge, OpenAI spokesperson Kate Waters said the Pentagon had not asked for mass surveillance powers and denied that the agreement allowed for the crossing of certain lines. “The system cannot be used to collect or analyze Americans’ data in a bulk, open-ended, or generalized way,” Waters said.

AI systems could help the military (or other departments) conduct widespread surveillance operations with unprecedented levels of detail. AI’s greatest strength is finding patterns, and human behavior is nothing if not a set of patterns — imagine an AI system layering, for any one individual, geolocation data, web browsing information, personal financial data, CCTV footage, voter registration records, and more — some publicly available, some purchased from data brokers.

“Using these systems for mass domestic surveillance is incompatible with democratic values,” Anthropic CEO Dario Amodei wrote in a statement. “Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person’s life — automatically and at massive scale.”

While Anthropic says it pushed for a contract that specifically proscribes the practice, OpenAI appears to rely heavily on existing legal limits.
It said its Pentagon agreement states that “for intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign intelligence purpose.”

But this isn’t reassuring. In the years after 9/11, US intelligence agencies ramped up a surveillance system that they determined fell within the legal limits OpenAI cites, including multiple mass domestic spying operations (along with apparently highly invasive international ones). In 2013, National Security Agency intelligence contractor Edward Snowden revealed the extent of some of these programs, such as reportedly collecting telephone records of Verizon customers on an “ongoing, daily” basis, and gathering bulk data on individuals from tech companies like Microsoft, Google, and Apple via a secretive program called PRISM. Despite promises of reform from intelligence agencies and attempts at legal changes, few significant limits to these powers were enacted.

Mike Masnick, founder of Techdirt, said online that OpenAI’s deal “absolutely does allow for domestic surveillance. EO 12333 is how the NSA hides its domestic surveillance by capturing communications by tapping into lines *outside the US* even if it contains info from/on US persons.”

“The intelligence law section of this is very persuasive if you don’t realize that every bad intelligence scandal in the last 30 years had a legal memo saying it complied with those authorities,” Palisade Research’s Dave Kasten wrote of OpenAI’s agreement.

The Pentagon “has not asked us to support that type of collection or analysis, and our agreement does not permit it,” Waters said. “Our agreement does not permit uses of our models for unconstrained monitoring of U.S. persons’ private information, and all intelligence activities must comply with existing US law. In practical terms, this means the system cannot be used to collect or analyze Americans’ data in a bulk, open-ended, or generalized way.”

Anthropic’s Amodei has publicly said that the law has not yet caught up with AI’s ability to conduct surveillance on a massive scale. And Altman takes pains in his statement to say that OpenAI’s contract “reflects [its red lines] in law and policy,” meaning that it’s simply abiding by existing laws and existing Pentagon policies, the latter of which can change at any time. (OpenAI attempts to address the latter issue in a Q&A, where it says the contract “explicitly references the surveillance and autonomous weapons laws and policies as they exist today, so that even if those laws or policies change in the future, use of our systems must still remain aligned with the current standards reflected in the agreement.”)

Sarah Shoker, a senior research scholar at the University of California, Berkeley, and former lead of OpenAI’s geopolitics team, told The Verge that “I think there are a lot of modifying words that are in the sentences that the [OpenAI] spokesperson gave.” Shoker added that the vagueness of the language doesn’t make it clear what exactly is prohibited. “The use of the word ‘unconstrained,’ the use of the word ‘generalized,’ ‘open-ended’ manner — that’s not a complete prohibition.
That is language that’s designed to allow optionality for the leadership… It allows leaders also not to lie to their employees in the event that the Pentagon does use the LLM in a legal manner without OpenAI leadership’s knowledge.”

Based on what we’ve seen of OpenAI’s existing contract and according to the Pentagon’s current legal constraints, it could legally use OpenAI’s technology to search foreign intelligence databases for information on Americans on a large scale. The Pentagon could also buy bulk location data from data brokers and use OpenAI’s tech to map out Americans’ typical patterns, or to quickly and seamlessly build profiles of many American citizens from publicly available data, including surveillance footage, social media posts, online news, voter registration records, and more, potentially layered onto other data it had purchased already.

OpenAI’s “red line” on lethal autonomous weapons is similarly weak. The company’s contract with the Pentagon, from which it released excerpts on Saturday, states that OpenAI’s technology “will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control.” That would put it in compliance with a 2023 Department of Defense directive. There appear to be no additional contractually obligated bans or restrictions — which is ostensibly why it was able to sign an agreement with the Pentagon.
Anthropic, meanwhile, sought a ban on unsupervised lethal autonomous weapons, at least until it deemed the technology ready.

The source said that most of OpenAI’s agreement was nothing new: other AI companies involved in Pentagon deals had seen similar terms before, whether as elements floated in negotiations or as practices they had already adopted.

OpenAI’s technical safeguards aren’t new — and their power is limited

After a Trump administration official confirmed that OpenAI’s agreement “flows from the touchstone of ‘all lawful use,’” Altman cited other parts of the agreement to make the case that OpenAI was maintaining its red lines. He said some OpenAI employees would receive security clearances to check in on the systems, for example, and that OpenAI would introduce classifiers (small models that can monitor and tag large models, potentially blocking them from performing certain actions). In OpenAI’s blog post about the agreement, the company writes that its deployment architecture “will enable us to independently verify that these red lines are not crossed, including running and updating classifiers.”

But that’s not necessarily true, the source said: AI companies involved with the Pentagon already use these safeguards, and their impact is limited. Classifiers, for instance, wouldn’t be able to confirm