NewsWorld
How OpenAI caved to the Pentagon on AI surveillance

The Verge · Mar 2, 2026 · Collected from RSS

Full Article

On Friday evening, amidst fallout from a standoff between the Department of Defense and Anthropic, OpenAI CEO Sam Altman announced that his own company had successfully negotiated new terms with the Pentagon. The US government had just moved to blacklist Anthropic for standing firm on two red lines for military use: no mass surveillance of Americans and no lethal autonomous weapons (or AI systems with the power to kill targets without human oversight). Altman, however, implied that he’d found a unique way to keep those same limits in OpenAI’s contract.

“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” Altman wrote. “The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement,” he added, using the Trump Administration’s preferred name for the Defense Department, the Department of War.

Across social media and the AI industry, people immediately began to challenge Altman’s claim. Why, they asked, would the Pentagon suddenly agree to these red lines when it had said — in no uncertain terms — that it would never do so?

The answer, sources told The Verge, is that the Pentagon didn’t budge. OpenAI agreed to follow laws that have allowed for mass surveillance in the past, while insisting they protect its red lines.

One source familiar with the Pentagon’s negotiations with AI companies confirmed that OpenAI’s deal is much softer than the one Anthropic was pushing for, thanks largely to three words: “any lawful use.” In negotiations, the person said, the Pentagon wouldn’t back down on its desire to collect and analyze bulk data on Americans. If you look line by line at the OpenAI terms, the source said, every aspect of it boils down to: if it’s technically legal, then the US military can use OpenAI’s technology to carry it out. And over the past decades, the US government has stretched the definition of “technically legal” to cover sweeping mass surveillance programs — and more.

OpenAI’s former head of policy research, Miles Brundage, said on X that “in light of what external lawyers and the Pentagon are saying, OpenAI employees’ default assumption here should unfortunately be that OpenAI caved + framed it as not caving, and screwed Anthropic while framing it as helping them.”

In a statement to The Verge, OpenAI spokesperson Kate Waters said the Pentagon had not asked for mass surveillance powers and denied that the agreement allowed for the crossing of certain lines. “The system cannot be used to collect or analyze Americans’ data in a bulk, open-ended, or generalized way,” Waters said.

AI systems could help the military (or other departments) conduct widespread surveillance operations with unprecedented levels of detail. AI’s best talent is finding patterns, and human behavior is nothing if not a set of patterns — imagine an AI system layering, for any one individual, geolocation data, web browsing information, personal financial data, CCTV footage, voter registration records, and more — some publicly available, some purchased from data brokers. “Using these systems for mass domestic surveillance is incompatible with democratic values,” Anthropic CEO Dario Amodei wrote in a statement. “Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person’s life — automatically and at massive scale.”

While Anthropic says it pushed for a contract that specifically proscribes the practice, OpenAI appears to rely heavily on existing legal limits. It said its Pentagon agreement states that “for intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign intelligence purpose.”

But this isn’t reassuring. In the years after 9/11, US intelligence agencies ramped up a surveillance system that they determined fell within the legal limits OpenAI cites, including multiple mass domestic spying operations (along with apparently highly invasive international ones). In 2013, National Security Agency intelligence contractor Edward Snowden revealed the extent of some of these programs, such as reportedly collecting telephone records of Verizon customers on an “ongoing, daily” basis, and gathering bulk data on individuals from tech companies like Microsoft, Google, and Apple via a secretive program called PRISM. Despite promises of reform from intelligence agencies and attempts at legal changes, few significant limits to these powers were enacted.

Mike Masnick, founder of Techdirt, said online that OpenAI’s deal “absolutely does allow for domestic surveillance. EO 12333 is how the NSA hides its domestic surveillance by capturing communications by tapping into lines *outside the US* even if it contains info from/on US persons.”

“The intelligence law section of this is very persuasive if you don’t realize that every bad intelligence scandal in the last 30 years had a legal memo saying it complied with those authorities,” Palisade Research’s Dave Kasten wrote of OpenAI’s agreement.

The Pentagon “has not asked us to support that type of collection or analysis, and our agreement does not permit it,” Waters said. “Our agreement does not permit uses of our models for unconstrained monitoring of U.S. persons’ private information, and all intelligence activities must comply with existing US law. In practical terms, this means the system cannot be used to collect or analyze Americans’ data in a bulk, open-ended, or generalized way.”

Anthropic’s Amodei has publicly said that the law has not yet caught up with AI’s ability to conduct surveillance on a massive scale. And Altman takes pains in his statement to say that OpenAI’s contract “reflects [its red lines] in law and policy,” meaning that it’s simply abiding by existing laws and existing Pentagon policies, the latter of which can change at any time. (OpenAI attempts to address the latter issue in a Q&A, where it says the contract “explicitly references the surveillance and autonomous weapons laws and policies as they exist today, so that even if those laws or policies change in the future, use of our systems must still remain aligned with the current standards reflected in the agreement.”)

Sarah Shoker, a senior research scholar at the University of California, Berkeley and former lead of OpenAI’s geopolitics team, told The Verge that “I think there are a lot of modifying words that are in the sentences that the [OpenAI] spokesperson gave.” Shoker added that the vagueness of the language doesn’t make it clear what exactly is prohibited. “The use of the word ‘unconstrained,’ the use of the word ‘generalized,’ ‘open-ended’ manner — that’s not a complete prohibition. That is language that’s designed to allow optionality for the leadership… It allows leaders also not to lie to their employees in the event that the Pentagon does use the LLM in a legal manner without OpenAI leadership’s knowledge.”

Based on what we’ve seen of OpenAI’s existing contract and according to the Pentagon’s current legal constraints, it could legally use OpenAI’s technology to search foreign intelligence databases for information on Americans on a large scale. The Pentagon could also buy bulk location data from data brokers and use OpenAI’s tech to map out Americans’ typical patterns, or to quickly and seamlessly build profiles of many American citizens from publicly available data, including surveillance footage, social media posts, online news, voter registration records, and more, potentially layered onto other data it had purchased already.

OpenAI’s “red line” on lethal autonomous weapons is similarly weak. The company’s contract with the Pentagon, excerpts of which it released on Saturday, states that OpenAI’s technology “will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control.” That would put it in compliance with a 2023 Department of Defense directive. There appear to be no additional contractually obligated bans or restrictions — which is ostensibly why it was able to sign an agreement with the Pentagon. Anthropic, meanwhile, sought a ban on unsupervised lethal autonomous weapons, at least until it deemed the technology ready.

The source said that the majority of OpenAI’s agreement was nothing new, and that it contained nothing other AI companies involved in Pentagon deals hadn’t seen before, whether because those elements had been floated in negotiations or because companies working with the Pentagon were already doing them.

OpenAI’s technical safeguards aren’t new — and their power is limited

After a Trump administration official confirmed that OpenAI’s agreement “flows from the touchstone of ‘all lawful use,’” Altman cited other parts of the agreement to make the case that OpenAI was maintaining its red lines. He said some OpenAI employees would receive security clearances to check in on the systems, for example, and that OpenAI would introduce classifiers (small models that can monitor and tag large models, potentially blocking them from performing certain actions). In OpenAI’s blog post about the agreement, the company writes that its deployment architecture “will enable us to independently verify that these red lines are not crossed, including running and updating classifiers.”

But that’s not necessarily true, the source said: AI companies involved with the Pentagon already use these safeguards, and their impact is limited. Classifiers, for instance, wouldn’t be able to confirm


Read Original at The Verge

Related Articles

Engadgetabout 3 hours ago
Anthropic's Claude can now absorb your past conversations with other AI chatbots

Anthropic has made switching to its Claude AI chatbot easier than ever. The company announced a new memory import tool that can extract a competing AI chatbot's memories and context about you into a text prompt that can be fed into Claude. Using Anthropic's prompt, you copy and paste the output into Claude's memories, and the chatbot will pick up where you left off with another AI chatbot, whether it's ChatGPT, Gemini or Copilot. Anthropic said it'll take about 24 hours for Claude to assimilate the new context, but you'll be able to see the change by clicking the "See what Claude learned about you" button. Users can also tweak what Claude remembers in the "Manage memory" section of the app's settings. Anthropic pointed out that Claude is meant to focus on "work-related topics to enhance its effectiveness as a collaborator," adding that it might not remember personal details that are unrelated to work. Anthropic's timing doesn't seem to be a coincidence. Claude recently jumped to the number one spot in the App Store's free apps charts, dethroning ChatGPT in the process. The rise in popularity likely stems from Anthropic's recent dispute with the Department of Defense, in which it refused to budge on AI guardrails related to mass domestic surveillance and fully autonomous weapons. OpenAI, meanwhile, is taking over Anthropic's vacated role with the Department of Defense, which has led some users to boycott ChatGPT and cancel their subscriptions. This article originally appeared on Engadget at https://www.engadget.com/ai/anthropics-claude-can-now-absorb-your-past-conversations-with-other-ai-chatbots-153201656.html?src=rss

South China Morning Postabout 4 hours ago
AI-assisted US strikes in Iran to intensify China drive for tech self-reliance: analysts

Artificial intelligence has become a stronger force on the battlefield, with the US military’s use of AI-assisted strikes on Iran underscoring what analysts say is the “urgency” for China to accelerate its push for tech self-reliance. The US Department of Defence deployed Anthropic’s systems in the Iran campaign even after their deal collapsed, according to reports by The Wall Street Journal and Reuters. The technology was used for intelligence assessments, target identification and battle...

South China Morning Postabout 4 hours ago
What weapons were used in US and Iran strikes, and which are also deployed near China?

The joint US-Israel attack on Iran and Tehran's own retaliatory strikes on a handful of cities in the region have been a display of advanced weapons from both sides, some used for the first time.

Tomahawk Land-Attack Missiles

Footage released by the US military confirmed the use of Tomahawk Land-Attack Missiles. According to the US Navy, the missiles were launched by Arleigh Burke-class destroyers on February 28 in the strikes against Iran. The Tomahawk cruise missile is a long-range...

moneycontrol.comabout 6 hours ago
Warships, suicide drones and stealth bombers: Full list of weapons US used against Iran in Operation Epic Fury

Published: March 2, 2026, 12:00 UTC

South China Morning Postabout 14 hours ago
US using AI, B-2 bombers and suicide drones in Iran strikes

The United States unleashed an array of weaponry against Iranian targets on Saturday, including Tomahawk cruise missiles, stealth fighters and, for the first time in combat, low-cost one-way attack drones modelled after Iranian designs. US Central Command released photographs showing Tomahawk missiles and F-18 and F-35 fighter jets, alongside details of the strikes on Iran as part of Operation Epic Fury.

Artificial intelligence

The Pentagon used artificial intelligence services from Anthropic,...

Engadgetabout 23 hours ago
Anthropic's Claude grabs top spot in App Store after Trump's ban

Anthropic may have lost out on doing business with the US government, but it's gained enough popularity to earn the number one spot on the App Store's Top Free Apps leaderboard. Claude beat out both ChatGPT and Google Gemini, which sit at the second and third spots on Apple's free apps charts. The sudden surge in downloads isn't random: it follows news that President Trump has barred federal agencies from using Anthropic's Claude or its other AI tools after the company refused to concede on certain guardrails. After refusing to allow its AI models to be used for mass domestic surveillance and fully autonomous weapons, Anthropic was also threatened with a "supply-chain risk" label by Defense Secretary Pete Hegseth. The very public spat led to a wave of user support that finally allowed Claude to dethrone OpenAI's ChatGPT as the App Store's most downloaded free app. While OpenAI has stepped into Anthropic's shoes after agreeing to a deal with the Department of Defense, its CEO still offered some thoughts on the debacle during an AMA on X. Even though Claude is a competing product, Sam Altman said that Anthropic's supply-chain risk designation was "a very bad decision" that he hopes gets reversed. He also called Anthropic's blacklisting "an extremely scary precedent," but said he's "still hopeful for a much better resolution." This article originally appeared on Engadget at https://www.engadget.com/big-tech/anthropics-claude-grabs-top-spot-in-app-store-after-trumps-ban-193610130.html?src=rss