NewsWorld

Our Agreement with the Department of War

Hacker News · Feb 28, 2026 · Collected from RSS

Summary

Article URL: https://openai.com/index/our-agreement-with-the-department-of-war
Comments URL: https://news.ycombinator.com/item?id=47199948
Points: 104 · Comments: 99

Full Article

Yesterday we reached an agreement with the Pentagon for deploying advanced AI systems in classified environments, which we requested they also make available to all AI companies. We think our agreement has more guardrails than any previous agreement for classified AI deployments, including Anthropic's. Here's why.

We have three main red lines that guide our work with the DoW, which are generally shared by several other frontier labs:

- No use of OpenAI technology for mass domestic surveillance.
- No use of OpenAI technology to direct autonomous weapons systems.
- No use of OpenAI technology for high-stakes automated decisions (e.g. systems such as "social credit").

Other AI labs have reduced or removed their safety guardrails and relied on usage policies as their primary safeguard in national security deployments. We think our approach better protects against unacceptable use.

In our agreement, we protect our red lines through a more expansive, multi-layered approach: we retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections. This is all in addition to the strong existing protections in U.S. law.

We believe strongly in democracy. Given the importance of this technology, we believe that the only good path forward requires deep collaboration between AI efforts and the democratic process. We also believe our technology is going to introduce new risks in the world, and we want the people defending the United States to have the best tools.

Our agreement includes:

1. Deployment architecture. This is a cloud-only deployment, with a safety stack that we run that includes these principles and others. We are not providing the DoW with "guardrails off" or non-safety-trained models, nor are we deploying our models on edge devices (where there could be a possibility of usage for autonomous lethal weapons). Our deployment architecture will enable us to independently verify that these red lines are not crossed, including running and updating classifiers.

2. Our contract. Here is the relevant language:

"The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols. The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities. Per DoD Directive 3000.09 (dtd 25 January 2023), any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing to ensure they perform as intended in realistic environments before deployment.

For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947, the Foreign Intelligence Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign intelligence purpose. The AI System shall not be used for unconstrained monitoring of U.S. persons' private information, consistent with these authorities. The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law."

3. AI expert involvement. We will have cleared forward-deployed OpenAI engineers helping the government, with cleared safety and alignment researchers in the loop.

Why are you doing this?

First, we think the US military absolutely needs strong AI models to support its mission, especially in the face of growing threats from potential adversaries who are increasingly integrating AI technologies into their systems. We originally did not jump into a contract for classified deployment, as we did not feel that our safeguards and systems were ready, and we have been working hard to ensure that a classified deployment can happen with safeguards that keep our red lines from being crossed. We were, and remain, unwilling to remove key technical safeguards to enhance performance on national security work. That is not the correct approach to supporting the US military.

Second, we also wanted to de-escalate things between the DoW and the US AI labs. A good future is going to require real and deep collaboration between the government and the AI labs. As part of our deal here, we asked that the same terms be made available to all AI labs, and specifically that the government try to resolve things with Anthropic; the current state is a very bad way to kick off this next phase of collaboration between the government and AI labs.

Why could you reach a deal when Anthropic could not? Did you sign the deal they wouldn't?

Based on what we know, we believe our contract provides better guarantees and more responsible safeguards than earlier agreements, including Anthropic's original contract. We think our red lines are more enforceable here because deployment is limited to cloud-only (not at the edge), keeps our safety stack working in the way we think is best, and keeps cleared OpenAI personnel in the loop. We don't know why Anthropic could not reach this deal, and we hope that they and more labs will consider it.

Do you think Anthropic should be designated as a "supply chain risk"?

No, and we have made our position on this clear to the government.

Will this deal enable the Department of War to use OpenAI models to power autonomous weapons?

No. Based on our safety stack, our cloud-only deployment, the contract language, and existing laws, regulations, and policy, we are confident that this cannot happen. We will also have OpenAI personnel in the loop for additional assurance.

Will this deal enable the Department of War to use OpenAI models to conduct mass surveillance on U.S. persons?

No. Based on our safety stack, the contract language, and existing laws that heavily restrict the DoW from domestic surveillance, we are confident that this cannot happen. We will also have OpenAI personnel in the loop for additional assurance.

Do you have to deploy models without a safety stack?

No, we retain full control over the safety stack we deploy and will not deploy without safety guardrails. In addition, our safety and alignment researchers will be in the loop and help improve systems over time. We know that other AI labs have reduced model guardrails and relied on usage policies as the primary safeguard, but we think our layered approach better protects against unacceptable use.

What happens if the government violates the terms of the contract?

As with any contract, we could terminate it if the counterparty violates the terms. We don't expect that to happen.

What if the government just changes the law or existing DoW policies?

Our contract explicitly references the surveillance and autonomous-weapons laws and policies as they exist today, so even if those laws or policies change in the future, use of our systems must still remain aligned with the current standards reflected in the agreement.

In their post, Anthropic states two of their red lines (we have the same two, plus a third: automated high-stakes decision making) and the reasons they do not believe these red lines would be upheld in the contracts they had seen from the DoW at that time. Below is why we believe those same red lines would hold in our contract:

Mass domestic surveillance. It was clear in our interactions that the DoW considers mass domestic surveillance illegal and was not planning to use our technology for this purpose. We ensured that our contract makes explicit that such use is not covered under lawful use.

Fully autonomous weapons. The cloud deployment surface covered in our contract would not permit powering fully autonomous weapons, as this would require edge deployment. In addition to these protections, our contract offers additional layered safeguards, including our safety stack and OpenAI technical experts in the loop.
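Editor's note: the "multi-layered safety stack" described above can be pictured as classifiers screening both the request and the model's output on the server side. The sketch below is a hypothetical illustration of that pattern only; the classifier names, flagged phrases, and thresholds are invented for this example and do not reflect OpenAI's actual implementation, which the post does not disclose.

```python
# Hypothetical sketch of a layered, server-side safety stack: classifiers
# screen the request before the model sees it, and screen the output after.
# All names and rules here are illustrative, not OpenAI's real stack.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Verdict:
    allowed: bool
    reason: str = ""

def surveillance_classifier(text: str) -> Verdict:
    # Stand-in for a trained classifier: a simple phrase match.
    for phrase in ("mass surveillance", "track all citizens"):
        if phrase in text.lower():
            return Verdict(False, f"blocked: matched '{phrase}'")
    return Verdict(True)

def targeting_classifier(text: str) -> Verdict:
    if "autonomous weapon" in text.lower():
        return Verdict(False, "blocked: autonomous-weapons red line")
    return Verdict(True)

def run_with_safety_stack(prompt: str,
                          model: Callable[[str], str],
                          checks=(surveillance_classifier,
                                  targeting_classifier)) -> str:
    # Layer 1: screen the request before it reaches the model.
    for check in checks:
        verdict = check(prompt)
        if not verdict.allowed:
            return verdict.reason
    output = model(prompt)
    # Layer 2: screen the model's output against the same red lines.
    for check in checks:
        verdict = check(output)
        if not verdict.allowed:
            return verdict.reason
    return output

# Usage with a stand-in model:
echo_model = lambda p: f"[model response to: {p}]"
print(run_with_safety_stack("Summarize this logistics report.", echo_model))
print(run_with_safety_stack("Design a mass surveillance system.", echo_model))
```

Because the checks run in the cloud alongside the model, the operator can update or add classifiers without touching anything deployed at the edge, which is the verification property the post emphasizes.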



Read Original at Hacker News

Related Articles

Hacker News · about 21 hours ago
OpenAI reaches deal to deploy AI models on U.S. DoW classified network

Article URL: https://www.reuters.com/business/openai-reaches-deal-deploy-ai-models-us-department-war-classified-network-2026-02-28/
Comments URL: https://news.ycombinator.com/item?id=47189853
Points: 12 · Comments: 4

Hacker News · about 22 hours ago
OpenAI agrees with Dept. of War to deploy models in their classified network

https://xcancel.com/sama/status/2027578652477821175
https://fortune.com/2026/02/27/openai-in-talks-with-pentagon...
Comments URL: https://news.ycombinator.com/item?id=47189650
Points: 96 · Comments: 32

TechCrunch · 1 day ago
ChatGPT reaches 900M weekly active users

OpenAI shared the new numbers as part of its announcement that it has raised $110 billion in private funding.

Engadget · 1 day ago
OpenAI secures another $110 billion in funding from Amazon, NVIDIA and SoftBank

OpenAI just announced a massive funding round of $110 billion, which is one of the biggest investment rounds in Silicon Valley history. The investors feature many of the usual suspects, including Amazon with $50 billion, NVIDIA with $30 billion and SoftBank with $30 billion. This investment brings OpenAI to a $730 billion valuation. "We’re super excited about this deal," OpenAI CEO Sam Altman told CNBC. "AI is going to happen everywhere." That last statement seems more like a threat than a boast, but I digress. Beyond the funding round, OpenAI has announced strategic partnerships with both NVIDIA and Amazon. This will involve Amazon Web Services (AWS) running OpenAI models for enterprise customers to "build generative AI applications and agents at production scale." It also names AWS as the exclusive third-party cloud distribution provider for OpenAI Frontier, which is an agentic enterprise platform. OpenAI has also committed to consuming 2 gigawatts of Amazon's Trainium capacity, which is the company's custom-designed AI training accelerator. In other words, Amazon is spending a lot of money on OpenAI and then OpenAI will turn around and spend a lot of money with Amazon. The AI funding ouroboros continues. It's also worth noting that Amazon's investment in OpenAI will be staggered. The funding begins with $15 billion, but the remaining $35 billion will only be invested when certain conditions are met. Oddly, it's been reported that one condition is that OpenAI achieves artificial general intelligence. AGI is when AI evolves to or beyond human-level abilities, at which point the entire world turns into rainbows and everyone gets a pony. This could happen later this year, according to those bullish on the technology, or never, according to many researchers. Sam Altman said it was coming in 2025 but has since grown weary of the term. The new partnership with NVIDIA evolves the long-standing collaboration between the two companies. OpenAI has pledged to consume 2 gig

The Verge · 1 day ago
OpenAI snags $110 billion in investments from Amazon, Nvidia, and Softbank

OpenAI has closed another round of funding, totalling $110 billion being newly committed to the maker of ChatGPT, which it says has more than 900 million weekly active users and over 50 million consumer subscribers. Amazon is investing $50 billion and striking a deal that includes plans for custom models and more. Nvidia and SoftBank are each contributing $30 billion, as well, even as the Wall Street Journal notes that Nvidia's previous $100 billion investment plan is "on ice." This marks another massive influx of cash for the company that's now valued at $730 billion, and previously closed a $40 billion round in 2025. At the time, it was th … Read the full story at The Verge.

Hacker News · 1 day ago
OpenAI's $110B funding round (investments from Amazon, Nvidia, SoftBank)

Article URL: https://www.reuters.com/business/retail-consumer/amazon-invest-50-billion-openai-2026-02-27/ Comments URL: https://news.ycombinator.com/item?id=47181051 Points: 21 # Comments: 18