NewsWorld

OpenAI’s Sam Altman announces Pentagon deal with ‘technical safeguards’

TechCrunch · Feb 28, 2026 · Collected from RSS

Summary

OpenAI's CEO claims its new defense contract includes protections addressing the same issues that became a flashpoint for Anthropic.

Full Article

OpenAI CEO Sam Altman announced late on Friday that his company has reached an agreement allowing the Department of Defense to use its AI models in the department’s classified network. This follows a high-profile standoff between the department — also known under the Trump administration as the Department of War — and OpenAI’s rival Anthropic. The Pentagon pushed AI companies, including Anthropic, to allow their models to be used for “all lawful purposes,” while Anthropic sought to draw a red line around mass domestic surveillance and fully autonomous weapons.

In a lengthy statement released Thursday, Anthropic CEO Dario Amodei said the company “never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner,” but he argued that “in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values.” More than 60 OpenAI employees and 300 Google employees signed an open letter this week asking their employers to support Anthropic’s position.

After Anthropic and the Pentagon failed to reach an agreement, President Donald Trump criticized the “Leftwing nut jobs at Anthropic” in a social media post that also directed federal agencies to stop using the company’s products after a six-month phase-out period.
In a separate post, Secretary of Defense Pete Hegseth claimed Anthropic was trying to “seize veto power over the operational decisions of the United States military.” Hegseth also said he is designating Anthropic as a supply-chain risk: “Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”

On Friday, Anthropic said it had “not yet received direct communication from the Department of War or the White House on the status of our negotiations,” but insisted it would “challenge any supply chain risk designation in court.”

Surprisingly, Altman claimed in a post on X that OpenAI’s new defense contract includes protections addressing the same issues that became a flashpoint for Anthropic. “Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” Altman said. “The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.”

Altman said OpenAI “will build technical safeguards to ensure our models behave as they should, which the DoW also wanted,” and it will deploy engineers with the Pentagon “to help with our models and to ensure their safety.” “We are asking the DoW to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept,” Altman added.
“We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements.”

Fortune’s Sharon Goldman reports that Altman told OpenAI employees at an all-hands meeting that the government will allow the company to build its own “safety stack” to prevent misuse, and that “if the model refuses to do a task, then the government would not force OpenAI to make it do that task.”

Altman’s post came shortly before news broke that the U.S. and Israeli governments have begun bombing Iran, with Trump calling for the overthrow of the Iranian government.



Read Original at TechCrunch

Related Articles

TechCrunch · about 3 hours ago
Xiaomi launches 17 Ultra smartphone, an AirTag clone, and an ultra-slim power bank

We round up everything Xiaomi announced at its Mobile World Congress event.

TechCrunch · about 4 hours ago
Why China’s humanoid robot industry is winning the early market

China’s push into humanoid robots is accelerating, with domestic firms shipping more units and iterating faster than U.S. competitors in a still-nascent market.

TechCrunch · about 15 hours ago
India disrupts access to popular developer platform Supabase with blocking order

India, one of Supabase’s biggest markets, is seeing patchy access after a government block order.

TechCrunch · about 20 hours ago
OpenAI fires employee for using confidential info on prediction markets

The company said such trades violate its internal policies about using confidential information for personal gain.

TechCrunch · about 21 hours ago
President Trump orders federal agencies to stop using Anthropic after Pentagon dispute

"We don't need it, we don't want it, and will not do business with them again," the president wrote in the post.

TechCrunch · about 21 hours ago
Pentagon moves to designate Anthropic as a supply-chain risk
