
Gizmodo · Feb 28, 2026
At the outbreak of a new war, Altman is closer than ever to the Pentagon.
In an X post Friday evening, Sam Altman announced that his company, OpenAI, had just “reached an agreement with the Department of War to deploy our models in their classified network.” The timing is both startling and significant, making Altman a sort of poster boy for AI at war.

Hours earlier, OpenAI’s chief competitor, Anthropic, had been hit with a blacklisting of sorts from the Pentagon: a designation as a “supply-chain risk to national security.” Anthropic has declared “red lines” around the use of its tech for mass surveillance and fully autonomous weapons, and the Pentagon finds this unacceptable. So, per Secretary of War Pete Hegseth, no company that works with the Pentagon at all “may conduct any commercial activity with Anthropic.” As Axios notes, the Pentagon’s legal rationale for the designation remains to be seen, and the supply-chain-risk designation is usually reserved for companies based in, and potentially supportive of, countries deemed hostile to the U.S. In any case, the move matches the well-established Trump 2.0 pattern of whacking any party that displeases the administration with the largest, spikiest club available, and letting the courts decide later whether the use of a given club was valid.

But Anthropic’s loss is, at least theoretically, Sam Altman’s gain. To back up a bit: Anthropic’s very existence is a slap in the face to Altman. The company was created as essentially a spin-off from OpenAI, supposedly dedicated to standards of ethics and safety that CEO Dario Amodei and his team perceived OpenAI as not having upheld. So the Super Bowl commercials in which Anthropic not-so-subtly trashed OpenAI were not, it appears, the product of a friendly rivalry. Altman and Amodei are bad at concealing their apparent animosity toward one another; at a photo op for AI leaders in India earlier this month, the two conspicuously declined to shake hands.
As my Gizmodo colleague AJ Dellinger has already noted, leaked remarks from Sam Altman, seemingly timed to accompany OpenAI’s Pentagon deal, show Altman grasping for some kind of moral stance similar to Amodei’s on surveillance and autonomous killbots. But any such claim on Altman’s part has already been hand-waved away as pure bluster by State Department official and former DOGE official Jeremy Lewin, who posted on X that Altman’s stated principles were, in practice, just feel-good fluff added to an agreement that, Lewin strongly implies, gives OpenAI zero power to stop the Pentagon from doing whatever it wants with OpenAI’s models. In contrast to Anthropic, the company has “reached the patriotic and correct answer here,” Lewin writes.

But Altman’s flirtation with “red lines” of his own was already contradicted in spirit by remarks he made earlier this month in his long X post about Anthropic’s mean Super Bowl ads (worth reading in full because it’s a hall-of-fame example of being Not Mad):

“First, the good part of the Anthropic ads: they are funny, and I laughed. But I wonder why Anthropic would go for something so clearly dishonest. Our most important principle for ads says that we won’t do exactly this; we would obviously never run ads in the way Anthropic…” — Sam Altman (@sama), February 4, 2026

In the course of complaining about the ad, Altman takes a long detour to pop off about, essentially, the same thing that made the Pentagon angry. Amodei’s company, Altman says, “wants to control what people do with AI.” They also, he says, “block companies they don’t like from using their coding product (including us), [and] they want to write the rules themselves for what people can and can’t use AI for.” Whatever terms you want to use for Anthropic’s strategy, it’s been blisteringly effective from a business standpoint.
If 2025 was Google’s year of AI success, 2026 has so far been Anthropic’s, with the hype around its flagship product, Claude Code, amounting to an enterprise version of the 2022 ChatGPT earthquake. Anthropic’s day-to-day moves have set Wall Street’s agenda throughout the year, and this month Anthropic surpassed OpenAI in total cash raised.

Rather bizarrely, though, Altman also took a populist-sounding stance in his anti-Anthropic Super Bowl rant, claiming that “Anthropic serves an expensive product to rich people.” This is effectively meaningless, since OpenAI and Anthropic both charge for subscriptions and API access. But Altman seems to be positioning ad-supported ChatGPT, and perhaps some future ad-supported version of OpenAI’s coding product, Codex, as the democratic, normie versions of these products, in contrast to Anthropic being for “rich people.” His intended framing may not be the one the public absorbs.

To be clear, reality and perception may be on two entirely different tracks here. The Pentagon denies that Anthropic’s stated reasons are the core of the issue. “This has nothing to do with mass surveillance and autonomous weapons being used. The Pentagon has only given out lawful orders,” an anonymous Pentagon official reportedly told CBS News. Also, Anthropic’s self-imposed rules constraining its ability to scale up were revised into flexible guidelines on Tuesday.

Hours after Altman’s announcement of a deal with Trump’s Pentagon, that same Pentagon launched what the president has called “major combat operations” against Iran in conjunction with Israel. A poll from the Associated Press and the University of Chicago published earlier this week showed that a majority of Americans already have little to no trust in Trump when it comes to national security, and a fresh YouGov poll shows that more Americans oppose war with Iran than support it.
A Gallup poll published yesterday found, rather astonishingly, that more Americans now “sympathize” with the Palestinians than with the Israelis. Against that political backdrop, Anthropic’s “red lines” fight with the Pentagon has created a symbolic space labeled “AI That Is Unquestioningly Friendly to the U.S. War Machine” in big neon letters, and moved Anthropic’s narrative squarely outside of it. Sam Altman and OpenAI, it appears, are willingly stepping into that space.

Military contractors currently using Anthropic products like Claude Code will have six months to phase them out, according to the Pentagon, and Anthropic has already declared that in the meantime it will challenge the designation in the courts. During that time, however, the public perception of Anthropic will be unshackled from the perception of this new war with Iran. The same can’t be said of OpenAI.