NewsWorld

We don’t have to have unsupervised killer robots

The Verge · Feb 27, 2026 · Collected from RSS

Summary

It's the day of the Pentagon's looming ultimatum for Anthropic: allow the US military unchecked access to its technology, including for mass surveillance and fully autonomous lethal weapons, or be designated a "supply chain risk" and potentially lose hundreds of billions of dollars in contracts. Amid the intensifying public statements and threats, tech workers across the industry are looking at their own companies' government and military contracts, wondering what kind of future they're helping to build. While the Department of Defense has spent weeks negotiating with Anthropic over removing its guardrails, including allowing t … Read the full story at The Verge.

Full Article

It’s the day of the Pentagon’s looming ultimatum for Anthropic: allow the US military unchecked access to its technology, including for mass surveillance and fully autonomous lethal weapons, or be designated a “supply chain risk” and potentially lose hundreds of billions of dollars in contracts. Amid the intensifying public statements and threats, tech workers across the industry are looking at their own companies’ government and military contracts, wondering what kind of future they’re helping to build.

While the Department of Defense has spent weeks negotiating with Anthropic over removing its guardrails, including allowing the US military to use Anthropic’s AI to kill targets with no human oversight, OpenAI and xAI had reportedly already agreed to such terms, although OpenAI is reportedly attempting to adopt the same red lines in its agreements as Anthropic. The overall situation has left employees at some companies with defense contracts feeling betrayed. “When I joined the tech industry, I thought tech was about making people’s lives easier,” an Amazon Web Services employee told The Verge, “but now it seems like it’s all about making it easier to surveil and deport and kill people.”

In conversations with The Verge, current and former employees from OpenAI, xAI, Amazon, Microsoft, and Google expressed similar feelings about the changing moral landscape of their companies. Organized groups representing 700,000 tech workers at Amazon, Google, Microsoft, and more have signed a letter demanding that the companies reject the Pentagon’s demands. But many saw little chance of their employers — whether they’re directly embroiled in this conflict or not — questioning the government or pushing back.

“From their perspective, they’d love to keep making money and not have to talk about it,” said a software engineer from Microsoft.

So far, Anthropic has stood its ground.
Anthropic CEO Dario Amodei put out a statement on Thursday saying that the Pentagon’s “threats do not change our position: we cannot in good conscience accede to their request.” But he has stated that he is not at all opposed to lethal autonomous weapons sometime in the future, just that the technology is not reliable enough “today.” Amodei even offered to partner with the DoD on “R&D to improve the reliability of these systems, but they have not accepted this offer,” he wrote in the statement.

In the past few years, however, major tech companies have loosened their rules or changed their mission statements to expand into lucrative government and military contracts. In 2024, OpenAI removed a ban on “military and warfare” use cases from its terms of service; after that, it signed a deal with autonomous weapons maker Anduril and then its DoD contract. And just this week, Anthropic changed its oft-touted responsible scaling policy, dropping its longtime safety pledge in order to stay competitive in the AI race. Big Tech players like Amazon, Google, and Microsoft have also allowed defense and intelligence agencies to use their AI products, with some agreeing to work with ICE despite growing outcry from the public and employees alike.

In past years, tech workers’ resistance to partnerships and deals they deemed harmful to society at large sometimes led to big change. In 2018, for instance, thousands of Google employees successfully pressured the company to end its “Project Maven” partnership with the Pentagon, and Microsoft workers presented leadership with an anti-ICE petition signed by about 500 employees, though Microsoft still works with the agency. In 2020, after the murder of George Floyd, tech companies made public statements and financial commitments supporting the Black Lives Matter movement.
But in recent months, the industry has seen a very different reality: a culture of fear and silence, especially amid cooperation with the Trump administration and ICE, tech workers recently told The Verge. Companies have followed in the footsteps of longtime surveillance and military tech partners, which have only become more hawkish. That includes the Peter Thiel-cofounded Palantir, whose CEO Alex Karp recently told shareholders that “Palantir is here to disrupt and make the institutions we partner with the very best in the world, and, when it’s necessary, to scare enemies and on occasion kill them. And we hope you’re in favor of that.” (Protect Democracy, a nonprofit, recently put out an open letter calling for congressional oversight of the Department of Defense’s demands for unrestricted use of AI.)

OpenAI, Google, Microsoft, xAI, and Amazon did not immediately respond to requests for comment.

A former xAI employee told The Verge, “Everyone is actually working on killer robots at this point,” adding that he believes everyone will follow in the footsteps of Palantir, Anduril, and xAI, since the government sentiment is that if a company doesn’t acquiesce, it’s “against the benefits of the country, in a sense.” He said there’s a “big push for working with the military, and the trend is it’s cool to do it… You’re a patriot if you do it.”

A Google employee called the situation a “dominance display from Hegseth that is disgusting.” He added, “Over and over, AI is presenting us with choices about who we want to be and what kind of society and future we want to have. And they’re coming at us fast and with, really, the least thoughtful and least principled leaders in power that we could imagine.
I can only thank Anthropic for insisting on the decent path and using their leverage — that they are indispensable — to chart a course toward a humane world and a humane future.”

The AWS employee told The Verge that “boundaries have definitely eroded in terms of the customers big tech is willing to court” and that there’s “a deliberate whitewashing of the implications of new lucrative deals.” She recalled recently receiving an email from an AWS executive touting a more than $580 million contract with the US Air Force, among other partnerships, as a sign of Amazon’s AI successes, with no acknowledgment of the broader scope or harms involved.

“If the government is hell-bent on pursuing technologies like this, they should have to build them themselves, and be answerable for those decisions,” she said.

The erosion may have extended to internal culture as well, normalizing the idea that companies should always be watching. The AWS employee said that she and her colleagues are tracked on how much they’re using AI for their jobs, how often they’re working from the office, and more. “I can see myself and my coworkers getting more desensitized to surveillance on ourselves at work, and I’m worried that means we’re obeying, complying, and giving up too much in advance,” she said.

An OpenAI employee said the general feeling within the AI industry over the last few weeks “has reopened the door to more discussion… about the values and the future of the technology.” The employee said that the Pentagon-Anthropic situation, the recent ICE headlines, and the fast advancement of AI have been some of the main factors opening up those discussions internally.

Even so, people who are immigrants or in more vulnerable positions are more afraid to speak, the OpenAI employee said.

Anthropic, the former xAI employee said, seems like it’s in a position where it can say no and still stay afloat.
Its focus on enterprise rather than consumer AI business may make it more sustainable even without government contracts, offering it some leverage. A software engineer at Microsoft said of Anthropic, speaking generally, “I was surprised to see them stand on some form of principle. I don’t know how long it’ll last.”

“Will it last?” seems to be the question on everyone’s lips. The Pentagon has already reportedly asked two major defense contractors, Boeing and Lockheed Martin, to provide information about their reliance on Anthropic’s Claude, as it moves to potentially designate Anthropic a “supply chain risk,” a classification usually reserved for threats to national security and rarely, if ever, assigned to a US company. It also may reportedly be considering invoking the Defense Production Act to try to force Anthropic to comply with its request.

Just like with any other AI company, if Anthropic folds, the Microsoft employee said, there’s little chance of it or others pulling back on killer robots and surveillance. “Once you’re in the door with the Department of Defense or whatever we’re calling it now… I think it’s probably hard for them to actually have the oversight they claim. It’s just going to be lucrative to basically give themselves permission to do the thing that makes the most money.”

In Microsoft’s own case, he said he doesn’t expect the company to adhere to any sort of ethical principles. The company has worked extensively with the Israel Defense Forces, including on mass surveillance of Palestinians and dissidents, despite employee protest. (It said it ended some parts of the partnership last year.)

Another Microsoft employee told The Verge that although “Microsoft holds a Responsible AI ‘commitment,’… they are currently attempting to play both sides for the sake of profit rather than meaningfully committing to Responsible AI.”

But this is nothing new, one AI startup employee said.
In her eyes, the boundaries have often been “fuzzy, especially within AI,” about what kinds of things companies are willing to let their technology power. “A lot of it has been going on beneath the surface for as long as AI has been around.”

The AWS employee emphasized that “we need cross-tech solidarity and a coherent, worker-led vision for AI now more than ever.”

“The safeguards that Anthropic is trying to keep in place are no mass surveillance of Americans and no fully autonomous weapons, which just means that they want a human in the loop if the machine is going to kill somebody,” she added. “Even if this technology were perfect — which it isn’t — I think most Americans don’t want machines that k



Read Original at The Verge

Related Articles

The Verge · about 3 hours ago
AI vs. the Pentagon: killer robots, mass surveillance, and red lines

U.S. Secretary of War Pete Hegseth speaks during a Cabinet meeting alongside President Donald Trump and Commerce Secretary Howard Lutnick in the Cabinet Room of the White House, January 29, 2026. (Photo by Win McNamee/Getty Images)

Can AI firms set limits on how and where the military uses their models? Anthropic is in heated negotiations with the Pentagon after refusing to comply with new military contract terms that would require it to loosen the guardrails on its AI models, allowing for “any lawful use,” even mass surveillance of Americans and fully autonomous lethal weapons. Pentagon CTO Emil Michael is pushing for Anthropic to be designated a “supply chain risk” if it doesn’t comply, a label usually only given to national security threats. Anthropic’s rivals OpenAI and xAI have reportedly agreed to the new terms, but even after a White House meeting with Defense Secretary Pete Hegseth, Anthropic CEO Dario Amodei is still refusing to cross his company’s red line, stating that “threats do not change our position: we cannot in good conscience accede to their request.” Follow along here for the latest updates on the clash between AI companies and the Pentagon…

We don’t have to have unsupervised killer robots
Anthropic refuses Pentagon’s new terms, standing firm on lethal autonomous weapons and mass surveillance
Pete Hegseth’s Pentagon AI bro squad includes a former Uber executive and a private equity billionaire
Inside Anthropic’s existential negotiations with the Pentagon

The Hill · about 3 hours ago
Altman says OpenAI agrees with Anthropic’s red lines in Pentagon dispute

OpenAI CEO Sam Altman said Friday that he agrees with Anthropic’s red lines in its increasingly contentious negotiations with the Pentagon over the terms of use for the company’s AI models. As the feud between Anthropic and the Department of Defense (DOD) has reached a boiling point, the AI firm has refused to budge on...

TechCrunch · about 4 hours ago
Employees at Google and OpenAI support Anthropic’s Pentagon stand in open letter

While Anthropic has an existing partnership with the Pentagon, the AI company has remained firm that its technology not be used for mass domestic surveillance or fully autonomous weaponry.

Hacker News · about 5 hours ago
The Pentagon is making a mistake by threatening Anthropic

Article URL: https://www.understandingai.org/p/the-pentagon-is-making-a-mistake
Comments URL: https://news.ycombinator.com/item?id=47181380
Points: 149 | Comments: 105

Financial Times · about 6 hours ago
Big Tech workers press bosses to back Anthropic in Pentagon clash

Amazon, Google and Microsoft staff urge their executives to adopt tough AI guardrails and refuse any defence contracts

France 24 · about 6 hours ago
Anthropic refuses to bend to Pentagon on AI safeguards

A public showdown between the Trump administration and Anthropic is hitting an impasse as military officials demand the artificial intelligence company bend its ethical policies by Friday or risk damaging its business. Anthropic CEO Dario Amodei drew a sharp red line 24 hours before the deadline, declaring his company “cannot in good conscience accede” to the Pentagon’s final demand to allow unrestricted use of its technology.