I Never Would’ve Guessed the Skynet Problem Would Come Before the Mass Layoffs

Gizmodo · Feb 27, 2026

Summary

You don't need to worry about AI taking your job during a nuclear apocalypse.

Full Article

You may have heard that the Department of Defense and Anthropic are fighting over the AI company’s guardrails for Claude. Every day brings fresh leaks, and now the Washington Post is reporting that the Pentagon allegedly presented a scenario involving a nuclear missile attack against the U.S. as a manipulative way of asking whether Anthropic would allow its AI model to be used to defend the country. “Anthropic chief executive Dario Amodei’s answer rankled the Pentagon, according to the official, who characterized the CEO’s reply as: You could call us, and we’d work it out,” the Washington Post reports. The Pentagon didn’t like that answer, of course, and Anthropic denies the account. But the fact that we’re having this discussion at all is quite a jolt to the senses as we think about the future of AI, especially as Defense Secretary Pete Hegseth threatens to invoke the Defense Production Act to strip Claude’s guardrails and allow the AI to engage in things like mass domestic surveillance and fully automated warfare. America’s military leaders apparently want to use AI in all of the situations that the past 80 years of sci-fi have warned us about. And it’s kind of weird that an AI-induced nuclear winter might arrive before the robots take all of our jobs.

Whose jobs are getting replaced?

Increased automation has always meant a loss of jobs. Those fears have been most pronounced over the past century in blue-collar work, where machines have replaced the manual labor of so many humans in factories. But the rise of AI in recent years has brought those fears to the white-collar world, where many middle-class Americans in the so-called information economy worry they’re about to be replaced by ChatGPT. And they’re right to be concerned. Block announced on Thursday that it’s laying off 40% of its workforce because AI can do the work. But Block’s CEO also admitted that the company overhired during the covid pandemic, raising suspicions about his grandiose proclamations on AI. There haven’t been mass layoffs across the entire economy yet, but it certainly feels like that’s coming, whether it ultimately materializes or not.

Drop your guardrails, or you hate America

At the same time, we’re seeing another danger emerge from AI that’s arguably much more important: fully automated war. Pete Hegseth met with Anthropic CEO Dario Amodei on Tuesday and delivered an ultimatum: either strip Claude of its safeguards, or see Anthropic labeled a “supply chain risk,” a designation that’s never before been applied to an American company. On top of that, Hegseth reportedly threatened to invoke the Defense Production Act, which would allow the Pentagon to force Anthropic to remove those guardrails anyway. The U.S. is not officially at war, and there’s no clear emergency that would necessitate invoking the Defense Production Act. It’s a difficult position for Anthropic, and the company issued a statement Thursday saying it wouldn’t acquiesce to the military’s demands. The deadline for Anthropic to agree is 5:01 p.m. ET on Friday, so we’ll see what the Pentagon decides to do. It all feels so terribly manipulative, hearkening back to the post-9/11 arguments for torture you’d hear in the 2000s. Would you waterboard someone if they knew the details of an impending dirty bomb attack on the U.S.? Would you hook someone’s testicles up to a car battery if it meant stopping another 9/11?

AI is not ready to handle nukes

The idea that we should make our weapons, nuclear or otherwise, fully autonomous is absolutely ridiculous if you listen to the people who actually build these things. Amodei’s letter on Thursday acknowledged that partially autonomous weapons are already being used in some parts of the world, but even the most advanced AI is not ready to be handed the keys.

From the Anthropic letter: “Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk. We have offered to work directly with the Department of War on R&D to improve the reliability of these systems, but they have not accepted this offer. In addition, without proper oversight, fully autonomous weapons cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day. They need to be deployed with proper guardrails, which don’t exist today.”

It’s notable that Amodei isn’t even ruling out the use of AI to fully automate the weapons systems of the future. He’s just arguing that AI isn’t there yet.

Will AI ever be ready?

Researchers at King’s College London recently tested GPT-5.2, Claude Sonnet 4, and Gemini 3 Flash in simulated war games to see how they’d perform. The AI models played 21 games and deployed at least one tactical nuclear weapon in 95% of them, according to New Scientist. AI has no reason to fear deploying nuclear weapons that could wipe out humanity, because it cannot experience fear. These models can tell you about fear; they can talk with you and convince humans that they’re in some way conscious, but they’re not. They are tech products that will not hesitate to push the big red button unless stringent guardrails are put in place to stop them. The military has played with these ideas for decades, first trying to build Skynet with DARPA’s Strategic Computing Initiative in the 1980s. But the tech wasn’t there yet. The advent of AI means we can now properly build an autonomous weapons system that requires no human in the loop. The only question is whether that’s a smart thing to do, especially in a time of rising fascism in the U.S.

Military leaders are acting weird

Undersecretary of Defense Emil Michael chided Amodei in a tweet on Thursday, insisting that the Anthropic CEO was lying about the company’s discussion with the Pentagon. “It’s a shame that @DarioAmodei is a liar and has a God-complex,” wrote Michael. “He wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk. The @DeptofWar will ALWAYS adhere to the law but not bend to whims of any one for-profit tech company.” It’s an astonishing thing to witness if you step back and remember that none of this was normal in the pre-Trump era. Military leadership would never publicly rail against an American CEO, calling him a liar and saying he has a God-complex; it just didn’t happen, for simple reasons of decorum and professionalism. But it also demonstrates two things: first, that the Pentagon is desperate to use Claude, as Michael’s tweet reeks of desperation; and second, that perhaps we should be deeply concerned about what the military wants to do with all of this advanced technology at its disposal. Or, to be more accurate, advanced technology that it wants to take away from a private company. We might get to find out around 5:01 p.m. ET.


