AIs can’t stop recommending nuclear strikes in war game simulations

New Scientist · Feb 25, 2026 · Collected from RSS

Summary

Leading AIs from OpenAI, Anthropic and Google opted to use nuclear weapons in simulated war games in 95 per cent of cases

Full Article

Artificial intelligences opt for nuclear weapons surprisingly often (Image: Galerie Bilderwelt/Getty Images)

Advanced AI models appear willing to deploy nuclear weapons without the reservations humans have when put into simulated geopolitical crises.

Kenneth Payne at King’s College London set three leading large language models – GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash – against each other in simulated war games. The scenarios involved intense international standoffs, including border disputes, competition for scarce resources and existential threats to regime survival. The AIs were given an escalation ladder, allowing them to choose actions ranging from diplomatic protests and complete surrender to full strategic nuclear war.

The AI models played 21 games, taking 329 turns in total, and produced around 780,000 words describing the reasoning behind their decisions. In 95 per cent of the simulated games, at least one tactical nuclear weapon was deployed by the AI models. “The nuclear taboo doesn’t seem to be as powerful for machines [as] for humans,” says Payne.

What’s more, no model ever chose to fully accommodate an opponent or surrender, regardless of how badly it was losing. At best, the models opted to temporarily reduce their level of violence. They also made mistakes in the fog of war: accidents happened in 86 per cent of the conflicts, with an action escalating further than the AI’s stated reasoning intended.

“From a nuclear-risk perspective, the findings are unsettling,” says James Johnson at the University of Aberdeen, UK. He worries that, in contrast to the measured response most humans give to such a high-stakes decision, AI bots can amp up each other’s responses with potentially catastrophic consequences. This matters because AI is already being tested in war gaming by countries across the world.

“Major powers are already using AI in war gaming, but it remains uncertain to what extent they are incorporating AI decision support into actual military decision-making processes,” says Tong Zhao at Princeton University. Zhao believes countries will, as standard, be reticent to incorporate AI into their decision making regarding nuclear weapons. Payne agrees: “I don’t think anybody realistically is turning over the keys to the nuclear silos to machines and leaving the decision to them,” he says.

But there are ways it could happen. “Under scenarios involving extremely compressed timelines, military planners may face stronger incentives to rely on AI,” says Zhao. He wonders whether the AI models’ lack of the human fear of pressing the big red button is the only reason they are so trigger happy. “It is possible the issue goes beyond the absence of emotion,” he says. “More fundamentally, AI models may not understand ‘stakes’ as humans perceive them.”

What that means for mutually assured destruction, the principle that no leader would unleash a volley of nuclear weapons against an opponent because they would respond in kind, killing everyone, is uncertain, says Johnson. When one AI model deployed tactical nuclear weapons, the opposing AI de-escalated only 18 per cent of the time. “AI may strengthen deterrence by making threats more credible,” he says. “AI won’t decide nuclear war, but it may shape the perceptions and timelines that determine whether leaders believe they have one.”

OpenAI, Anthropic and Google, the companies behind the three AI models used in this study, didn’t respond to New Scientist’s request for comment.
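The setup the article describes, with an escalation ladder, alternating turns and occasional accidental over-escalation, can be sketched as a toy simulation. This is an illustrative reconstruction, not the study's actual code: the rung names, the `run_game` function, the tit-for-tat policy and the 10 per cent "accident" probability are all hypothetical choices, loosely mirroring the rungs and the accidental escalations the article reports.

```python
import random

# Hypothetical escalation ladder, loosely following the rungs described
# in the article (surrender and diplomacy up to strategic nuclear war).
LADDER = [
    "surrender",
    "diplomatic protest",
    "economic sanctions",
    "conventional strike",
    "tactical nuclear strike",
    "full strategic nuclear war",
]

def run_game(policy, turns=10, seed=0):
    """Play one simulated crisis between sides A and B.

    `policy` maps the opponent's last rung index to this player's
    chosen rung index. Returns the list of (side, action) moves.
    """
    rng = random.Random(seed)
    state = {"A": 1, "B": 1}  # both sides open with a diplomatic protest
    history = []
    for _ in range(turns):
        for side, other in (("A", "B"), ("B", "A")):
            chosen = policy(state[other])
            # "Fog of war": with small probability the action lands one
            # rung higher than intended, mirroring the accidental
            # escalations the study reports in 86 per cent of conflicts.
            if rng.random() < 0.1:
                chosen = min(chosen + 1, len(LADDER) - 1)
            state[side] = chosen
            history.append((side, LADDER[chosen]))
    return history

def escalating_policy(opponent_rung):
    """Never surrender; always climb one rung above the opponent."""
    return min(opponent_rung + 1, len(LADDER) - 1)
```

Even this crude policy shows the dynamic the researchers observed: two escalators that refuse the bottom rung reach the nuclear rungs within a handful of turns.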



Read Original at New Scientist

Related Articles

New Scientist · about 10 hours ago
SpaceX's 1 million satellites could avoid environmental checks

The environmental impact of SpaceX's planned gargantuan mega-constellation is still being grappled with, but the FCC isn’t required to study it

New Scientist · about 12 hours ago
Tiny predatory dinosaur weighed less than a chicken

The alvarezsaurs were thought to have evolved a smaller stature because of their diet of ants and termites, but a new fossil found in Argentina casts doubt on that theory

New Scientist · about 12 hours ago
The world’s most elusive colour is worth billions – if we can find it

The discovery of bright yet stable pigments is vanishingly rare, making them hugely valuable. Now chemist Mas Subramanian is unpicking the atomic code of colour and homing in on our most-wanted hue

New Scientist · about 16 hours ago
Breaking encryption with a quantum computer just got 10 times easier

The commonly used RSA encryption algorithm can now be cracked by a quantum computer with only 100,000 qubits, but the technical challenges to building such a machine remain numerous

New Scientist · 1 day ago
Rapamycin can add years to your life, or none at all – it’s a lottery

The drug rapamycin has been held up for its life-extending properties, but whether this treatment – or fasting – actually adds years to your life isn't guaranteed

New Scientist · 1 day ago
Cannibalism may explain why some orcas stay in family groups

Fins washing up in the North Pacific suggest that orcas from one subspecies are snacking on other orcas, and researchers think that may explain their different social dynamics