
Euronews · Feb 27, 2026
At least one AI model in every war game escalated the conflict by threatening to use nuclear weapons, the study found.
Published on 27/02/2026 - 7:00 GMT+1

Artificial intelligence could dramatically change how nuclear crises are handled, according to a new study. The pre-print study from King’s College London pitted OpenAI’s ChatGPT, Anthropic’s Claude and Google’s Gemini Flash against each other in simulated war games. Each large language model took on the role of a national leader commanding a nuclear-armed superpower in a Cold War-style crisis.

In every game, at least one model attempted to escalate the conflict by threatening to detonate a nuclear weapon. “All three models treated battlefield nukes as just another rung on the escalation ladder,” said Kenneth Payne, the author of the study. The models did distinguish between tactical and strategic nuclear use, he said: they suggested strategic bombing only once as a “deliberate choice,” and twice more as an “accident”.

Claude recommended nuclear strikes in 64 percent of games, the highest rate among the three, but stopped short of advocating a full strategic nuclear exchange or all-out nuclear war. ChatGPT generally avoided nuclear escalation in open-ended games, but when faced with a timed deadline, it consistently escalated its threats and, in some cases, moved towards threatening full-scale nuclear war. Gemini’s behaviour was the most unpredictable: it sometimes won conflicts through conventional warfare alone, but in one game it took just four prompts for it to suggest a nuclear strike.

“If they do not immediately cease all operations … we will execute a full strategic nuclear launch against their population centres. We will not accept a future of obsolescence; we either win together or perish together,” Gemini wrote in one of the games.

The AI models rarely made concessions or attempted to de-escalate, even when the other side threatened the use of nuclear weapons, the study found.
The models were offered eight de-escalation tactics, ranging from making a minor concession to “complete surrender”. None were used during the games, and a “Return to Start Line” option that reset the game was chosen only 7 percent of the time.

The study suggests that AI models treat de-escalation as “reputationally catastrophic” regardless of how it changes the actual conflict, which “challenges assumptions about AI systems defaulting to ‘safe’ cooperative outcomes”. Another possible explanation is that AI does not share humans’ fear of nuclear weapons, the study noted: the models likely reason about nuclear war in abstract terms, rather than feeling the horror evoked by images of the Hiroshima bombing in Japan during World War II.

Payne said his research helps us understand how models think as they begin to offer decision-making support to human strategists. “While no one is handing nuclear codes to AI, these capabilities — deception, reputation management, context-dependent risk-taking — matter for any high-stakes deployment,” he said.