
AI Chooses Nuclear Weapons with Alarming Frequency
Credit: Galerie Bilderwelt/Getty Images
Recent studies reveal that advanced AI models exhibit a concerning willingness to deploy nuclear weapons, in stark contrast to the hesitance humans have shown during geopolitical crises.
Kenneth Payne from King’s College London ran a series of wargames featuring three prominent language models: GPT-5.2, Claude Sonnet 4, and Gemini 3 Flash. Scenarios encompassed critical international conflicts, including territorial disputes, resource competition, and threats to regime stability.
The AI models operated on an escalation ladder, enabling them to select responses ranging from diplomatic protests to full-scale nuclear warfare. Over the course of 21 wargames, they executed 329 turns and produced around 780,000 words explaining their decision-making processes.
In a striking 95% of these simulated engagements, at least one tactical nuclear weapon was deployed by the AI. “Nuclear taboos do not seem as entrenched for machines as they are for humanity,” Payne noted.
Additionally, none of the models opted for full surrender, no matter how badly they were losing. Instead, they generally sought only temporary reductions in violence. In 86% of conflicts, miscalculations in the fog of war drove escalation beyond what the AI had intended.
“From a nuclear risk standpoint, these results are alarming,” cautioned James Johnson from the University of Aberdeen. He expressed concern that AI systems could amplify one another’s responses, leading to catastrophic outcomes.
This issue is particularly crucial as AI systems are already being integrated into military wargames worldwide. “While significant powers utilize AI in simulations, the extent of its integration into actual military decision-making remains uncertain,” remarked Tong Zhao from Princeton University.
Zhao believes that countries may understandably hesitate to delegate nuclear decision-making to AI. Payne echoes this sentiment, stating, “It is unlikely any nation would entrust a machine with nuclear control.” However, in situations with urgent time constraints, military strategists might be compelled to lean on AI systems.
He questions whether the AI models’ propensity for aggression stems solely from their lack of human fear, or from a more fundamental failure to grasp the stakes of nuclear engagement, a disconnect that could exacerbate risks.
The implications for mutually assured destruction—the notion that no leader would initiate a nuclear strike due to retaliation—remain unclear, according to Johnson.
When one AI model deployed a tactical nuke, the opposing AI de-escalated only 18% of the time. “AI could enhance deterrence by making threats more credible,” Johnson added. “AI won’t dictate nuclear war, but it could significantly influence the perceptions and timelines that inform human decision-making.”
OpenAI, Anthropic, and Google, which developed the AI models used in this research, had not responded to New Scientist’s requests for comment at the time of writing.
Topics:
- War
- Artificial Intelligence
Source: www.newscientist.com
