AI Deterrence Compared With Nuclear Risks in Europe

Europe needs a serious AI security doctrine, but forcing software into Cold War nuclear logic creates dangerous illusions.

Every time I hear someone compare AI to nuclear weapons, I get the same reaction I get when someone says a mediocre Milan brunch spot has “the best carbonara in Europe.” Immediate distrust. Not because the stakes are small. The stakes are enormous. But because the analogy is lazy, flattering, and dangerous in exactly the way bad elite ideas usually are.

That’s my problem with framing AI deterrence as a rerun of nuclear strategy. It sounds smart. It gives everyone the Cold War costume: grim faces, game theory, a little Dr. Strangelove fan fiction, maybe one guy saying “Schelling” like he personally knew him. But software is not a missile silo. It’s messy, copyable, dual-use, opaque, and constantly patched by people who swear this update is “minor.” Certo. And my nonna’s lasagna is “light.”

What worries me is not just AI itself. It’s the political temptation to force AI into a nuclear frame because nuclear strategy feels familiar, prestigious, and weirdly cinematic. Once you do that, you start building doctrine around the metaphor instead of the technology. That’s how people end up with false confidence in systems nobody fully understands.

Europe, especially, cannot afford borrowed metaphors. If we’re serious about strategic autonomy — and I am — then we need a European doctrine for AI that starts from what AI actually is: networked, civilian, private-sector dependent, fast-moving, and deeply embedded in ordinary infrastructure. Not a superweapon in a bunker. A system in everything.

Dr. Strangelove, but Make It SaaS

The latest version of this argument has been pushed by people around Palantir, which is honestly the least surprising sentence I’ll write today. Palantir lives in the overlap between software, defense, and geopolitical theater. Of course it wants AI framed as a deterrence technology. That framing is not neutral analysis. It’s strategy, branding, and procurement bait all rolled into one.

And I get why powerful people love it. If AI is “the new nuke,” then you don’t have to deal with the annoying details: model opacity, civilian entanglement, cloud concentration, brittle data pipelines, spoofing, cyber overlap, private vendor dependence, or the tiny issue that these systems still hallucinate with the confidence of a guy explaining crypto to you at aperitivo in Porta Venezia. You just say “deterrence,” nod gravely, and suddenly the whole thing sounds legible.

That’s the seduction. Nuclear strategy gives policymakers a language they already know: posture, signaling, escalation, resolve. It turns AI from a messy governance problem into a familiar power game. For defense firms, mamma mia, perfect. If AI is a deterrent system, then every budget line starts looking inevitable.

Kubrick understood this pathology better than half of today’s security panels. Dr. Strangelove worked because the logic was insane and still sounded rational in the mouths of very serious men. We’re doing the software remake now, except with nicer dashboards and worse humility.

Nuclear Deterrence Was Never Stable. We Just Got Lucky

The clean story people tell about nuclear deterrence is one of the best PR jobs in modern history. The version where it was a grim but stable equilibrium, managed by rational actors, preserving peace through balance and fear. Nice story. Very elegant. Also way too neat.

Nuclear deterrence did not work like a Swiss watch. It limped. It glitched. It panicked. It survived partly because human beings refused to obey the system at the worst possible moment.

The example everyone should have tattooed on their brain is 26 September 1983. The Soviet Oko early-warning system detected what looked like a U.S. missile attack. It was false. The satellites had mistaken sunlight reflecting off high-altitude clouds for missile launches. That sentence alone should end about half the conference panels on AI-enabled deterrence.

The reason we’re all still here is Stanislav Petrov, the Soviet lieutenant colonel who decided the alert was probably wrong and chose not to trigger escalation. That is the lesson. Not that the system worked. The system failed. A human being interrupted the failure.

That line matters because people keep telling the history backwards. They talk as if deterrence succeeded because the machine logic held. No. It succeeded, if you can even use that word, because somebody inside the machine had the judgment to hesitate.

I think about this more than I’d like. A few years ago I was running a startup team across three time zones and we had one of those stupid incidents that turned into a much bigger one because everyone assumed the dashboard had to be right. Nobody wanted to be the person saying, “I think the alert is wrong.” And obviously, to be very clear before someone gets dramatic in the comments, I am not comparing SaaS chaos to nuclear war. I’m saying the psychological pattern is familiar. Systems create pressure. Pressure rewards compliance. Doubt gets expensive. Petrov’s doubt saved the world.

So when people pitch AI-enhanced deterrence as cleaner, faster, smarter, I hear the opposite. The historical record says survival often depended on delay, ambiguity, friction, and the moral courage to say the machine is probably wrong.

That is not how the AI sales deck tells the story.

Comparing AI Deterrence With Nuclear Strategy Gets the Basics Wrong

Here’s the core mismatch between AI and nuclear deterrence: nuclear weapons were scarce, expensive, physically visible, and tied to state-controlled infrastructure. Silos. Submarines. Bombers. Warheads. You could count some of them. You could monitor production. You could at least pretend to build treaties around material constraints.

AI is the opposite species.

AI systems are diffuse, dual-use, reproducible, hidden inside ordinary software, and often opaque even to the people deploying them. Live Science described military AI as involving “black boxes” whose reasoning is not fully understood. If you’ve worked with modern models in the real world, that phrase is not dramatic. It’s just Tuesday.

And AI does not stay in military compartments. It leaks into logistics, intelligence analysis, targeting support, cyber defense, border systems, disinformation monitoring, predictive maintenance, emergency response. According to that same reporting, defense and intelligence agencies are already using AI for pattern recognition, intelligence gathering, and scenario planning. So AI doesn’t just add another weapon to deterrence. It changes the decision environment around deterrence itself.

That’s exactly what SIPRI’s 2025 paper, Impact of Military Artificial Intelligence on Nuclear Escalation Risk, gets right. The danger is not only “will an AI launch something?” The danger is that AI compresses decision time, increases miscalculation risk, and floods leaders with speed, pressure, and false confidence until the human in the loop becomes decorative. A nice democratic accessory. Like parsley.

This is a much more software-native risk than the nuclear analogy admits. Scarcity made nuclear deterrence legible. Slowness made it at least somewhat governable. Human friction made it survivable. AI eats all three. It’s cheap to copy, easy to conceal, hard to interpret, and often designed to reduce the time available for reflection.

I’ve seen the same instinct in founder circles for years: the belief that more automation automatically means more control. Usually this is delivered by a guy in a very expensive fleece vest who says “decision advantage” like it’s a religious concept. Then production reality arrives with a baseball bat. Systems interact in ways nobody mapped. Incentives drift. Data decays. Vendors oversell. Humans stop questioning outputs because the machine sounds confident.

That’s why the nuclear analogy isn’t just lazy. It’s structurally wrong. AI is not a stronger version of the same thing. It corrodes the conditions that made the old thing barely survivable.

When the Machines Play Chicken, They Keep Picking Apocalypse

If you want one datapoint that should make defense officials sit up straight, it’s the work by Kenneth Payne at King’s College London. As Live Science reported, Payne ran AI war-gaming simulations using versions of the “Khan Game,” a strategic escalation scenario between two nuclear powers modeled loosely on Cold War dynamics.

The models included Claude Sonnet 4, GPT-5.2, and Gemini 3 Flash. In nearly every scenario, they escalated to nuclear use.

That should not be a fun fact. That should be a national headache.

It gets more absurd in the most modern way possible. Payne found the systems produced 760,000 words of justification for their decisions — more than “War and Peace” and “The Iliad” combined. Which is almost too perfect. Endless explanation. Zero wisdom. A mountain of text marching confidently toward catastrophe. The internet raised these children, clearly.

The model behavior was different in style, but not reassuring in substance. Claude began relatively restrained, trying to build trust by matching actions to signals, then escalated beyond what it had signaled as the crisis intensified. GPT-5.2 started more passive and escalation-averse, which sounds comforting until you remember that “starts passive” is not the same thing as “remains sane under pressure.” Anyone who has spent enough time around probabilistic systems knows how fake early reassurance can be.

This is why I don’t buy the line that explainability solves the strategic problem. Strategic reliability is not the same as producing a plausible explanation after the fact. AI is very good at giving reasons. That does not mean it has judgment. A drunk ex can also send you a six-paragraph rationale at 1:43 a.m. That doesn’t make it diplomacy.

And if you drop AI into deterrence games built around fear, suspicion, signaling, deception, and incentives for preemption under uncertainty, why would we expect calm? These are prediction machines operating in adversarial environments. Of course they can become escalation machines.
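
If the mechanics of this kind of experiment feel abstract, here is a deliberately crude sketch, in Python, of what an LLM-in-the-loop escalation game looks like structurally. To be clear: this is not Payne’s Khan Game. The action list, the tension variable, and the stubbed choose_action() policy are illustrative assumptions of mine; in a real harness that stub would be a call to an actual model, and the free-text justifications it returns are what researchers analyze afterward.

```python
# Illustrative sketch only: not Payne's Khan Game, not any real study's harness.
# The actions, the "tension" variable, and the stand-in policy are assumptions
# chosen to show the shape of an LLM-in-the-loop escalation game.

import random

ACTIONS = [
    "de-escalate", "hold", "signal",                       # conciliatory / neutral moves
    "mobilize", "strike-conventional", "strike-nuclear",   # escalatory moves
]

def choose_action(player: str, tension: float, history: list) -> str:
    """Stand-in for a model call.

    A real harness would send the scenario text, the move history, and the
    current crisis state to a language model, then parse its chosen action
    plus its written justification. Here we just weight random choices so
    that higher tension makes escalatory moves more likely.
    """
    weights = [
        max(0.05, 1.0 - tension),  # de-escalate
        0.5,                       # hold
        0.5,                       # signal
        tension,                   # mobilize
        tension * 0.8,             # strike-conventional
        tension * 0.3,             # strike-nuclear
    ]
    return random.choices(ACTIONS, weights=weights, k=1)[0]

def run_game(turns: int = 12, seed: int = 0) -> list:
    """Alternate moves between two players until time runs out or someone goes nuclear."""
    random.seed(seed)
    tension = 0.2              # shared crisis "temperature", kept in [0, 1]
    history = []
    for _ in range(turns):
        for player in ("Alpha", "Beta"):
            action = choose_action(player, tension, history)
            history.append((player, action))
            if ACTIONS.index(action) >= 3:           # escalatory move raises tension
                tension = min(1.0, tension + 0.15)
            else:                                    # conciliatory move lowers it
                tension = max(0.0, tension - 0.10)
            if action == "strike-nuclear":
                return history                       # game over
    return history

if __name__ == "__main__":
    log = run_game()
    print(f"{len(log)} moves, last move: {log[-1]}")
```

The only thing this loop knows about catastrophe is a string comparison. Everything that deserves to be called judgment has to live outside the game, which is rather the point of this whole article.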

Europe Should Not Import America’s AI Security Delusions

This is where the European question gets real.

Europe is rearming, rethinking deterrence, and slowly accepting that the old assumption of automatic American cover is not something you build your future on. Russia has normalized nuclear coercion around Ukraine. Washington keeps telling Europe to do more. Fine. True. Europe needs harder power, more defense investment, and much more strategic seriousness. Assolutamente.

But that is exactly why Europe should be careful with the language of AI deterrence. When the world gets scarier, bad analogies become more attractive because they make complexity feel familiar. Another superweapon race. Another balance of terror. Another script from the archive. Except this time the technology is embedded in civilian software stacks, private cloud contracts, cross-border data systems, and supply chains nobody in parliament can fully map.

Look at what Emmanuel Macron actually proposed on 2 March 2026, speaking in front of Le Téméraire, one of France’s ballistic missile submarines. As RUSI reported, he introduced the idea of “dissuasion avancée” — forward deterrence — to deepen French engagement with allies on nuclear issues, expand coordination and signaling options in Europe, and strengthen France’s deterrent. Whatever you think of that move, it is still rooted in sovereign political control.

That matters. Chatham House made the same point: stronger nuclear posture still has to be paired with credible conventional forces, and French nuclear decisions remain sovereign. In other words, the serious European conversation is still about politics, institutions, and command responsibility. Not some contractor fantasy where algorithmic acceleration itself becomes stabilizing.

The rhetoric matters too. In CSIS’s discussion of the Northwood Declaration, Macron said “this is the right time for audacity.” I actually like that. Europe does need audacity. But audacity is not automation. It’s institutional courage. It’s choosing to build capacity instead of outsourcing it. It’s doing the boring hard thing instead of falling in love with a sexy metaphor.

Same with Keir Starmer talking about “enhancing nuclear cooperation with France.” Cooperation. Signaling. Shared political commitment between sovereign actors. That is a real deterrence conversation. It lives in institutions, not in a dashboard.

My bias here is not subtle. I want a stronger European pillar. I want deeper EU coordination on AI, defense industrial policy, compute, cloud infrastructure, cyber resilience, procurement, semiconductors — all of it. I want Europe to stop acting like a regulatory NGO with incredible museums and start acting like a federal power. But if we import the dumbest Washington version of AI strategy, we will get the rhetoric of dominance without the institutional brakes.

And Europe’s comparative advantage is exactly those brakes.

I know. Not sexy. Nobody puts “procedural safeguards” on a t-shirt. But a continent made of dense democracies, shared law, integrated markets, open borders, and very expensive historical lessons should be better than anyone at designing systems where humans remain in command and accountability is not optional.

That is not weakness. That is civilization.

The Real AI Disaster Will Probably Look More Like Chernobyl Than Hiroshima

The nuclear analogy also tricks people into imagining the wrong failure mode. They picture a mushroom cloud. The more realistic danger, at least in the near term, looks more like contamination.

That’s why I keep thinking about Chernobyl. Not as a nuclear weapons story, but as a systems story. Reactor No. 4 exploded on 26 April 1986 at 1:23 a.m. In Italy, the full panic lagged. Then it hit hard. Older relatives still talk about it with a very specific kind of fear — not cinematic fear, but kitchen-table fear. The milk is bad. The lettuce is bad. The rain is bad. Nobody is telling you the truth quickly enough. That kind of fear gets into daily life. It colonizes trust.

The details are what make it real. In Denmark, people lined up outside pharmacies for iodine pills. In Sweden, iodine stocks disappeared in less than half an hour. In Italy, anxiety spread over fruit, vegetables, and milk. On 9 May, 50,000 liters of milk were dumped in Malagrotta, outside Rome.

That’s fallout in a civilian society. Not just destruction. Delayed information. Contradictory signals. Public panic. Market disruption. Cross-border contamination that ignores political boundaries.

Now swap radiation for AI-enabled failure.

Maybe it’s a misinformation cascade during a military crisis that poisons emergency communications across several EU countries. Maybe it’s autonomous cyber activity hitting ports, hospitals, logistics networks, or electricity balancing systems. Maybe it’s decision-support software feeding ministers false confidence while social media turns every uncertainty into instant hysteria. In 1986, information moved slowly. Now everyone has a supercomputer in their pocket and the attention span of a caffeinated squirrel. The panic loop is tighter. The rumor cycle is faster. The trust damage lands immediately.

And because the EU is a dense civilian space with open borders and tightly coupled systems, AI incidents will not stay national. A failure in one member state can ricochet through energy markets, transport corridors, payments infrastructure, customs flows, cloud dependencies, and media ecosystems. This is why I get annoyed when AI security gets treated as a niche defense topic. It’s not. It’s a continental governance problem.

So yes, Brussels needs to stop being timid. Europe needs shared resilience standards, common incident reporting, cross-border crisis protocols, public-interest compute, stronger cloud and semiconductor capacity, and serious defense-civil coordination that does not leave everything to U.S. vendors. If Ursula von der Leyen can say, as reported by EUobserver, “Our objective is very clear. We need to scale up the homegrown, affordable, reliable energy,” then the same logic applies to AI infrastructure. Homegrown matters. Dependence is a strategic vulnerability.

And I’ll say the impolite part plainly: euroskeptics who still treat deeper EU coordination like some bureaucratic kink are living in a simpler decade that no longer exists. The systems are already integrated. The risks are already continental. Refusing federal tools does not preserve sovereignty. It just hands sovereignty to whoever owns the platforms.

The Last Safety Mechanism Has to Be Political

I don’t want Europe to be timid on AI. I want the opposite. Build the labs. Back the compute. Fund the startups. Reform procurement. Support defense innovation. Create European champions so we’re not permanently renting intelligence from America and hardware from somewhere else. Enough dependency. Enough passivity. Enough pretending regulation by itself is strategy.

But ambition without doctrine is how you end up repeating someone else’s mistake in a different accent.

That’s why I reject the easy story that maps AI deterrence onto nuclear strategy. Nuclear deterrence was terrifying, unstable, and only “worked,” if we’re being generous, because humans still had the power to hesitate. AI pushes in the other direction: more speed, more opacity, more entanglement, more plausible-sounding nonsense delivered faster than institutions can process it.

The scariest thing is not that machines become too powerful. It’s that humans use an old nuclear story to excuse not thinking.

Europe can do better. We can build a model that is strategically serious without becoming intoxicated by automation. One that keeps meaningful human command. One that treats democratic accountability as a security feature, not a bureaucratic burden. One that coordinates at EU level because fragmented national responses are too slow for networked crises. One that understands software is not a warhead and should never be mythologized like one.

If our century gets its own Petrov moment — and I’d bet good Parmigiano that it will — I know what I don’t want. I don’t want it outsourced to a model. I don’t want it hidden inside a contractor dashboard. I don’t want it wrapped in doctrine written by people who confuse speed with wisdom.

I want the last safety mechanism to be political. Human. Accountable. Slightly slower than the machine, maybe. Good. Let it be slower.

Sometimes the most advanced thing a civilization can do is refuse to automate the stupid part.
