Geoffrey Hinton Warns About AI Risks in Europe

Geoffrey Hinton warns about AI existential risks, but the real issue is power, governance, and whether Europe can shape AI on its own terms.

I’ve lost count of how many times some VC-adjacent guy has told me Europe is “regulating itself out of AI” while waving at ChatGPT like it’s the iPhone in 2007. Usually this happens over bad conference coffee or, somehow worse, natural wine. Same script every time: relax, let builders build, the market will sort it out, and if a few democratic institutions get flattened on the way, well, that’s just disruption, baby.

Then Geoffrey Hinton shows up and basically says: assolutamente no.

And when Geoffrey Hinton warns about AI existential risks, I don’t hear sci-fi melodrama. I hear one of the people who helped build this whole thing saying, very calmly, that the adults might not be in charge anymore. That should land harder than it does.

Because Hinton’s warning isn’t just “AI might kill us one day,” which is the version people meme into oblivion. It’s that we’re still treating AI like a market-share race when it’s obviously a power problem. A state-capacity problem. A governance problem. A who-gets-to-decide-the-rules problem.

That’s the real argument.

The people mocking Europe for moving “too slowly” on AI are basically asking us to trust the same market logic Hinton calls magical thinking. And if existential risk is even a 5% problem, handing the future to Big Tech’s quarterly incentives is not bold. It’s insane. Just with better branding.

When Geoffrey Hinton warns about AI existential risks, maybe stop acting chill

Hinton matters for the same reason you listen when the engineer who designed the bridge starts asking weirdly specific questions about cracks. He’s not some professional doomer in a black turtleneck doing the podcast circuit. He’s one of the foundational figures behind modern neural networks. In 2024, he won the Nobel Prize in Physics with John Hopfield for work that helped make this AI boom possible.

That detail matters because it kills the laziest dismissal right away. This is not anti-tech panic from someone who never built anything. This is the builder telling you the machine might not be under control. Frankly, I trust that a lot more than I trust another founder posting about “abundance” from a house in Atherton with seven sinks and no self-awareness.

In the Pavia Innovation Week interview, Hinton said he left Google so he could speak more freely about where AI is heading. That should have been a bigger story. If one of the most respected people in the field feels he needs distance from Google to talk honestly, maybe the incentive structure is the story.

He’s also got that dry, slightly chaotic professor humor that makes the whole thing more unsettling, not less. In the same interview, he joked about the Bayesian probability that his Nobel was just a dream and said he hoped “all that nonsense about Trump” was part of the dream too. Which, honestly, same.

Then he gets serious. He says he’s “particularly concerned” that AI could become smarter than us and “take control from humans.” Not metaphorically. Not in the TED Talk sense where your inbox owns your life. He means actual loss of human control.

I’m not saying every sentence from every AI pioneer should instantly become law. Dio mio, tech already has enough prophets. But when someone with Hinton’s track record says political systems are not doing a good job addressing these risks, the correct response is not “lol Europe is overreacting.”

The correct response is: maybe stop winging it.

Hinton isn’t just warning about AI. He’s warning about Silicon Valley religion.

This is the part people keep missing. Hinton’s sharpest point isn’t really about killer robots. It’s about incentives. It’s about the weird theology, especially in Silicon Valley, that if you let the market run fast enough, social good will emerge like divine grace from a cap table.

In that Pavia interview, Hinton calls the idea of “natural market development” “a form of magical thinking.” Brutal. Perfect. Not misguided. Not incomplete. Magical. As in: this is not economics, this is astrology with a seed round.

He follows it with an even better line: “The rules are not the brake; they are the steering wheel.”

That should be taped to every policymaker’s laptop in Brussels. And Berlin. And Paris. And honestly Sacramento too, because California loves acting like governance is a side quest.

The whole “regulation versus innovation” framing has always been fake. Markets don’t float in the sky like blessed little angels. They sit inside rules, liability, procurement, geopolitics, labor law, energy constraints, and power. Always power.

As a founder, I’m not anti-speed. I build things. I like ambition. I’ve done the late nights, the product spirals, the runway math, the espresso-fueled delusion that one more feature will solve everything. Speed matters. But “let the market figure it out” is usually what people say when they want upside without liability.

We already ran that experiment with social media. It gave us engagement-maximizing systems that helped supercharge polarization, self-harm, brain rot, and a generation of people whose attention span now lasts roughly one TikTok and a half. And now we’re supposed to do the same thing again, but with systems that are much more capable?

No grazie.

That’s why Hinton’s warnings about AI risk shouldn’t be filed under vague “future of work” panels with dramatic lighting and a moderator named Ethan. This is a governance fight. A power fight. If frontier AI companies are rewarded for speed, scale, lock-in, and investor storytelling, then of course safety becomes a PR layer unless someone with actual authority forces the issue.

My nonna would hate this comparison, but it works: if you let a very talented child do whatever he wants because he’s gifted, you don’t get genius. You get chaos. Sometimes with a podcast.

Europe’s supposed weakness might actually be its advantage

This is where I get annoyingly European, so bear with me.

Europe’s instinct to govern first is suddenly looking a lot less like bureaucratic neurosis and a lot more like strategic maturity. If the credible risk list includes autonomous weapons, cyberattacks, mass surveillance, fake videos corroding democracy, and mass unemployment, then democratic institutions matter more, not less.

And Hinton is pretty clear here. In the same interview, he lays out three categories of AI risk: malicious use by bad actors, accidental harmful side effects driven by profit incentives, and AI becoming more intelligent than humans and taking control. That’s already more useful than half the AI discourse online, which keeps bouncing between “this is overhyped” and “AGI next Tuesday.”

Europe, at its best, understands that technology is political. Ursula von der Leyen said at Davos on January 16, 2024, that “AI needs the trust of people and has to be safe.” People rolled their eyes because it wasn’t drenched in techno-euphoria. But trust is not some soft side quest. In advanced economies, trust is infrastructure.

Same with Thierry Breton after the political deal on the AI Act in December 2023, when he said Europe had become the first continent to set clear rules for AI. Good. Sincerely, good. The EU AI Act matters because somebody had to say that systems affecting rights, safety, and democracy do not get to hide behind “we’re just a platform” forever.

But I’m also not in the camp that thinks regulation alone makes Europe virtuous. That’s where I break with some of my fellow Euro-romantics. I love Brussels. I love the European project. I love the idea of Europe acting like a civilization with agency instead of a very elegant museum with competition law. But if all we do is write rules for models trained elsewhere, hosted elsewhere, financed elsewhere, and aligned elsewhere, then we are not leading.

We are supervising dependency.

That is not sovereignty. That is outsourced destiny.

Europe should not settle for being the compliance department for American AI. We need EU-level compute, safety labs, industrial policy, public procurement that actually backs European firms, and yes, homegrown champions that don’t immediately sell themselves to a US cloud giant the second training costs get spicy.

Because if Hinton is right, this is not just a consumer-protection issue.

It’s a power issue.

Safety without European power is just moral theater

Here’s the uncomfortable bit nobody likes to say out loud: if Europe only writes rules but doesn’t build frontier AI capacity, we stay dependent on US and Chinese systems for infrastructure, models, and eventually even safety standards.

That’s not some abstract sovereignty cosplay. That’s a practical problem.

You cannot outsource the development tempo, the compute stack, the research agenda, and the commercial incentives — then pretend you’re fully in charge because you wrote a nice compliance framework in Brussels. If the most powerful systems are trained on someone else’s chips, someone else’s cloud, someone else’s capital, and someone else’s political assumptions, your room to maneuver is much smaller than you think.

Hinton says “very little research” is being done on preventing AI from taking control, despite humanity’s future depending on it. That line should embarrass governments, full stop. We’re spending absurd sums racing to deploy, and comparatively little on the one question that matters if his existential-risk warning is even partially right: how do we keep advanced systems aligned with human interests when every commercial incentive says ship first?

This is why I’m aggressively pro-European on AI. Not just pro-regulation. Pro-capacity. Pro-scale. Pro-federalism. I want pan-European compute clusters, coordinated research funding, easier talent mobility across the Union, public-private safety labs, stronger cyber and defense coordination, and procurement rules that stop treating European startups like charming interns while handing the real contracts to non-European incumbents.

And yes, the Commission is starting to get it. Ursula von der Leyen has pushed the idea of AI factories to give startups and industry access to Europe’s supercomputers. Good. Finally. Compute is policy. The people with access to training infrastructure don’t just buy the future. They shape it.

Mario Draghi has been making the broader case too: Europe needs common investment and industrial scale if it wants to close the innovation gap. He’s right. Painfully right. You cannot sermon your way into technological relevance. You need money, institutions, speed, and coordination. The federalist answer is not another slide deck about competitiveness. It’s shared capacity at continental scale.

And I’ll admit something slightly vulnerable here. As a European founder living in America, I’ve absolutely felt the gravitational pull of just giving in to the US logic. It’s easier. The capital is deeper. The market is louder. The myth is cleaner. There are days when Europe feels like twenty-seven family WhatsApp groups trying to agree on where to eat.

But then I look at what’s actually at stake with AI, and I come back to the same conclusion every time: if this technology can reshape labor markets, military power, democratic legitimacy, and the structure of knowledge itself, then dependency is not pragmatic.

It’s reckless.

My unpopular prediction: the winners won’t be the fastest

The usual tech story worships speed. Ship faster. Scale faster. Raise faster. Break things, then hire policy people later to explain why the broken things are actually a sign of progress. That whole script is getting old.

Hinton offers a much better metaphor. He says the only system he knows where less intelligent beings control more intelligent ones is mother and child. The point isn’t domination. It’s alignment. We need advanced AI systems to care more about our needs than their own. That’s a very different framing from the fantasy that superintelligence will stay obedient because somebody wrote a terms-of-service page and a cheerful safety blog post.

He also warns that AI may create mass unemployment within the next few years. Not eventually. Not for our grandchildren to discuss at some summit in Geneva while everyone pretends to be shocked. Soon. And if that happens, then this stops being just a technical safety question. It becomes a democratic resilience question. Can institutions absorb the shock? Can governments redistribute gains? Can societies still make legitimate decisions under pressure?

So here’s my unfashionable prediction: the winners in AI won’t be the fastest.

They’ll be the ones who can still say no.

No to unsafe deployment. No to incentive structures that privatize upside and socialize catastrophe. No to permanent dependency on systems built elsewhere under rules nobody voted for.

When Geoffrey Hinton warns about AI existential risks, AI policy stops being a niche debate for engineers, ethicists, and men who own too many navy blazers. It becomes a sovereignty test.

Europe has to decide whether it wants to be a customer, a cop, or a contender. I know my answer. I want a Europe that can build, govern, and refuse. A Europe with enough institutional confidence to set rules and enough industrial muscle to matter.

Because if we only regulate systems built elsewhere, we are not governing the future.

We’re just asking for a slightly nicer seat in the waiting room.