EU States Brace for April 28 AI Act Power Clash

As EU countries harden positions before the April 28 AI Act showdown, the real fight is over enforcement, authority, and market unity.

Europe is about to do the most European thing imaginable: call something “harmonised” and then spend weeks arguing over who gets to interpret it. That’s the real story behind why EU countries are hardening positions before the April 28 AI Act showdown. Not killer robots. Not the usual “Europe regulates, America builds” slop people repost on X like it’s still 2021.

The fight is much more boring on paper and much more important in practice. It’s about whether the AI Act is actually one rulebook for one market, or whether it turns into the usual patchwork with a fancy EU label on top. If you’re a founder building in Europe, that difference is everything.

I’m Italian, which means I’m emotionally attached to the European project in a way that is probably unhealthy. I want the EU to work. I want it to stop acting like a group project where everyone wants credit and nobody wants final responsibility. Because if I’m selling one AI product across Europe, I do not want one law, six regulators, twelve interpretations, and a compliance deck longer than The Godfather. Even my nonna would have said basta.

The real AI Act problem: who’s actually in charge?

According to MLex on April 23, 2026, the interaction between the AI Act and sector-specific laws is the main unresolved issue ahead of the April 28 negotiations. That sounds dry. It is not dry. It’s the whole game.

The AI Act is supposed to be horizontal. One baseline framework across the EU. The Commission’s own language calls it a comprehensive legal framework with harmonised rules. Harmonised is the key word here. Not “harmonised unless your national banking supervisor feels creative.”

And to be fair, member states are not openly trying to shove AI obligations into every sector-specific law. MLex reports they largely oppose that. Good. They should. Nobody needs an AI mini-constitution for banking, another for health, another for transport, and a fourth one written by telecom people who think every problem can be solved with a consultation.

But Europe has a special talent for avoiding fragmentation in theory while recreating it in practice. You don’t need to rewrite every sectoral law to get a fragmented market. You just need overlapping authorities, fuzzy competence, and different national interpretations of who leads when AI touches a regulated industry.

That’s how you end up with the same product being treated one way in Milan, another in Amsterdam, and a third in Barcelona. Same AI logic. Same underlying law. Different supervisory vibes.

And yes, “supervisory vibes” is not a legal term. It should be.

A founder I met in Brussels recently, building compliance tools for hospitals, said something that stuck with me: “I can price strict rules. I can’t price contradictory ones.” Exactly. Founders can adapt to tough regulation. What we can’t do is build around ambiguity that changes by country, sector, or whichever authority had the strongest coffee that morning.

That’s the real risk here. If Europe says the AI Act creates one market, but implementation still depends on a maze of national and sectoral interpretations, then it’s not really one market. It’s branding.

Simplification is a nice word. It hides a lot of nonsense.

Everybody in Brussels loves the word simplification. It sounds lovely. Clean. Reasonable. Like a kitchen designed by a Scandinavian minimalist who has never fried anything in olive oil. But in EU policymaking, simplification often means three different things, and pretending they’re the same is how bad deals happen.

The Council of the EU’s March 13, 2026 position backs fixed delayed application dates, clearer limits on AI Office competence, and restored registration duties. Those are not tiny edits. They shape enforcement, timing, and visibility. In other words: who moves, when, and under whose authority.

Some of that is fair. Companies need certainty. Delayed application dates can be useful if they are fixed and predictable. Clarity around who does what is also good. I have spent enough time dealing with European bureaucracy to know there is a unique form of psychic damage that comes from trying to understand which portal, authority, or PDF is the “real” one. Once in Lisbon I lost half a day to a regulatory questionnaire that asked the same question four times in slightly different ways, like it was trying to catch me in a lie.

So no, I’m not against simplification. I’m against fake simplification.

Because “easier to comply with” is not the same as “harder to enforce.” The first helps startups. The second usually helps big incumbents with legal teams large enough to field a five-a-side football match.

That’s why the legal debate around non-regression matters, even if the phrase itself sounds like something invented to punish normal people. The point is simple: if you remove too much accountability in the name of efficiency, you haven’t streamlined the law. You’ve hollowed it out.

And the Commission has been very clear about what the AI Act is supposed to do. Trustworthy AI. Human-centric AI. Not just “AI, but with nicer paperwork.”

My hot take is not even that hot. When governments say simplification, some mean real clarity. Some mean delay. Some mean dilution. Europe should stop pretending that’s one coalition.

If you want fixed dates because companies need certainty, bene. If you want to weaken central coordination and call it pragmatism, I’m going to roll my eyes so hard I can see my childhood in Italy.

The deepfake fight is smaller than it looks, and more revealing

One of the unresolved issues ahead of the final deal, according to MLex on April 20, 2026, is a ban on sexual deepfakes. The Council’s March 13 position supports added bans on non-consensual sexual deepfake content and child sexual abuse content.

Good. That should not be controversial.

This is one of those areas where Europe should be direct, not performatively nuanced. If the EU cannot draw a bright line around exploitative synthetic sexual content, then all the talk about values starts sounding like one of those glossy brochures you find in a hotel lobby and immediately ignore.

There’s a certain type of internet-brained guy who hears “ban” and immediately starts yelling about censorship, liberty, slippery slopes, civilization collapsing, whatever. I understand the instinct in some contexts. Not here. There is no noble innovation principle being defended by allowing non-consensual sexual deepfakes. There is just harm, made cheaper and faster by generative tools.

What’s interesting is that this issue seems less contentious than the governance stuff. MLex’s April 23 reporting suggests there’s room for compromise here. Which tells you something important about Europe: it can agree on obvious harms faster than it can agree on institutional power.

That’s revealing. Also a little depressing.

Because banning exploitative deepfakes is necessary, but it’s the easy part. The hard part is deciding who enforces what when the issue is less emotionally obvious and more structurally messy. Politicians are happy to sound tough on harms voters understand instantly. Try explaining AI Office competence versus national authority competence over dinner in Rome and watch everyone suddenly become fascinated by the bread basket.

Still, on this one, I’ll give member states credit. Some lines should be bright, boring, and non-negotiable.

The AI Office question is the whole question

Now we get to the part that sounds procedural and is actually existential: governance.

MLex on April 20, 2026 lists AI Office powers as one of the five unresolved disputes. The Council’s March 13 mandate explicitly asks for clearer limits on AI Office competence. Again, this is not some side argument for institutional nerds. This is the control panel.

The EPRS April 2026 briefing lays out an enforcement architecture with the AI Office, national authorities, the European AI Board, a scientific panel, and an advisory forum. Read that as a founder and tell me your blood pressure stayed normal.

Then add the warning from CERRE that member states may choose different supervisory models and different market-surveillance approaches. Same law. Different enforcement experiences. Very elegant. Very continental. Very annoying.

I’m going to say this plainly because people keep dressing it up in policy language: if Europe wants a real single market for AI, it cannot build an enforcement model where every capital keeps one hand on the steering wheel. Coordination without authority is just a group chat. And I say that as someone who has spent years in international WhatsApp groups where nobody decides anything until one German sends a spreadsheet and one Dutch person says “let’s be practical.”

There’s a deeper reflex underneath this. Some governments hear “EU-level competence” and think “loss of control.” I hear it and think “finally, maybe the market will function like a market.”

That’s the federalist split. I don’t want Brussels to centralize everything because I have some weird aesthetic love for institutions. I want the EU level to have enough authority to stop implementation from turning into 27 local interpretations plus a keynote about competitiveness.

Because if the AI Office ends up with prestige but no real teeth, Europe will have built the institutional equivalent of a beautifully branded airport with no planes.

What startups actually fear is not strict regulation. It’s uncertainty.

This is the part policymakers still underestimate. Founders can work with clear rules. We do it all the time. Taxes. Labor law. Payments. GDPR. Product safety. Food labeling, if you hate yourself enough to work in food tech. The rules can be annoying, expensive, even rigid, and companies will still adapt if the system is legible.

What kills momentum is uncertainty.

That’s why the MLex April 20, 2026 report matters. Alongside sectoral interplay and AI Office powers, it flags disputes over compliance grace periods and deadlines for national sandboxes. Which sounds administrative until you’re the one trying to ship a product, raise a round, and explain to investors why your roadmap now depends on whether three authorities agree on what “transition” means.

I’ve had this conversation with American investors more than once, and I already know how it goes. The second they sense ambiguity in Europe, they don’t say, “wow, what a nuanced governance model.” They say, “call me when there’s clarity.” Then they wire money to a Delaware C-corp doing something 20% worse and 80% easier to understand.

Infuriating. Also completely rational.

The frustrating part is that the Commission is not blind to this. It has support mechanisms around implementation: the AI Pact, the AI Act Service Desk, AI Factories, the broader innovation package. Good. That’s the right instinct. Help companies understand the rules. Build infrastructure. Make adoption easier. Don’t just publish law and disappear into a cloud of acronyms.

The Commission also keeps stressing that the AI Act is risk-based and that most AI systems pose limited to no risk. That matters. A lot. Not every startup is building a sci-fi villain. Most are building things like document processing, fraud detection, scheduling tools, diagnostics support, workflow automation. Useful, mostly boring software. Europe should make it easy for those companies to understand where they stand and scale across one home market.

But if member states harden positions in ways that weaken harmonised implementation, all those support tools start to feel cosmetic. Nice signage above a cracked foundation.

That contradiction is the whole problem. Europe says it wants AI champions, AI factories, AI adoption, AI competitiveness. Great. Then don’t make implementation more nationally contingent right when companies need certainty most.

Strict rules are survivable. Ambiguous rules are poison.

The real pro-European position is better Brussels, not less Brussels

Here’s the political script I’m tired of: national control gets framed as practical, while EU coordination gets framed as ideological. In AI, that’s backwards.

Too much national discretion doesn’t make Europe more competitive. It makes Europe easier to ignore. By US hyperscalers. By Chinese giants. By incumbents who can afford complexity because they already have compliance teams with terrifyingly good posture and an entire floor of lawyers billing in six-minute increments.

The Commission keeps presenting the AI Act as part of a broader strategy: the AI Continent Action Plan, the AI Innovation Package, AI Factories, the whole push to make Europe not just a regulator but an actual place where AI gets built and deployed. That framing is right. Regulation alone won’t do it. Industrial policy alone won’t do it either. You need both. And you need them at European scale, because no individual member state has the size to go toe-to-toe with OpenAI, Google, Microsoft, Anthropic, ByteDance, or the next monster that shows up with a $20 billion compute budget and a cinematic launch video.

The Council’s own AI page places this in the EU competitiveness agenda too. Fine. Then competitiveness has to mean more than flattering startups in speeches. It has to mean building a market where a company can launch across the Union without discovering that the real product is regulatory interpretation.

So let me be direct. If member states keep trimming EU-level coordination every time implementation becomes real, they do not get to complain later that Europe lacks scale. You cannot sabotage harmonisation in spring and cry about fragmentation in autumn. Pick a lane.

And this is what makes the April 28 showdown interesting. Several issues reportedly look compromise-ready. But governance and sectoral interplay are still stuck. Which means the real argument is not whether a deal is possible. It’s what kind of Europe that deal assumes.

I’m pro-EU enough to say this with love: Europe’s problem is often not that Brussels wants too much. It’s that the Union still doesn’t trust itself enough to finish what it starts. We dream in 450 million consumers and implement in 27 administrative cultures.

That’s not sovereignty. That’s hesitation wearing a flag pin.

A real pro-European position on AI governance is not “let Brussels do everything.” It’s “give the EU level enough authority to make harmonisation real, and make the rest brutally clear.” Better Brussels. Cleaner lines. Sharper mandates. Faster interpretations. Less institutional cosplay.

Because when AI becomes infrastructure — and it will — the question is very simple: do we want Europe governed like a union or managed like a patchwork?

If EU countries harden positions before the April 28 showdown only to produce a deal that looks simpler on paper but weaker in coordination, Europe will have done the most tragically European thing possible: win the law and lose the market.

And honestly? That would be such a waste.

Europe has the talent. The researchers. The industrial base. The universities. The public institutions. The market. Even the values, if we can stop turning them into brochure copy. What we keep lacking is nerve when the boring implementation details arrive.

So here’s my challenge to the people in the room on April 28: stop treating federalism like an embarrassing side effect of policy. Own it. If the AI Act is really a harmonised framework, then harmonise it with some spine.

Otherwise it’s the same old EU special. One market in speeches. Twenty-seven mini-Europes in practice.

And ragazzi, I’m done pretending that’s good enough.
