Industry

EU AI Act Enforcement: Only 8 of 27 Countries Are Ready — What Your Business Needs to Know

March 25, 2026 · 7 min read
Philip Blatter
Founder & CEO

The EU AI Act has been in force since August 2024 — but only 8 of 27 member states have set up their enforcement infrastructure. For businesses using AI in HR, credit, or healthcare, that gap doesn't mean the rules don't apply. Here's what's actually happening and what to do about it.

The Law Is Already in Force. The Enforcement Isn't.

When the EU AI Act entered into force in August 2024, it set off a compliance countdown for businesses across the bloc. Somewhere along the way, a different clock also started running — the one measuring whether EU member states themselves would be ready to enforce it. That clock is running well behind schedule.

By August 2, 2025, all 27 EU member states were required to designate their national competent authorities and single contact points for AI Act enforcement. As of March 2026, according to an analysis by the EU Parliament Think Tank, only eight had done so.

That's a significant gap — and understanding what it means (and what it doesn't mean) matters for any business using AI in a regulated context.


What the AI Act Actually Does

The EU AI Act is built around a risk-tiered framework. AI systems are classified into four categories, and the higher the risk, the stricter the obligations.

At the top sits a small category of outright prohibited systems — AI that manipulates people subliminally, exploits vulnerabilities, or enables social scoring by public authorities. Below that is the high-risk tier, where most businesses will feel the regulation directly. Then there are systems with transparency obligations, such as chatbots that must identify themselves as AI. The vast majority of AI applications fall into the lowest tier and require nothing beyond ordinary product safety standards.

One distinction that catches many companies off guard: the AI Act applies not just to the companies that build AI systems, but to those that deploy them. If your business uses a third-party AI tool in a high-risk context, you take on compliance obligations — documentation, risk assessment, human oversight, and transparency toward affected individuals — even if you didn't write a line of the underlying code.

There's also a separate regime for what the Act calls general-purpose AI models — the large language models powering tools like ChatGPT or Gemini. Those are supervised exclusively by the European Commission through the AI Office, not by national authorities.

Which Businesses Are Actually in Scope

The high-risk categories under the AI Act are broader than most business leaders assume. They're worth knowing in concrete terms:

Hiring and HR: Any AI system used to screen applications, rank candidates, or inform promotion decisions is classified as high-risk. This includes automated applicant tracking tools, AI-powered interview platforms, and any system generating structured outputs that influence hiring. Many of these arrive as embedded features in HR software — often without explicit disclosure.

Credit and financial services: AI-based credit scoring and insurance risk assessment are explicitly listed as high-risk. If your lending decisions, credit limits, or pricing models have an AI component, you're in scope.

Healthcare: AI systems that support clinical diagnostics — flagging abnormalities in imaging, informing treatment recommendations — fall squarely into the high-risk category.

Legal and judicial proceedings: AI tools used in court decisions, asylum determinations, or law enforcement risk assessments are included.

Biometric identification: Real-time remote biometric identification in public spaces is largely prohibited; other biometric systems face strict requirements.

For most businesses in the DACH region, HR technology is the most immediate practical concern. AI-assisted candidate screening has become common even at the mid-market level, often as a bundled feature in existing SaaS products — and many decision-makers are unaware it's there at all.

What "8 of 27" Actually Means Right Now

The missing enforcement infrastructure doesn't mean the rules don't apply. The AI Act is binding EU law. What it means is that the practical machinery of compliance surveillance — which national body you'd actually hear from, how complaints get routed, when audits get triggered — is still being built in most member states.

The picture became more complicated on March 16, 2026, when the European Parliament's Internal Market and Civil Liberties committees voted to support postponing the activation of certain high-risk AI rules. Under the proposal, compliance requirements for specifically listed high-risk AI systems — including those in biometrics, employment, credit, healthcare, and law enforcement — would be pushed back to December 2, 2027. For AI systems embedded in products already covered by EU sectoral legislation, the proposed date is August 2, 2028. The full Parliament vote is scheduled for March 26, 2026, after which negotiations with the Council begin.

This is not an all-clear signal. It's a proposed delay in response to a real problem: the technical standards needed for conformity assessment aren't yet finalized, and the legislative bodies are buying time to get them right. The underlying legal framework remains in place. Companies that build proper compliance processes now won't regret it if the timeline accelerates again — and regulatory windows have a way of closing faster than expected.

Four Steps to Take Now

Waiting for full enforcement infrastructure to materialize is not a strategy. These four steps make sense regardless of where the final compliance dates land:

Map your AI footprint. Most organizations have incomplete visibility into which AI tools their people are actually using. Many enterprise SaaS products embed AI features by default, without prominent disclosure. Build an inventory: what systems, which teams, what decisions do they influence?
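
There's no prescribed format for such an inventory. As a starting point, here is a minimal sketch of what a per-system record might look like, assuming a small Python-based internal tool; every field name and the example entry are illustrative, not terms defined by the AI Act:

```python
# A minimal sketch of an AI inventory record. All field names and the
# example entry are illustrative assumptions, not AI Act terminology.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str                        # product or feature name
    vendor: str                      # who builds and operates the model
    owning_team: str                 # which team deploys it
    decisions_influenced: list[str] = field(default_factory=list)
    embedded_feature: bool = False   # bundled inside a larger SaaS product?
    disclosed_to_users: bool = False # do affected individuals know AI is involved?

inventory = [
    AISystemRecord(
        name="CV screening module",
        vendor="ExampleHR Inc.",     # hypothetical vendor
        owning_team="Recruiting",
        decisions_influenced=["shortlisting", "interview invitations"],
        embedded_feature=True,
        disclosed_to_users=False,
    ),
]
```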

Classify the risk. For each AI application you identify, the key question is whether it affects decisions with material consequences for individuals. If yes — hiring, credit, health, legal — it likely falls into the high-risk tier and carries additional obligations.
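
An inventory like the one above can be triaged automatically as a rough first pass, before any legal review. A hedged sketch follows, using a simplified and incomplete subset of the Act's Annex III high-risk areas; this is screening logic to prioritize attention, not a legal assessment:

```python
# Rough first-pass triage: flag systems whose influenced decisions touch
# known high-risk domains. The domain list is a simplified, illustrative
# subset of Annex III, not an exhaustive legal mapping.
HIGH_RISK_DOMAINS = {
    "hiring", "promotion", "credit scoring", "insurance pricing",
    "clinical diagnostics", "law enforcement", "biometric identification",
}

def likely_high_risk(decisions_influenced: list[str]) -> bool:
    """True if any influenced decision falls in a known high-risk domain."""
    return any(d in HIGH_RISK_DOMAINS for d in decisions_influenced)

print(likely_high_risk(["hiring"]))        # True: needs a full assessment
print(likely_high_risk(["ad targeting"]))  # False: lower tier, re-check periodically
```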

Build documentation processes now. High-risk AI deployments require technical documentation, risk assessments, and audit trails under the AI Act. Setting these up ahead of enforcement pressure is far easier than retrofitting them under scrutiny.
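
One concrete piece of that documentation is a record of each AI-assisted decision and the human oversight applied to it. Below is a minimal sketch of an append-only audit trail, assuming JSON Lines as the storage format; the Act specifies what must be documented, not a schema, so every field here is an assumption:

```python
# A minimal append-only audit trail for AI-assisted decisions, stored as
# JSON Lines. The schema is an illustrative assumption, not a mandated format.
import json
from datetime import datetime, timezone

def log_decision(path: str, system: str, subject_id: str,
                 ai_output: str, human_reviewer: str, final_decision: str) -> None:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,                  # which AI system produced the output
        "subject_id": subject_id,          # pseudonymized reference, not raw PII
        "ai_output": ai_output,            # what the system recommended
        "human_reviewer": human_reviewer,  # who exercised oversight
        "final_decision": final_decision,  # what was actually decided
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_decision("ai_audit.jsonl", "CV screening module", "cand-4711",
             "rank: 12/80", "j.smith", "invited to interview")
```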

Review your vendor agreements. As a deployer of a third-party high-risk AI system, you need assurances from your vendors: Does the system comply with the AI Act? Is a CE marking in place or planned? What technical documentation exists? Get this in writing while vendors are still in a cooperative posture — that changes once formal compliance deadlines arrive.

Compliance-Ready AI Isn't a Contradiction

Platforms like Nopex are built from the ground up for this regulatory environment — GDPR-native, with documented human-in-the-loop control points and the transparency the AI Act requires from deployers.

The Regulation Is Already Here

Eight countries out of twenty-seven is a low number. But it doesn't let the other nineteen off the hook — it means enforcement will arrive unevenly, at different speeds, in different jurisdictions. Companies operating as though the rules don't apply yet may find themselves making rushed, expensive changes at exactly the wrong moment. Build the right processes now, and tightening regulation becomes a manageable operating condition rather than a crisis you didn't see coming.


Sources: EU Parliament Think Tank: Enforcement of the AI Act, March 2026 | European Parliament: MEPs support postponement of certain rules on artificial intelligence, March 2026

EU AI Act · AI Regulation · Compliance · GDPR · High-Risk AI · Enterprise AI
