The EU Parliament adopted its position on AI Act amendments on March 26, 2026, setting concrete and binding deadlines. High-risk AI compliance is required by December 2027 — and many SMBs are already in scope without realising it.
Three Deadlines. A New Rulebook. And Most Companies Don't Know They're In Scope.
On March 26, 2026, the European Parliament formally adopted its position on AI Act amendments — the opening move in trilogue negotiations with the Council. A first compromise could land as early as April 28, 2026. What follows isn't bureaucratic paperwork. It's a binding timeline with fines, documentation obligations, and — for many businesses — real liability exposure.
The so-called Digital Omnibus is the vehicle for these changes: a legislative package that bundles several digital regulations under one revision, sharpening the AI Act's timeline in the process. What's changing affects not just large corporations. It affects every business that deploys AI systems — including as a customer of a SaaS product.
The Three Dates That Matter
The AI Act works with risk categories. Each has its own timeline — and the differences are substantial.
November 2, 2026 — Watermarking obligation. Anyone producing or publishing AI-generated content — audio, images, video, text — must label it with machine-readable markers from that date onward. Marketing teams, agencies, publishers: this is the nearest deadline, and it leaves the least time to prepare.
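The regulation leaves the concrete marker format to technical standards (C2PA-style content credentials are one emerging candidate), but the core requirement of attaching a machine-readable provenance marker to generated output can be sketched in a few lines. The `ai_provenance` field names below are illustrative assumptions, not an official schema:

```python
import json
from datetime import datetime, timezone

def label_generated_content(payload: str, model_id: str) -> str:
    """Wrap AI-generated text in a machine-readable provenance envelope.

    Field names are illustrative: the AI Act requires the marking to be
    machine-readable but leaves the concrete format to technical standards.
    """
    record = {
        "content": payload,
        "ai_provenance": {
            "generated_by_ai": True,   # the mandatory disclosure itself
            "model_id": model_id,      # which system produced the content
            "labelled_at": datetime.now(timezone.utc).isoformat(),
        },
    }
    return json.dumps(record)

# A downstream parser can now detect the marker without human inspection:
envelope = json.loads(label_generated_content("Quarterly outlook...", "gen-model-x"))
assert envelope["ai_provenance"]["generated_by_ai"] is True
```

The point of the sketch is the workflow, not the format: whatever standard ultimately applies, the label must travel with the content and be detectable by software, not just visible to a human reader.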
December 2, 2027 — High-risk AI. Systems deployed in explicitly listed high-risk categories must be fully compliant by this date. That covers AI used in HR — candidate screening, performance evaluation, termination — credit scoring, educational assessments, and anywhere algorithms prepare or make decisions with material consequences for individuals.
August 2, 2028 — AI in regulated products. AI systems embedded in products already governed by EU sectoral legislation — medical devices, machinery, vehicles — have an extended deadline. Both AI Act requirements and sector-specific product regulations must be met simultaneously, which significantly raises complexity.
High-Risk AI: Who's Affected and Who Doesn't Know It Yet
The most common misconception in SMBs: "We don't build AI, so the AI Act doesn't apply to us." The law sees it differently. It distinguishes between providers and deployers — and sets obligations for both.
A deployer is any company that uses an AI system in a professional context. If you use recruiting software that automatically scores applications; if you rely on AI assistance for credit decisions on customers or suppliers; if you assess employee performance with algorithmic support — you are a deployer and potentially subject to high-risk rules.
The complication: many of these capabilities are embedded as features in standard SaaS solutions, without the vendor explicitly labelling them as "AI systems under the AI Act." The obligation to classify the risk sits with the deploying company, regardless.
A Fragmented Europe — Why the Deadlines Apply Anyway
Anyone hoping that patchy enforcement infrastructure will buy more time should look at the numbers plainly. According to the EU Parliament Think Tank, as of early 2026 only 8 of 27 EU member states had designated their national contact points for AI Act oversight — even though the deadline for doing so was August 2, 2025.
That might sound like an opportunity to wait. It isn't. The AI Act applies as a regulation directly in every member state. Missing national enforcement bodies don't change the legal effect — they only delay oversight capacity. Companies that wait for their country's designated authority to be fully operational will find themselves working backward under time pressure.
The Critical Technical Requirement: Human Oversight
High-risk AI under the AI Act doesn't just require documentation and risk assessments. It requires traceable human control points. Automated decisions must be reviewable, correctable, and stoppable by humans — and this process must be auditable.
For many businesses, this is the structurally demanding part of compliance. Adding an "approve" button before an automated output doesn't satisfy the requirement. The human must genuinely be in a position to understand the AI's decision and meaningfully challenge it. Human-in-the-loop is not optional ergonomics — it's a regulatory obligation.
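What a traceable control point can look like in code: the sketch below pairs a human decision that can override the AI with an append-only audit trail, and shows the reviewer the AI's rationale rather than just its verdict. All class and field names are hypothetical, a minimal illustration under these assumptions rather than a reference implementation:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ReviewGate:
    """A minimal human-in-the-loop control point with an audit trail."""
    audit_log: list = field(default_factory=list)

    def review(self, case_id: str, ai_decision: str, ai_rationale: str,
               reviewer: str, approved: bool, override: str = "escalated") -> str:
        # The reviewer sees the AI's rationale, not just its verdict,
        # so the decision can be meaningfully challenged, not rubber-stamped.
        final = ai_decision if approved else override
        self.audit_log.append({
            "case_id": case_id,
            "ai_decision": ai_decision,
            "ai_rationale": ai_rationale,
            "reviewer": reviewer,
            "approved": approved,
            "final_decision": final,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })
        return final

gate = ReviewGate()
# A human overrides an automated rejection after reading the rationale:
outcome = gate.review("applicant-17", ai_decision="reject",
                      ai_rationale="employment gap > 12 months",
                      reviewer="hr.lead", approved=False, override="interview")
assert outcome == "interview"
assert gate.audit_log[0]["ai_decision"] == "reject"
```

The design choice worth noting: the log records the AI's output and the human's decision separately, so an auditor can see not only what happened but whether the human control point ever actually changed an outcome.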
Nopex was built for exactly this standard. The platform makes human oversight a design principle: every AI-powered process contains structured control points, all activity is logged, and decisions are presented in a way that's assessable without deep technical knowledge. This isn't a compliance add-on — it's the core architecture.
What to Do Now
Three steps before the first deadline hits in November: First, build an AI inventory — which systems does your company use, and which decisions do they influence? Second, check the risk classification — does the system affect individuals in one of the listed high-risk categories? Third, hold your vendors accountable — demand written confirmation that the system is AI Act-compliant and that technical documentation exists.
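The three steps above lend themselves to a simple structured record. A minimal sketch, assuming an abbreviated category list (the real high-risk enumeration lives in Annex III of the AI Act and is considerably longer):

```python
from dataclasses import dataclass

# Abbreviated, illustrative subset of high-risk areas — not the full Annex III list.
HIGH_RISK_AREAS = {"hr_screening", "credit_scoring", "educational_assessment"}

@dataclass
class AISystemEntry:
    name: str
    vendor: str
    decisions_influenced: str   # step 1: what does the system decide or prepare?
    area: str                   # step 2: which functional area does it touch?
    vendor_confirmation: bool   # step 3: written AI Act compliance confirmation?

    def risk_class(self) -> str:
        # Anything outside the listed areas still needs a manual review,
        # not an automatic all-clear.
        return "high-risk" if self.area in HIGH_RISK_AREAS else "review-needed"

inventory = [
    AISystemEntry("RecruitTool", "SaaS vendor A", "ranks job applications",
                  "hr_screening", vendor_confirmation=False),
    AISystemEntry("ChatWidget", "SaaS vendor B", "answers FAQ queries",
                  "customer_support", vendor_confirmation=True),
]

assert inventory[0].risk_class() == "high-risk"
```

Even a spreadsheet with these five columns beats having no inventory at all — the point is that every AI-touching system in the company appears in exactly one place, with its risk classification and vendor paperwork status visible at a glance.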
Start now and you have roughly 20 months until the most important deadline in December 2027. That's enough time — if you use it.
Book a demo and find out whether your AI usage falls under high-risk rules — and what that means for your operations.
Sources: OneTrust – EU Digital Omnibus & AI Act Timelines 2026 · EU Parliament Think Tank – Enforcement of the AI Act