
The most common objection to acting on the AI Act has been uncertainty. The law passed in 2024. The obligations were on paper. But the timelines were soft, the guidance was incomplete, and the enforcement machinery was not yet assembled. Organisations across Europe have been watching and waiting, safe in the knowledge that “not quite yet” was a defensible position.
That position ended today.
On 26 March 2026, the European Parliament voted 569 to 45 — with 23 abstentions — to adopt its position on the AI Act Omnibus. The vote fixes hard dates. It creates legal certainty where there was ambiguity. And for organisations deploying AI in any significant context, the clock is now running.
What The Great Return Said
The paper identified four structural forces converging to make local-first AI not a trend but an outcome. The fourth force was the EU AI Act — the world’s first binding AI regulation, applying real obligations to real organisations, with real consequences for non-compliance.
But the paper also noted a limiter: implementation ambiguity. Standards were incomplete. Guidance was in draft. Without hard dates, the regulatory force was theoretical. Organisations could rationally defer.
The paper predicted that ambiguity would resolve. It has.

The Dates That Now Exist
The Parliament’s position establishes three application dates and one immediate prohibition.
2 November 2026 — Watermarking. AI systems that generate audio, image, video, or text content must mark that content as AI-generated. This is the first obligation to land, just over seven months from today.
2 December 2027 — High-risk AI systems. This is the central deadline. It covers AI systems used in biometric identification, critical infrastructure, education, employment decisions, access to essential services, law enforcement, justice administration, and border management. If your organisation deploys AI in any of these contexts — directly or through a supplier — this date is yours.
2 August 2028 — Sectoral overlap. Where AI is built into products already regulated under EU sector-specific law — medical devices, radio equipment, toy safety, and others — the AI Act obligations may be less stringent, and this later date applies. The Parliament’s logic: avoid duplicating compliance burdens where sector legislation already provides safeguards.
The prohibition on AI “nudifier” systems, tools that generate non-consensual sexually explicit imagery of identifiable persons, is a separate and immediate ban, not contingent on the dates above.
These are Parliament’s positions, not yet final law. Trilogue with the Council must follow. But Parliament voted 569 to 45. The margin is not ambiguous. The direction is fixed.

What 2 December 2027 Actually Means
Twenty months.
For an organisation that has not yet audited its AI use, twenty months sounds like a comfortable runway. It is not. Consider what compliance for a high-risk AI system actually requires under the Act: a risk management system, technical documentation, data governance measures, transparency obligations to users, human oversight mechanisms, accuracy and robustness records, and registration in the EU’s AI database before deployment.
None of these are checkbox exercises. Data governance alone — documenting what training data was used, where it came from, and what biases may be embedded — takes months for systems already in production. The Act does not grandfather existing systems. If you are deploying a high-risk AI system on 2 December 2027, it must be compliant on that date.
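To make the data-governance item concrete, here is a minimal sketch of what a per-dataset provenance record might capture. The structure and field names are illustrative assumptions, not a schema the Act prescribes; the point is that every field is something an auditor can ask for.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative only: these fields are assumptions, not an Act-mandated schema.
@dataclass
class DatasetRecord:
    name: str                   # internal identifier for the training dataset
    source: str                 # where the data came from
    collected: date             # when it was gathered
    legal_basis: str            # documented basis for processing it
    known_biases: list[str] = field(default_factory=list)  # documented skews
    mitigations: list[str] = field(default_factory=list)   # corrective steps taken

# One record per dataset feeding a high-risk system, kept under version control.
cv_data = DatasetRecord(
    name="cv-corpus-v3",  # hypothetical dataset
    source="internal ATS export, 2019-2024",
    collected=date(2024, 6, 1),
    legal_basis="employee data, basis documented in the processing register",
    known_biases=["under-representation of career-break candidates"],
    mitigations=["re-weighted sampling before fine-tuning"],
)
```

Defining the structure is the easy part; the months of work lie in filling those fields truthfully for systems already in production.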
The organisations that start now have twenty months. The organisations that start at the end of 2026 have twelve. Those that wait for the Council to conclude trilogue — likely late 2026, possibly early 2027 — have less.
Why Local AI Makes This Easier
The paper’s argument was structural: the AI Act creates compliance obligations that are architecturally easier to satisfy when AI runs locally. Today’s vote adds a specific clause that sharpens this.
The Parliament backed a provision allowing service providers to process personal data to detect and correct biases in AI systems — but only when strictly necessary, and with explicit safeguards. The intent is right: bias detection is a compliance requirement, and you cannot detect bias without data. The constraint is the safeguard.
Here is the architectural reality: demonstrating that bias correction was “strictly necessary” and performed “with safeguards” is dramatically simpler when the AI system and its data processing are on hardware you control. When bias correction happens inside a cloud provider’s infrastructure, you are dependent on their logging, their audit trails, and their definitions of “strictly necessary.” When it happens locally, the audit trail is yours.
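As one hedged sketch of what “the audit trail is yours” can mean in practice: an append-only, hash-chained log kept on hardware you control, where each entry records why a bias-correction run was strictly necessary and which safeguards applied. The file name, fields, and chaining scheme are illustrative assumptions, not anything the Act or the Parliament text specifies.

```python
import hashlib
import json
import time

AUDIT_LOG = "bias_correction_audit.jsonl"  # hypothetical local log file

def append_audit_entry(path: str, entry: dict) -> None:
    """Append a tamper-evident record: each entry commits to the previous one's hash."""
    prev_hash = "0" * 64  # genesis value for a new chain
    try:
        with open(path) as f:
            *_, last = f  # last line of the existing log
        prev_hash = json.loads(last)["hash"]
    except (FileNotFoundError, ValueError):
        pass  # missing or empty log: start a fresh chain
    record = {**entry, "ts": time.time(), "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    with open(path, "a") as f:
        f.write(json.dumps(record, sort_keys=True) + "\n")

# Record why a bias-correction run was strictly necessary, on hardware you control.
append_audit_entry(AUDIT_LOG, {
    "system": "cv-screening-v3",
    "action": "bias_correction",
    "necessity": "selection-rate disparity exceeded the internal threshold",
    "safeguards": ["pseudonymised inputs", "access limited to the compliance team"],
})
```

Because each entry commits to the hash of the one before it, the chain is tamper-evident. The point is not this particular scheme; it is that the log lives where you can produce and verify it.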
The same logic applies across the Act’s requirements. Technical documentation is easier to maintain when you control the system. Data governance records are yours, not held by a third party under a service agreement. Human oversight mechanisms are implemented in your environment, not enabled or disabled at a provider’s discretion.
The compliance architecture that the Act demands maps more cleanly onto local AI than onto cloud AI. This is not ideology. It is the structure of the obligations.

The SMC Extension: More Organisations, Same Obligations
One amendment in today’s vote deserves specific attention: the extension of SME support measures to small mid-cap enterprises.
The original AI Act gave proportional treatment to small and medium-sized enterprises — simplified documentation, access to regulatory sandboxes, lower administrative burden. The logic was that an eight-person startup could not meet the same compliance overhead as a multinational.
Parliament has now extended this to small mid-caps — companies that have outgrown strict SME status but are not yet large enterprises. This is a significant clarification for a specific tier of organisation: the scaling company that has recently passed the SME threshold and was facing abrupt full-obligation compliance.
The substance of the obligations does not change. The support framework extends further up the size curve: more organisations qualify for proportional treatment, and more have access to simplified pathways.
What This Means for the Running Scorecard
Update #6 documented all five Big Tech counter-moves from Chapter 9. The scorecard stood at 8 of 30 predictions assessed, all in the right direction, five moving faster than projected.
Today’s vote closes an open item: the paper’s identification of the AI Act as a structural force was explicitly contingent on implementation ambiguity resolving. It has resolved. The fourth force is no longer theoretical. It has a date.
The paper also noted that the AI Act would accelerate the local AI shift by making cloud-dependent deployments more legally complicated. Today’s bias correction clause — allowed, but only with strict safeguards that are easier to demonstrate locally — is the first specific provision that makes that argument concrete rather than inferential.
Scorecard update: 9 of 30 predictions assessed. 9 in the right direction. The fourth structural force moves from “theoretical” to “active.”
Three Steps for December 2027
The paper ended with action frameworks. So will this update.
Step one: audit your AI footprint. Map every AI system your organisation uses or deploys. Identify which fall into the high-risk categories — biometrics, employment decisions, essential services, access decisions. This list is almost always longer than the initial estimate. Most organisations discover AI embedded in HR, in access control, and in customer-facing decisions that was never purchased as “AI.”
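A minimal sketch of what such an inventory might look like, with hypothetical system and vendor names and a shorthand tag set standing in for the high-risk categories the Act names:

```python
from dataclasses import dataclass, field

# Shorthand tags for the high-risk categories listed above; labels are ours, not legal text.
HIGH_RISK = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "justice", "border",
}

@dataclass
class AISystem:
    name: str
    owner: str                 # the accountable internal team, not the vendor
    vendor: str | None         # None if built in-house
    categories: set[str] = field(default_factory=set)  # Act categories it touches

    @property
    def high_risk(self) -> bool:
        return bool(self.categories & HIGH_RISK)

# Hypothetical entries; real inventories are invariably longer than the first guess.
inventory = [
    AISystem("cv-screening-v3", "HR", "TalentCloud Ltd.", {"employment"}),
    AISystem("door-badge-face-match", "Facilities", None, {"biometrics"}),
    AISystem("marketing-copy-assistant", "Marketing", "CloudLLM Ltd.", set()),
]

for system in inventory:
    if system.high_risk:
        print(system.name, "faces the 2 December 2027 deadline")
```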
Step two: assess the architecture. For each high-risk system, ask: where does the AI run? Where does the training data live? Who controls the audit trail? Who can produce the technical documentation the Act requires? If the answers involve a third-party cloud provider, the compliance path runs through that provider’s cooperation, their documentation standards, and their timelines. Build that dependency into the risk assessment.
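The four questions of step two can be folded into a simple assessment record in which any answer naming someone other than your own organisation becomes a dependency to carry into the risk assessment. Again, the structure and names are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class ArchitectureAssessment:
    system: str
    runs_on: str            # "local" or "cloud": where does the AI run?
    data_location: str      # where does the training data live?
    audit_trail_owner: str  # who controls the audit trail?
    docs_owner: str         # who can produce the Act's technical documentation?

    def third_party_dependencies(self) -> list[str]:
        """Any answer other than 'local'/'us' depends on a provider's
        cooperation, documentation standards, and timelines."""
        answers = {
            "execution": self.runs_on,
            "training data": self.data_location,
            "audit trail": self.audit_trail_owner,
            "documentation": self.docs_owner,
        }
        return [area for area, who in answers.items() if who not in ("local", "us")]

# Hypothetical cloud-deployed system: every answer names the provider.
assessment = ArchitectureAssessment(
    system="cv-screening-v3",
    runs_on="cloud",
    data_location="vendor-held storage",
    audit_trail_owner="TalentCloud Ltd.",
    docs_owner="TalentCloud Ltd.",
)
print(assessment.third_party_dependencies())
# -> ['execution', 'training data', 'audit trail', 'documentation']
```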
Step three: start with documentation now. The Act does not require you to have solved compliance by today. It requires you to demonstrate, on 2 December 2027, that you have. That demonstration is built on records that accumulate over time — data governance logs, risk assessments, oversight records. The organisations that start building those records today will have twenty months of evidence. Those that start in November 2027 will have weeks.
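Continuing the hypothetical log from the earlier sketch, a few lines can report how much evidence a log actually contains, which is the difference between twenty months of records and a few weeks of them:

```python
import json
from datetime import datetime, timezone

def evidence_coverage(path: str) -> str:
    """Report the span a compliance log covers: accumulation over time,
    not a snapshot assembled the week before the deadline."""
    with open(path) as f:
        stamps = [json.loads(line)["ts"] for line in f]
    if not stamps:
        return "no evidence on file"
    first = datetime.fromtimestamp(min(stamps), tz=timezone.utc)
    last = datetime.fromtimestamp(max(stamps), tz=timezone.utc)
    months = (last.year - first.year) * 12 + (last.month - first.month)
    return f"{len(stamps)} records spanning ~{months} months ({first:%Y-%m} to {last:%Y-%m})"

print(evidence_coverage("bias_correction_audit.jsonl"))  # same hypothetical file as above
```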
The clock started today. The date is fixed. The organisations that act now will find the deadline manageable. Those that wait for the Council to conclude, for the Commission to publish all guidance, and for enforcement to begin, will find themselves running.
The paper predicted this moment. It arrived on schedule.