
European organisations are choosing local deployment for agentic AI not because it is technically superior, but because the EU’s regulatory framework has structural gaps that make cloud-based autonomous systems legally untenable. Four compliance failures have accumulated since the AI Act entered into force: ambiguous scope classification, missing high-risk guidelines, unresolved liability attribution, and a technical file requirement that autonomous systems cannot structurally satisfy. Together they have created a liability vacuum. Local deployment is filling it by default.
The Scope Problem No One Has Resolved
The EU AI Act does not name agentic AI systems explicitly. Its definition lists example outputs (predictions, content, recommendations, decisions that influence physical or virtual environments) and marks that list as non-exhaustive, a design choice made when the dominant AI paradigm was still the chatbot. Agentic systems, which plan, execute multi-step action chains, invoke external tools, and operate with reduced human involvement, look substantially different from what legislators had in mind.
Greens MEP Sergey Lagodinsky raised this directly with Commissioner Henna Virkkunen last autumn, asking whether agents fall under the AI Act. Virkkunen’s response: it is “likely” that they fall in scope, given the non-exhaustive examples. That response is not binding legal guidance. It is an opinion from a commissioner. Organisations deploying agentic systems in regulated sectors have a compliance obligation that requires more than “likely.”
Some member states have actively resisted new rules on AI agents, citing regulatory fatigue. The Commission convened legal experts through DG JUST to examine the impact of agentic AI on contract formation and liability attribution; the working paper concluded only that the situation was “challenging.” That is not an answer an organisation building a patient-record AI system can act on with confidence.

The Deadline That Keeps Moving
The AI Act set 2 February 2026 as the legal deadline for the Commission to publish guidelines on high-risk AI system classification. The Commission missed that deadline. It then indicated draft guidelines would follow within the same month. That did not happen either. The guidelines are now on a “revised timeline” without a published date. This is the second missed deadline for the same document.
The consequence is specific. Organisations deploying high-risk AI systems have been trying to build compliant technical infrastructure without knowing exactly which systems the regulator considers high-risk. CE marking, conformity assessments, and post-market monitoring systems all depend on that classification being clear. The guidance that would make it clear does not exist.
The Commission’s Omnibus package — backed by both Parliament and Council, as documented in Update #7 — is moving the high-risk enforcement date from August 2026 to December 2027. This extends the ambiguity window by more than a year rather than resolving it. Organisations that chose local deployment because cloud-based AI could not satisfy unknown requirements are not wrong to have done so. The regulatory machinery is still not assembled.
Many member states have also not formally designated the national bodies responsible for enforcing the AI Act. Without enforcement bodies in place, the obligations on paper have no mechanism for inspection or penalty — yet no organisation in a regulated sector is willing to bet that this absence will be permanent.

Liability Without Rules
In early 2025, the Commission withdrew the AI Liability Directive. The result is 27 different national civil liability regimes covering AI-related harm, instead of the harmonised EU framework the directive would have created. MEP Axel Voss, the EPP rapporteur on the directive, condemned the decision as creating “legal uncertainty, corporate power imbalances and a Wild West approach that only benefits Big Tech.” Kim van Sparrentak of the Greens warned that withdrawal leaves the EU with a different civil liability regime in every member state for the same class of harm.
For enterprises deploying agentic AI in B2B contexts, this has a concrete operational consequence. When an autonomous agent causes commercial harm — miscalculating a credit assessment, erasing a production record, executing a transaction at the wrong rate — liability attribution falls back to contract law in each member state. A deployment across five EU jurisdictions means five different legal frameworks for the same failure mode.
This is not theoretical. The Commission cited an illustrative case in its own DG JUST working paper: a user whose AI agent purchased a product at more than 250 times the going rate using their credit card. No unified liability rule determined who bore the loss. The Air Canada chatbot case, in which a customer won after the airline’s chatbot promised a bereavement discount its policy did not actually offer, was resolved by a Canadian provincial tribunal, one forum applying its own rules to one dispute. Multiply that ambiguity across 27 member states with autonomous systems operating across them, and the liability exposure becomes uninsurable.
The Technical File Problem
A systematic regulatory analysis published on arXiv in April 2026 by Nannini et al. maps AI agent deployments against the full stack of EU law — the AI Act, GDPR, the Cyber Resilience Act, NIS2, the DSA, and sector-specific legislation. Its conclusion on high-risk agentic systems: they cannot currently satisfy the AI Act’s essential requirements.
The paper identifies four structural compliance gaps specific to autonomous agents: cybersecurity vulnerabilities from external tool integration, human oversight evasion through autonomous action chains, transparency failures across multi-party systems, and runtime behavioural drift that makes system behaviour unpredictable over time.
The last of these is worth dwelling on. The AI Act requires providers of high-risk systems to maintain a technical file — an exhaustive inventory of the system’s design, behaviour, and data flows — as a precondition for CE marking. For systems that plan autonomously, revise internal strategies based on tool results, and change behaviour based on accumulated context, maintaining a current technical file is structurally impossible. The requirement was designed for systems with stable, defined outputs. Autonomous agents are not that.
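To make the drift point concrete, here is a minimal sketch, in Python, of the pattern the paper describes: an agent loop whose next action depends on accumulated context and prior tool results, so the mapping from goal to behaviour is not fixed at design time and cannot be enumerated in a static technical file. The tool names and the toy planner are hypothetical, not drawn from any cited system.

```python
# Illustrative only: a minimal agent loop showing why behaviour depends on
# accumulated runtime context rather than a fixed, documentable input-output map.
# Tool names and the toy "planner" are hypothetical, not from any cited system.

from dataclasses import dataclass, field


@dataclass
class AgentState:
    goal: str
    context: list[str] = field(default_factory=list)  # grows with every tool result


def call_tool(name: str, query: str) -> str:
    """Stand-in for an external tool (search, ERP lookup, payment API, ...)."""
    return f"{name} result for: {query}"


def plan_next_action(state: AgentState) -> str:
    # The next action is a function of everything observed so far.
    # Two runs with the same goal diverge as soon as any tool result differs,
    # so the action sequence cannot be specified exhaustively in advance.
    if not state.context:
        return "search"
    if "result for" in state.context[-1] and len(state.context) < 3:
        return "refine"
    return "finalise"


def run_agent(goal: str) -> list[str]:
    state = AgentState(goal=goal)
    actions: list[str] = []
    while True:
        action = plan_next_action(state)
        actions.append(action)
        if action == "finalise":
            return actions  # the actual behaviour is only observable after the fact
        state.context.append(call_tool(action, goal))


if __name__ == "__main__":
    # A static technical file would have to document this trace for every
    # possible sequence of tool results, which is the structural mismatch
    # the argument above points at.
    print(run_agent("assess supplier credit risk"))
```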
This is not a gap companies can close by working harder on their compliance documentation. It is a mismatch between the law’s architecture and the technology’s architecture. Harmonised standards under Standardisation Request M/613 were still in draft as of January 2026. Until those standards exist, organisations cannot obtain CE marking for high-risk agentic deployments through the regular pathway.

Local Deployment as the Compliance Default
In January 2026, France’s Ministry of the Armed Forces awarded Mistral AI a framework agreement to deploy AI models across all military branches and affiliated agencies through 2030. The models run on French-controlled infrastructure. The agreement was explicitly framed around data sovereignty and compliance with GDPR and the AI Act — not around technical superiority. Mistral was not selected because its models outperform the alternatives on benchmark tests. It was selected because it was deployable in a way that resolved the compliance exposure US cloud providers could not.
A 2026 framework agreement between France, Germany, and Mistral extends the same logic to public administration. Enterprises including HSBC, Stellantis, and Veolia are running Mistral’s open-weight models on their own servers. HSBC uses self-hosted generative AI to automate credit assessments and compliance reviews — tasks where data governance requirements are non-negotiable and where a cloud provider’s jurisdictional exposure cannot be accepted.
These are not organisations choosing local AI because they prefer it. They are organisations for which cloud-based AI cannot satisfy the compliance requirements they operate under. The regulatory vacuum is the driver. Local deployment fills it not because it resolves the open questions, but because it removes the organisation from the most exposed positions: no cloud account reachable by a foreign court order, no dependency on a US CLOUD Act jurisdiction, no exposure to a provider that might change its access rules overnight.
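As a rough illustration of what “running on their own servers” means in practice, the sketch below assumes an open-weight model served behind an OpenAI-compatible HTTP endpoint on the organisation’s own infrastructure (a pattern supported by common self-hosting servers such as vLLM). The hostname, port, model name, and prompt are placeholders, not details from any of the deployments named above.

```python
# Illustrative only: calling a self-hosted open-weight model over a local,
# OpenAI-compatible chat endpoint. Hostname, port, and model name are
# placeholders; nothing here is taken from the deployments discussed above.

import json
import urllib.request

LOCAL_ENDPOINT = "http://inference.internal:8000/v1/chat/completions"  # stays inside the org's network
MODEL_NAME = "mistral-small-local"  # whichever open-weight model is served locally


def ask_local_model(prompt: str) -> str:
    payload = {
        "model": MODEL_NAME,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }
    request = urllib.request.Request(
        LOCAL_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # No external API key and no traffic leaving the organisation's own
    # infrastructure: the data-governance point self-hosting is built around.
    with urllib.request.urlopen(request) as response:
        body = json.loads(response.read().decode("utf-8"))
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(ask_local_model("Summarise the retention rules that apply to this credit file."))
```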
What the Scorecard Shows
Prediction 2.1 in The Great Return stated: “EU AI Act high-risk AI system obligations apply from August 2, 2026 with no delay.” The Commission’s Omnibus proposal, backed by Parliament and Council, is moving the high-risk enforcement date to December 2, 2027. That prediction is now conditionally revised — the delay is happening.
What this article adds is not a simple scorecard movement but a structural finding: the delay does not resolve the compliance gaps it was meant to give organisations time to close. The guidelines that would enable classification don’t exist. The liability framework that would govern failures doesn’t exist. The harmonised standards enabling CE marking don’t exist. The enforcement bodies in many member states don’t exist.
Delaying enforcement by 16 months extends the window without filling it. For regulated-sector organisations making deployment decisions now (a system going live this year, a procurement decision this quarter, audit preparation already underway), the extended deadline offers no relief. The vacuum is operational. The response to it is visible in the procurement data. This is the first published evidence advancing prediction 2.1, and it points in one direction: the regulatory framework intended to govern agentic AI is not keeping pace with it, and the organisations that recognised this earliest are already building accordingly.
What would move prediction 2.1 to confirmed in its revised form: a formal Commission decision fixing December 2027 as the binding high-risk application date. What would move it to confirmed in its original form: enforcement action after August 2026 against a high-risk AI system. The former is more likely. The August deadline is not coming back.