
The paper gave it a different name.
In February 2026, the vocabulary was “Regional Trust Hubs,” “European sovereign silicon,” and “local-first AI infrastructure.” The argument was structural: geopolitics, compliance, hardware maturity, and environmental limits would push AI workloads from US hyperscalers back to European-controlled infrastructure. The conclusion was framed as a prediction.
It is no longer a prediction. It is EU industrial policy.
This week, the vocabulary converged. The phrase now running in European Parliament briefings, member state communications, and infrastructure strategy documents is sovereign AI factory. Denmark has called on the Commission to fast-track capacity. MEPs are pressing for faster progress on Big Tech accountability. An opinion piece circulating at the Commission level — “Sovereign AI Factories and the Future of Infrastructure Strategy” — argues the case in the exact structural language the paper used.
When governments name a thing, it stops being a trend and starts being a programme.
What Changed
For the first seven weeks after publication, the paper was tracking the forces that were making local AI structurally inevitable. The question was always: are the conditions forming, or have they already formed?
The conditions have formed. The policy response is now also forming.
Denmark’s position to the Commission is precise: fast-track sovereign data centre capacity as a strategic infrastructure priority, on par with energy security and semiconductor supply. The framing is not about AI as software. It is about AI as infrastructure — infrastructure that, like power grids and rail networks, cannot be allowed to depend on foreign ownership and foreign law.
This is the sovereignty argument from Chapter 3 of the paper, running as a member state diplomatic position.

Why the Vocabulary Shift Matters
The label “sovereign AI factory” is not just rhetorical. It carries specific industrial policy logic.
A factory, in EU industrial policy terms, implies a supply chain — hardware, energy, talent, regulation — and a product. It implies reproducibility: not one installation but a model that can be deployed across member states. It implies public investment at infrastructure scale, not at research grant scale. And it implies that the output is treated as a public asset, not a commercial service subject to standard market terms.
The paper projected that European governments would eventually reach for this framing, driven by the same geopolitical logic that produced the Chips Act (€43B), the Data Act, and the AI Act. The IPCEI-AI programme — Important Projects of Common European Interest for Artificial Intelligence — is the institutional vehicle waiting for this vocabulary. It funds cross-border AI infrastructure projects with state-aid clearance.
“Sovereign AI factory” maps directly onto the IPCEI model. When a phrase from policy debate becomes a project description, funding follows.
The MEP Pressure
Alongside the member state push, MEPs are pressing the Commission for faster progress on Big Tech accountability — specifically on the question of whether the Digital Markets Act’s gatekeeper obligations are being enforced with sufficient speed and depth.
The connection to local AI is not obvious, but it is real.
The DMA’s gatekeeper framework imposes interoperability requirements on core platform services. When those requirements are not enforced — or enforced slowly — the practical effect is continued lock-in. Organisations that want to migrate AI workloads from cloud to local still depend on data portability, API openness, and the ability to extract their own data from hyperscaler environments.
The paper named this in Chapter 9 as a counter-move: “The switching cost is not just technical but organisational: workflows, training, documentation, and business processes become entangled with the cloud AI provider’s specific capabilities.” MEP pressure on DMA enforcement is, structurally, pressure on the counter-move. Faster DMA enforcement lowers the switching cost. It removes one of the tools Big Tech uses to slow the return.

The Axelera Signal
The sovereign factory argument only holds if the hardware it runs on exists.
Axelera AI — a Dutch-Swiss company producing European AI inference chips — ships its next generation, Europa, in the first half of 2026. At up to 629 TOPS per chip, it exceeds the inference throughput of the third-generation NPUs from US vendors that are only now reaching enterprise volumes. Axelera is not alone: SiPearl (France), Kalray (France), and Semidynamics (Spain) are building sovereign European silicon across HPC, inference acceleration, and RISC-V designs.
When a sovereign AI factory is proposed as infrastructure policy, the hardware tier can now be European. This is new. In 2024, any honest assessment of local AI inference at enterprise scale had to acknowledge that European hardware was aspirational. In mid-2026, it is shipping.
The paper projected this as a “2025–2026 arrival.” The arrival is on schedule.
The Regional Trust Hub Model
The paper’s specific policy recommendation was the Regional Trust Hub — a publicly anchored, regionally deployed AI inference node serving SMEs, public institutions, and regulated-sector organisations that cannot build their own infrastructure but cannot use US hyperscale cloud without material compliance risk.
The sovereign AI factory concept is the scaled-up version of this idea. The factory produces the capacity. The Trust Hub distributes it.
For small and medium organisations — a regional hospital group, a mid-size accounting firm, a public school network — the sovereign factory matters not because they will directly access it, but because it creates the supply chain beneath a shared regional AI infrastructure they can actually use. The economics only work at scale. The policy only works if the factory is built first.
Denmark’s push is therefore not just about Danish infrastructure. It is about demonstrating that a member state can move first, fast enough to establish a replicable model. The Commission’s task, if it responds, is to provide the funding mechanism that lets the model propagate.

What the Scorecard Says
Update #7 brought the prediction scorecard to 9 of 30, with the fourth structural force — EU regulation — moving from theoretical to active.
The sovereign factory development closes a different prediction. The paper projected that European governments would shift from regulating AI to investing in sovereign AI infrastructure, driven by the same forces that produced the Chips Act and the Digital Decade targets. Denmark's move, the Commission briefings, and the MEP pressure on DMA enforcement collectively constitute that shift — in its earliest, fastest-moving form.
Scorecard update: 10 of 30 predictions assessed. 10 in the right direction. One prediction — the policy cascade from regulation to investment — moves from “forming” to “active.” The remaining 20 are mostly in chapters the news has not yet fully reached: sector-by-sector deployment, the management skills gap, labour market effects, and the 2028–2030 adoption curve.
That curve is now running on a sharper incline than the paper estimated.