
We published "The Great Return" on February 7, 2026. Five weeks later, the paper's central argument, that dependency on foreign AI infrastructure is a structural risk rather than a theoretical one, has been demonstrated in real time, in the most consequential way possible.
Not by a regulatory filing. By a war.
The Scenario the Paper Described. Now Actual Events.
Chapter 3 of the paper devoted considerable space to what we called the "decoupling of intelligence": the argument that routing every prompt through servers governed by foreign law represents a dependency incompatible with genuine autonomy. We cited submarine cable vulnerability. We cited the CLOUD Act. We cited the risk that a foreign government could, through legal or commercial pressure, interrupt access to a critical AI capability at the worst possible moment.
On February 27, 2026, the US Department of Defense and Anthropic reached a public impasse. Anthropic refused to remove contractual safeguards prohibiting the use of Claude for fully autonomous weapons and mass domestic surveillance. In response, President Trump directed every federal agency to immediately cease using Anthropic's technology. Defense Secretary Hegseth designated the company a "supply chain risk" — a label previously reserved for foreign adversaries like Huawei and Kaspersky — and announced a six-month phase-out.
Here is what happened next, and why it matters more for Europe than for Washington.
The Pentagon continued using Claude. The US military's strikes in Iran — ongoing at the time of writing — use Anthropic's technology for intelligence processing and targeting support. As Lawfare Media noted, the government cannot simultaneously declare a vendor an acute security threat and keep it running in active combat operations for six months. The designation exposed the bottleneck it was supposed to solve: Claude is too deeply embedded to switch off.
Defense analysts at Piper Sandler put it plainly: Anthropic is "heavily embedded in the military and the intelligence community" and moving off the company's technology could "pose some short-term disruptions." One Defense Department official, off the record, called it "a huge pain in the ass to disentangle."

This is the dependency bottleneck made visible. And the lesson is not confined to the Pentagon.
This Was Not in the Paper. But It Should Have Been.
The paper's Chapter 3 predicted that geopolitical friction would expose cloud dependency as a national security concern. What it did not anticipate was the precise mechanism: not a foreign adversary restricting access, but a domestic political confrontation between a government and its own AI supplier over the supplier's ethics.
The supply chain risk designation is normally reserved for the likes of Huawei. Here it was applied to a San Francisco company that declined to remove guardrails against autonomous killing. That is an extraordinary escalation, and it reveals something the paper's geopolitical chapter argued structurally but could not yet illustrate concretely: dependency creates leverage, and leverage will eventually be used.
For a European organization running its operations on a US cloud AI provider, the lesson is not abstract. If the US government can unilaterally attempt to blacklist its own most safety-conscious AI company over a contract dispute — and fail to extract itself from that dependency even while doing so — then any organization with similar dependencies should be asking the same question the Pentagon is now asking in operational terms: what does our transition plan look like?
This confirms and sharpens prediction 2.2 from the paper (AI Act compliance pressure drives organizations toward local inference), but through a wholly unexpected route. It is not the EU AI Act doing the work here. It is a geopolitical shock demonstrating, in real time, that AI infrastructure dependency is a liability — and that the liability does not announce itself politely in advance.
The European Response: An Invitation

The irony was not lost on Europe. Within days of the blacklisting, European policymakers and commentators were publicly calling for Anthropic to relocate to the EU, citing its alignment with the AI Act's human-oversight principles and its explicit refusal to enable mass surveillance, the very requirements European regulation mandates.
The European Policy Centre published a response titled "The Pentagon blacklisted Anthropic for opposing killer robots. Europe must respond." Chatham House's framing was blunter: the blacklisting is, in its words, "a hammer blow to the trustworthiness of US technology."
This is the sovereignty argument from Chapter 3 expressed not as policy analysis but as lived experience. The US has demonstrated that its government will use commercial and legal pressure against its own AI infrastructure companies if those companies resist. If it will do that to American companies, the question of what it might do — or permit to happen — to European organizations relying on that same infrastructure answers itself.
The OpenAI Side-Effect
Hours after Anthropic was blacklisted, OpenAI signed a Pentagon deal for classified systems access. OpenAI CEO Sam Altman had publicly stated that same morning that he shared Anthropic's position on restricting military uses. The sequence was noted.
The consumer response was immediate and measurable. Claude climbed to #1 on the Apple App Store globally. ChatGPT uninstalls surged 295% in the days following the announcement. The "QuitGPT" movement claimed 1.5 million participants. OpenAI later acknowledged the announcement looked "sloppy and opportunistic."
For the paper's argument, the consumer shift is a secondary signal. The primary signal is structural: OpenAI's contract with the Pentagon — in which it claims to offer equivalent safeguards through "technical" rather than contractual mechanisms — introduces exactly the kind of third-party-mediated compliance ambiguity that makes European organizations nervous under the AI Act. If the safeguards are technical rather than contractual, they can be removed. If they are contractual, the Pentagon already demonstrated it will try.
The Iran Dimension

The paper's Chapter 3 cited submarine cable cuts in the Red Sea — linked to the Yemen conflict — as a concrete example of the infrastructure vulnerability argument. The Iran conflict adds a harder dimension. Iranian drone strikes on AWS data centers in the UAE and Bahrain were described by Tehran as deliberate and strategic. The Strait of Hormuz, through which data routing infrastructure and energy supply both pass, is a contested chokepoint in an active conflict zone.
The paper described this risk. We did not expect to be citing it as current events five weeks after publication.
For European organizations, the data routing implications are direct. Transatlantic internet traffic passes through a small number of cable systems, some of which route through conflict-adjacent geography. The "supply chain resilience" argument from Chapter 3 — that local AI inference ensures business logic remains operational during international data-routing conflicts — was theoretical in February. It is less theoretical in March.
What This Means
The paper's core prediction was that 2026 would be the year the default inverted — from "cloud-first" to "local-first, cloud-extended." The events of the past two weeks have accelerated that inversion, not through the regulatory path the paper emphasized, but through a geopolitical demonstration that even the world's most advanced military cannot disentangle itself from a single AI dependency in six months.
That is the dependency bottleneck. Not a concept. A case study.
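To make the "local-first, cloud-extended" pattern concrete, here is a minimal sketch in Python. None of this comes from the paper: run_local, run_cloud, and InferenceResult are hypothetical stand-ins for an on-premises model runtime and a remote provider API. The point is the ordering of dependencies: local inference is the default path, the cloud is an optional extension, and a cloud outage degrades the request rather than halting it.

```python
# A minimal sketch of "local-first, cloud-extended" routing.
# run_local, run_cloud, and InferenceResult are hypothetical stand-ins,
# not APIs from any specific vendor or from the paper itself.
from dataclasses import dataclass


@dataclass
class InferenceResult:
    text: str
    source: str  # "local" or "cloud"


def run_local(prompt: str) -> InferenceResult:
    # Stand-in for an on-premises model call (local runtime, local weights).
    return InferenceResult(text=f"[local] {prompt}", source="local")


def run_cloud(prompt: str) -> InferenceResult:
    # Stand-in for a remote provider call; here it simulates an outage,
    # whatever the cause: a severed cable, a sanction, a procurement ban.
    raise ConnectionError("upstream provider unavailable")


def infer(prompt: str, needs_frontier: bool = False) -> InferenceResult:
    """Local is the default; cloud is an optional extension with local fallback."""
    if needs_frontier:
        try:
            return run_cloud(prompt)
        except ConnectionError:
            pass  # degrade gracefully instead of halting the business process
    return run_local(prompt)


if __name__ == "__main__":
    print(infer("summarise the incident report"))
    print(infer("draft the quarterly risk analysis", needs_frontier=True))
```

The design choice worth noticing is that the fallback direction is inverted from the cloud-first norm: the remote provider can disappear, by blacklisting, court order, or severed cable, without taking the business process down with it.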
For European organizations with their own AI deployments: the August 2 AI Act enforcement deadline is now roughly twenty weeks away. The paper's compliance argument (predictions 2.1 and 2.2) stands. But the geopolitical argument just received a real-world illustration that no policy paper could have manufactured.
The question is no longer whether dependency creates risk. The question is how long it takes your organization to answer "what does our transition plan look like?"

What to Watch Next
The Anthropic lawsuit against the Pentagon, filed today in the Northern District of California, will be a critical signal. If the courts vacate the supply chain risk designation, that outcome confirms that even this administration cannot weaponize procurement law against a domestic company over a contract dispute without legal consequence. If the designation is upheld, the chilling effect on AI safety culture in the US will be significant, and the case for European-origin AI infrastructure will sharpen further.
Axelera Europa's first production deployments are still on track for H1 2026. The AI Act enforcement clock is running. And the Iran conflict is not resolved.
The paper's June update will cover enterprise adoption data, Axelera's first reviews, and the outcome of the Anthropic legal challenge. If the courts move fast, that story may arrive before June.
This update is part of an ongoing research series tracking the predictions made in The Great Return: Why 2026 Marks the Tipping Point for Local AI Migration in Europe — published February 2026. Full paper available at Zenodo.