
Chapter 9 of The Great Return named five moves Big Tech would deploy to slow the shift to local AI: free-tier subsidies, ecosystem lock-in, the safety narrative, API restrictions, and talent acquisition. Seven weeks after publication, all five are visible in the same week’s news cycle. The headline development wasn’t in the paper.
The Playbook
The paper was direct about what to expect. “Google, Microsoft, Meta, Amazon, and others are not passively watching their cloud revenue streams migrate to local hardware. Their strategic responses are sophisticated, well-funded, and designed to preserve the centralized paradigm.”
Five moves were named. Here’s how each one landed in March 2026.
Free Until It Isn’t
The zero-cost subsidy move has been running for months. In the last three weeks it has become visible as a pattern rather than a collection of product launches.
Microsoft integrated Copilot into Health (March 12), connecting it to medical records and wearables. Copilot Tasks (February 26) now completes multi-step work on a computer of its own. Copilot is arriving on Xbox consoles this year. A second generation of Microsoft’s AI image model (MAI-Image-2) rolled out free in Copilot and Bing. Two new Cloud PCs — the Asus NUC 16 and Dell Pro Desktop for Windows 365 — launch in Q3 2026, running a locked-down operating system Microsoft calls Windows CPC. No local processing. Pure cloud dependency, embedded into the hardware SKU.
So far, so expected. But something else happened in the same week: Microsoft announced it was removing Copilot entry points from Snipping Tool, Photos, Widgets, and Notepad — “reducing unnecessary Copilot entry points” in the official language. The Copilot buttons in Windows 11 had been “getting out of control.”
The zero-cost subsidy model had overshot. Users pushed back. Microsoft is now calibrating.
This is the first documented user backlash against a move in the counter-move playbook. The rollout is not being abandoned — Copilot Health and Copilot Tasks are the real bets. But the mass-surface approach had to be trimmed because it generated visible user resistance.

On the lock-in side, two moves stand out. Microsoft is requiring a Microsoft account to save SwiftKey typing data starting May 31, 2026 — Google and Apple accounts removed. Keyboard input data becomes Microsoft-proprietary. Separately, Anthropic’s Claude Cowork capability is being integrated into Microsoft’s Copilot cloud — a competitor AI’s best agent feature absorbed into the walled garden rather than left to operate independently. This is not partnership. It is ecosystem capture.
The paper’s description holds precisely: “the switching cost is not just technical but organizational: workflows, training, documentation, and business processes become entangled with the cloud AI provider’s specific capabilities.”
The Safety Narrative Goes to Washington
The most significant counter-move of the quarter did not happen in Brussels. It happened in Washington.
On March 20, 2026, the Trump administration released a national AI legislative framework — a four-page document with seven provisions, delivered to Congress as a roadmap for federal AI legislation. The key provision: pre-empt all state AI laws. A single national standard, applied uniformly, overriding state-level regulation.
The stated rationale is innovation speed. “A patchwork of conflicting state laws would undermine American innovation and our ability to lead in the global AI race.” The framework also includes a liability shield: Congress should prevent “penalizing AI developers for a third party’s unlawful conduct involving their models.” No independent oversight mechanism. No enforcement structure.
Critics were precise. “White House AI czar David Sacks continues to do the bidding of Big Tech at the expense of regular, hardworking Americans,” said Brendan Steinhauser, CEO of the Alliance for Secure AI. “This federal AI framework seeks to prevent states from legislating on AI and provides no path to accountability for AI developers for the harms caused by their products.”
The paper predicted the safety narrative would be deployed in regulatory lobbying, with cloud providers pushing for open-weight models to face the same high-risk classification as commercial cloud. That happened during the EU AI Act negotiations — and Article 2(12) largely held the line.
What the paper did not anticipate was this version: the safety narrative executed through the US executive branch as statutory pre-emption. Lobbying influences how legislation is written. Pre-emption eliminates competing legislation entirely. California’s SB-53 and New York’s RAISE Act — state laws that might protect local AI deployment ecosystems — are now under direct federal threat.

For European readers, the immediate legal risk is contained. The AI Act, GDPR, and the European regulatory space are not affected by US federal pre-emption. But there is a signal worth reading: when Big Tech is willing to mobilise the US executive branch against state-level AI oversight, it tells you how seriously the industry is treating the threat of distributed, local, sovereign AI. You do not spend political capital on irrelevant problems.
The Chinese Wild Card
The paper predicted five counter-moves. A sixth arrived this week from an unexpected direction.
On March 23, 2026, the US-China Economic and Security Review Commission published a report documenting the dominance of Chinese open-source AI models on global platforms. Alibaba’s Qwen family of models has surpassed Meta’s Llama in global cumulative downloads on HuggingFace. Approximately 80% of US AI startups now run on Chinese open-source AI models. DeepSeek R1 briefly overtook ChatGPT as the most downloaded model on the US App Store in January.
The commission’s framing: “Chinese open-source AI is creating a self-reinforcing competitive advantage.” The machinery of open-source contribution — cheap, widely adopted, continuously improved by global usage data — allows China to close performance gaps despite restricted access to advanced semiconductors. “Open model proliferation creates alternative pathways to AI leadership.”
The regulatory weapon implicit in this framing: local AI and open-weight models equal Chinese infrastructure running on Western hardware. That is a far more politically potent argument against non-commercial local AI than the “Wild West safety” narrative the paper predicted. Security concerns mobilise faster than safety debates.
The counter-evidence also landed in the same week. Siemens CEO Roland Busch said publicly there are “no disadvantages” to using Chinese open-source AI for the company’s industrial automation models — citing cost and ease of customisation. The world’s largest industrial automation company is pragmatic. Below the CEO level, cost wins national security debates.
The European reading is this: the Chinese open-source dominance story sharpens the market gap the paper identified. If US cloud and Chinese open-source are both politically and technically complicated options, European sovereign AI stacks — local, open-weight, built on European hardware and European data governance — become the only clean option. The paper’s argument gains a new axis.
What This Means
All five counter-moves named in Chapter 9 are documented in the March 2026 news cycle. Prediction 5.5 moves from unresolved to confirmed — faster than expected.
Two things are worth noting alongside the confirmation.
First: the Copilot backlash. The fact that Microsoft had to trim Copilot entry points across consumer apps is evidence that the zero-cost subsidy model has limits. Users notice when the tool becomes the product. The broad-surface saturation approach was visibly resisted. That matters for the paper’s argument — it means the cloud-first default is not passively accepted.
Second: the scale of the federal counter-move is signal. Industries do not mobilise federal legislative machinery against irrelevant threats. The Trump framework’s pre-emption move is an indicator of how seriously Big Tech is treating the migration to local AI. The paper’s core prediction — that the shift is real and that 2026 is the tipping point — is being confirmed by the strength of the response against it.
The running scorecard: 8 of 30 predictions assessed, 8 in the right direction, 5 moving faster than the paper projected. No predictions have been wrong.
