On 27 March 2026, Iranian-linked hackers published the contents of Kash Patel’s personal Gmail account via Telegram.

Patel is the director of the FBI.

The story was reported in the security press as an embarrassment to US intelligence. That framing misses the point. The relevant fact is not that it happened to someone with that job title. The relevant fact is that it happened through a personal email account — not through any FBI system, not through a government network, not through a known vulnerability in any piece of infrastructure. Through a cloud account that anyone with the credentials could access from anywhere.

There was no attack on the FBI. There was an attack on a person.

That distinction, played out across four incidents in one month, describes a structural shift in how data is compromised that has direct implications for every organisation deciding whether to keep its AI in the cloud or bring it home.

The Old Attack Surface

For most of the history of enterprise security, the dominant threat model ran something like this: an attacker identifies a vulnerability in infrastructure, exploits it to gain access, and exfiltrates data from within. The defenders’ job was to find and close vulnerabilities before attackers could reach them.

This model produced an entire industry. Patching cycles. Penetration testing. CVE databases. Intrusion detection. Firewall rules. Zero-trust network architecture. These tools are real and necessary. Against a specific class of attack, they work.

But the infrastructure model assumes that the target is the infrastructure. That assumption has been quietly eroding for years, and March 2026 is the month it broke into public view.

The attackers who compromised the EU Commission, hijacked the Axios JavaScript library, broke the LiteLLM supply chain, and published the FBI director’s email did not exploit vulnerabilities in the traditional sense of the word. They found credentials. They found accounts. They found the people behind the systems, and they attacked the person rather than the perimeter.

The result is identical from the victim’s perspective. The data is gone. But the mechanism is different — and the implications for how we think about AI deployment are not the same.


March 2026: Four Incidents, One Pattern

The four incidents did not appear in the same news cycle. They arrived across a month, in different sectors, attributed to different threat actors. Read individually, each looks like a discrete breach. Read together, they form a pattern.

1. The EU Commission — 24 March

Amazon Web Services confirmed that 350 gigabytes of EU Commission data had been exfiltrated. The breach came less than 60 days after a separate Ivanti MDM compromise in January.

The detail that matters: AWS’s official statement noted that its infrastructure had operated as designed. No vulnerability in AWS was exploited. What was compromised was account-level access — credentials that authorised a session, and a session that authorised the transfer. The attacker did not break into the cloud. They logged in.

The data was from the Commission’s public-facing web platforms — databases, employee data associated with Europa.eu — hosted on an American cloud provider. ShinyHunters claimed the data and announced publication with no ransom demand. The internal Commission networks were not reached. But the account-level access that made it possible was indistinguishable, in mechanism, from access to any other cloud-hosted asset.

2. Axios npm — 31 March

Google attributed this operation to UNC1069, a North Korean threat actor. The attack did not find a vulnerability in the Axios JavaScript library — one of the most widely deployed packages on the web, with more than 100 million downloads per week. It found the maintainer’s account.

The package was hijacked for approximately three hours. During that window, a remote access trojan was served to every project that had Axios in its dependency tree and ran an install or update. Three hours, 100 million downloads per week. The arithmetic is the attack.

There was no bug in Axios. There was a credential in a package registry. That is all that was needed.
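The arithmetic can be made concrete with a back-of-envelope estimate. This sketch assumes the 100 million weekly downloads cited above are spread evenly across the week, which real download traffic is not; it is an order-of-magnitude illustration, not a forensic count.

```python
# Back-of-envelope exposure estimate for the Axios hijack window.
DOWNLOADS_PER_WEEK = 100_000_000  # figure cited in this section
HOURS_PER_WEEK = 7 * 24           # 168 hours
WINDOW_HOURS = 3                  # approximate duration of the hijack

downloads_per_hour = DOWNLOADS_PER_WEEK / HOURS_PER_WEEK
exposed_installs = downloads_per_hour * WINDOW_HOURS

print(f"~{exposed_installs:,.0f} installs during the three-hour window")
```

Under that flat-rate assumption, the window covers somewhere in the region of 1.8 million installs. Even if the true figure were an order of magnitude lower, the point stands: the attacker did not need a bug, only the window.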

3. LiteLLM — 24–26 March

The paper named LiteLLM explicitly in chapter seven as a core component of the open-source AI stack. It is the gateway layer — the component that sits between AI applications and the underlying models, handling routing, cost tracking, rate limiting, and model switching. Millions of downloads per day. Used by organisations that have built local AI deployments precisely because they wanted to own their stack.

Malicious code was inserted into LiteLLM through a compromised PyPI account. Two versions (1.82.7 and 1.82.8) were published with malware that harvested SSH keys, cloud tokens, and Kubernetes secrets and installed persistence on the host. TeamPCP claimed responsibility. The incident was part of a broader Trivy supply chain compromise active during the same period.
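For an organisation auditing its own hosts after an incident like this, the first check is mechanical: is a known-compromised release installed? A minimal sketch, using only the standard library; the `check_package` helper and the denylist structure are illustrative, with the two version numbers taken from the incident reports above. Real response also means rotating any secrets the host held, since the malware harvested keys rather than merely persisting.

```python
from importlib.metadata import PackageNotFoundError, version

# The two malicious releases named in the incident reports.
KNOWN_BAD = {"litellm": {"1.82.7", "1.82.8"}}

def check_package(name: str, bad: dict[str, set[str]] = KNOWN_BAD) -> str:
    """Compare an installed package's version against a known-bad list."""
    try:
        installed = version(name)
    except PackageNotFoundError:
        return f"{name}: not installed"
    if installed in bad.get(name, set()):
        return f"{name}=={installed}: known compromised release, remove and rotate secrets"
    return f"{name}=={installed}: not on the known-bad list"
```

A clean result from a check like this is necessary, not sufficient: it tells you the package registry's current metadata matches a version you have not denylisted, nothing more.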

The irony is structural. The organisations using LiteLLM were, in many cases, running local AI precisely to avoid the vulnerability of being dependent on external providers. The attack came in anyway — not through their infrastructure, but through the person who maintained the library they depended on.

4. Kash Patel — 27 March

Three days after the EU Commission breach. The FBI director’s personal Gmail account. Published through Telegram by a group with Iranian government links.

The significance is not the geopolitics. It is the account type. Not a government system. Not an FBI infrastructure component. A personal cloud account — the kind used by everyone with a smartphone, including every employee at every regulated-sector organisation building a local AI strategy.

If the head of the FBI’s personal cloud account is a viable attack surface for state-level threat actors, that category of account is a viable attack surface everywhere. The person is the perimeter. The perimeter has a Gmail address.


Why This Shift Is Happening Now

The shift from infrastructure exploitation to identity theft is not new in theoretical terms — security researchers have tracked the trend for years. What is new is the velocity and the industrialisation.

On 1 April 2026 — one day after the Axios breach was reported — BleepingComputer covered a service called EvilTokens: a commercial, subscription-based platform specifically built to facilitate Microsoft device code phishing at scale. A product, with support tiers: credential theft offered as a service.

This is the industrialisation of identity compromise. It is no longer the exclusive tool of nation-state actors with deep technical capability. It is a service with a pricing page.

The structural reason is straightforward. Infrastructure hardening has worked. Enterprises have invested heavily in vulnerability management, and the cost of infrastructure exploitation has risen. A company running modern zero-trust architecture and disciplined patching cycles is meaningfully harder to penetrate through a CVE than it was five years ago. Sophisticated actors responded by moving to what is easier: the people behind the systems, who typically have fewer technical defences than the systems themselves.

The cloud has amplified this dynamic. When data lives in cloud accounts rather than on-premise servers, the account credential becomes the master key. Steal the credential, access the data. No exploits required. No persistence in the network. No forensic footprint on the target’s infrastructure. One login event, and the attacker is inside as an authorised user.

This is not a problem that patches solve. It is an architectural consequence of storing sensitive data in credentials-gated accounts accessible from the public internet.


What Local AI Changes

The paper made a specific argument. Four converging forces — geopolitics, environmental pressure, regulation, and silicon maturity — are pushing AI inference from cloud to local infrastructure. The argument was about where computation happens and what that means for data sovereignty, compliance, and cost.

March 2026 adds a dimension the paper did not foreground: local AI removes a specific attack surface that is particularly vulnerable to the credential-theft model.

When a European organisation runs its AI on local infrastructure — model weights on a server in its own building, inference never leaving the network, no API call to a US provider — there is no cloud account to phish. The data processed by the AI does not live in an account accessible via stolen credentials from anywhere on the internet. An attacker who obtains the account credentials of an employee does not thereby obtain access to five years of AI-processed patient records, or case histories, or financial data, because those records were never uploaded to a credentials-gated cloud service in the first place.

This does not make local AI immune to credential-based attack. An attacker who gains physical access to the local server, or who compromises the local network at a deeper level, still has pathways. The person running the local deployment can still be compromised. Local AI is not the same as secure AI by definition.

But it is a meaningful reduction in the specific attack surface that the four March incidents exploited. The EU Commission was not breached because AWS had a vulnerability. It was breached because AWS had a front door accessible from the internet, and someone obtained the key. Local AI does not have that front door.

What It Does Not Change

Honesty requires the other side of this argument.

Local AI does not protect the organisation from supply chain attacks in the category of Axios and LiteLLM. Those attacks target the software components that build and run local deployments. An organisation running a carefully configured local AI stack that includes LiteLLM was still exposed in March 2026, regardless of whether its model inference was local or cloud-based. The attack surface was not the cloud; it was the open-source tooling ecosystem, accessed through a maintainer’s compromised account.

The defence for this class of attack is different: version pinning, software bill of materials discipline, dependency verification, and the kind of supply chain monitoring that most European SMEs do not yet practise. This is addressable. It is not addressed by moving inference local.
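The simplest piece of that discipline is checkable automatically: every dependency should carry an exact pin, so that a hijacked release cannot arrive through a routine install. A minimal sketch of such a check; the `unpinned` function is illustrative, and real tooling (lock files, hash-checking installs, SBOM scanners) goes considerably further.

```python
def unpinned(requirements_text: str) -> list[str]:
    """Return requirement lines that lack an exact '==' version pin."""
    flagged = []
    for raw in requirements_text.splitlines():
        line = raw.split("#", 1)[0].strip()   # drop comments and whitespace
        if not line or line.startswith("-"):  # skip blanks and pip options
            continue
        if "==" not in line:
            flagged.append(line)
    return flagged

example = """\
litellm==1.82.6        # pinned: an update must be a deliberate act
requests               # floating: whatever the registry serves today
some-lib>=2.0          # range: same problem, narrower window
"""
print(unpinned(example))
```

Pinning alone would not have stopped a team that deliberately upgraded into 1.82.7 during the window, but it converts a silent, automatic exposure into an explicit decision that can be reviewed.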

Nor does local AI protect the individual person from credential compromise. An employee whose personal email is exfiltrated loses those emails regardless of what their employer’s AI infrastructure looks like. The Kash Patel incident demonstrates this at the extreme end. No enterprise IT policy governs personal Gmail. No local AI deployment changes that.

What local AI changes is narrower and more specific: it removes the data processed by the AI itself from the attack surface reachable via cloud credentials. That is not nothing. For organisations processing GDPR Article 9 data — children’s records, medical histories, mental health documentation — removing that data from cloud-account reach is a material risk reduction by any reasonable assessment.


The European Dimension

European organisations face a specific version of this problem that diverges from the US experience in an important way.

The credential-based attack model that produced the four March incidents is asymmetric in its consequences. When a US company loses cloud-stored data through credential theft, the primary consequence is reputational and financial. When a European public-sector organisation or regulated-sector enterprise loses GDPR Article 9 data the same way, the primary consequence is also legal — and the regulator does not accept “the provider’s infrastructure operated as designed” as a defence.

AWS said exactly that about the EU Commission breach. Infrastructure as designed. That is true. It is also true that the data was European public-sector web platform data, stored under an American company’s terms of service, under US cloud law jurisdiction. It is true that the Commission had signed a data processing agreement it believed complied with GDPR. And it is true that 350 gigabytes of that data was published.

The data was not protected by the compliance documentation. It was not protected by the contract. It was accessible via credentials, and the credentials were compromised, and now the data is public.

For a Belgian youth care organisation, a regional hospital, an SME managing DORA-relevant financial data: the scenario is identical in structure. The scale is smaller. The legal exposure is proportionate. The credential-theft model does not care about sector or organisation size.

The paper’s core argument — that European organisations in regulated sectors will migrate to local AI as compliance pressure makes cloud inference untenable — gains an additional dimension here. It is not only that regulators are tightening requirements around where AI processing happens. It is also that the threat model for cloud-stored data has changed in a way that makes local storage materially safer against the attack vectors that are actually in use.

Those two forces do not compete. They converge.


The Paper Did Not Predict This

The paper made thirty predictions. This is not one of them.

The paper argued for local AI on the basis of regulatory compliance, data sovereignty, environmental sustainability, and hardware maturity. It did not argue that the credential-based attack model would emerge as a distinct threat class that local architecture specifically mitigates. That argument was not visible in February 2026 in the form it took in March.

It is visible now.

This is, in the series’ terminology, an update rather than a prediction still pending. Not a scorecard item. An argument that grew from the evidence of one month and strengthens the paper’s conclusion through a mechanism the paper did not identify. The destination remains the same: local AI as the structural destination for European regulated-sector organisations. The road has grown a new lane.

The person is the perimeter. The cloud makes that perimeter a front door. Local infrastructure does not move the perimeter back to the system — that argument was lost years ago. It removes the most accessible entrance the credential-theft model uses.

For European organisations deciding where their AI data lives, March 2026 made that argument concrete. Not in a CVE database. In four incidents across a single month, readable by anyone paying attention to what the evidence actually says.