
Anthropic tried to write a conscience clause into a government contract.
The clause was modest in scope. Before signing a Pentagon procurement agreement, Anthropic wanted a written guarantee that its AI models would not be used for mass surveillance or autonomous weapons systems. This was not a policy stance issued in a press release. It was a contractual condition — a legal limit on the use of a product that Anthropic agreed to supply.
The Pentagon refused.
What happened next is the story this article is about. Not because of what it means for Anthropic. But because of what it means for anyone who believes AI ethics can be durable without the right legal environment to protect them.
The Weapon That Was Used
When Anthropic refused to drop the clause and pursued the point legally, Defense Secretary Pete Hegseth responded by classifying Anthropic as a risk to the national supply chain.
That classification has a specific history. It is a designation developed to manage contractors with operational ties to adversarial foreign powers — primarily China. It was built to deal with companies where the concern is that a foreign government has economic leverage, intelligence access, or covert influence over the contractor’s products and data. It carries significant consequences: contract exclusions, procurement bans, potential cascading effects on any government customer relationship.
Hegseth applied it to a US AI company because that company asked for a contractual ethics guarantee.
This is not a rounding error in US national security policy. It is a deliberate choice to use the most available coercive instrument to punish a company for asserting a limit on how its product would be used. The instrument happened to be a foreign adversary designation. The target was a domestic company with an explicit safety mission.

What the Court Said
Federal Judge Rita Lin blocked the designation. Her ruling: the Pentagon’s action violates Anthropic’s constitutional rights, including the right to freedom of expression.
The First Amendment framing matters. Lin did not rule on contract law alone — on whether the Pentagon had the right to reject a contractor’s terms. She ruled that using a national security classification as retaliation for a company’s expressed position on how its AI should be used constitutes a restriction on protected speech. The government was, in effect, penalising Anthropic for saying something — specifically, for saying that its AI should not be used for mass surveillance or autonomous weapons.
The block is temporary — seven days, giving the government time to appeal. The appeal may succeed. The administration’s legal theory, if it runs this to the Supreme Court, will involve national security deference — the doctrine that courts give wide latitude to executive branch decisions on defence procurement. That doctrine has historically been broad.
But the First Amendment finding is already public. The reasoning is in the record. Even if the government wins on appeal, it will have won in a case where a federal court found its conduct to be retaliatory suppression of speech. That finding does not expire when the appeal is filed.
The IPO Complication
Anthropic is preparing to go public. The timeline reported this week: as early as October 2026. Potential valuation: $60 billion. Underwriters in discussion: Goldman Sachs, JPMorgan Chase, Morgan Stanley.
A company entering public markets while designated a federal supply chain risk — even temporarily — faces a specific kind of investor concern. It is not about the legal merits. It is about the signal that the executive branch is willing to use national security machinery against a company that asserts ethical limits on its products. Investors considering a $60 billion valuation want to understand the political exposure of the asset they are buying.
Anthropic won the first round. The designation is paused. But the IPO now carries a disclosure obligation: the company must inform prospective investors that it has been in active litigation with the US federal government over whether expressing an AI ethics position constitutes a national security risk.
That is an unusual prospectus footnote.

The European Reading
The paper’s Chapter 9 covered Big Tech counter-moves — the ways cloud providers fight to slow the shift to local AI. Update #6 documented all five: free-tier subsidies, ecosystem lock-in, the safety narrative, API restrictions, and talent acquisition.
The Anthropic case is not in that taxonomy. It is a different structure: not a corporation fighting regulation, but a corporation asserting an ethical limit on its own product and a state punishing it for doing so.
The paper did not name this counter-move. It should have.
The safety narrative counter-move — the one deployed to frame open-weight and local AI as dangerous — assumed that the argument would be made rhetorically, in lobbying and regulatory comment periods. The Anthropic case shows what happens when rhetoric runs out: the state reaches for the largest available instrument, in this case a supply chain risk designation designed for Chinese adversaries, and applies it to a domestic company that said something inconvenient.
For European organisations evaluating local AI infrastructure, the reading is this.
If a US AI company cannot maintain contractual ethical limits on its own products without federal retaliation, then the ethics of that company’s AI are only as durable as the current administration’s tolerance for them. The constitutional protection exists — the court affirmed it — but it requires litigation, legal costs, reputational exposure, and the willingness to fight. Not every company will fight. Anthropic did. Others, facing the same pressure with less financial resilience and no IPO story, will accept the terms.
This is the operational reality of building AI governance on the ethics of US cloud providers: the governance is real until it is inconvenient, and inconvenience has a price.

What Holds in Europe
The EU AI Act does not depend on Anthropic’s willingness to fight. It does not depend on Microsoft’s voluntary commitments or Google’s Responsible AI principles or any company’s conscience clause. It is statutory law with independent enforcement.
Article 5 of the AI Act prohibits AI systems that deploy subliminal techniques to manipulate behaviour, that exploit vulnerabilities, that perform real-time remote biometric identification in publicly accessible spaces outside narrowly defined exceptions, and that evaluate or classify people through social scoring. These are not contractual conditions a vendor can agree to and then renegotiate when a government customer objects. They are prohibitions. A European deployment of any AI system — from any provider — must comply with them or face enforcement.
The GDPR’s prohibition on processing special categories of personal data without explicit legal basis is similarly statutory. It does not require a vendor to assert it. The law asserts it on behalf of the data subject, regardless of what the vendor’s contract says, regardless of what the customer’s procurement team agreed to.
This is the structural difference between a conscience clause and a legal prohibition. The clause depends on the party asserting it having the resources and incentive to defend it. The prohibition exists independent of any party’s willingness to assert it.
For European organisations, running AI on European infrastructure subject to European law means the ethical limits are in the architecture. Not in the terms of service. Not in a vendor’s mission statement. Not in a conscience clause that a future administration can attack with a supply chain designation.
The Seventh Counter-Move
Update #6 named five Big Tech counter-moves. In the same piece, a sixth appeared: the Chinese open-source dominance story, used as a security framing to complicate local AI and open-weight adoption.
The Anthropic case names a seventh.
State weaponisation of security designations against companies that assert AI ethical limits is a counter-move — not against local AI specifically, but against any actor in the AI ecosystem that tries to constrain how AI is used inside state procurement. The target this week was Anthropic. The instrument was the supply chain risk label. The goal was compliance: drop the clause, accept the deployment terms, continue the contract.
The paper’s running prediction was that the structural forces driving local AI are self-reinforcing: regulation creates demand for local infrastructure, which funds hardware innovation, which lowers compliance barriers, which accelerates adoption. The conscience clause case adds a term to that equation.
State pressure on AI ethics in the US cloud ecosystem is an accelerant for European sovereign deployment. Not because European states are better. But because European statutory AI law is not removable by executive action, and the conscience clause — the ethical limit a vendor inserts to protect its product from misuse — is a much weaker instrument than a prohibition backed by a supervisory authority and fines reaching four percent of global annual turnover under the GDPR and seven percent under the AI Act.
Europe’s AI ethics are in the statute book. Anthropic’s were in the contract. The contract is in court. The statute is not.