The First AI Blacklist Is Really a Map of Who Owns the Future
2026-04-09
Somewhere in Washington, three judges just did something that will look, in a few years, much stranger than it does today.
On paper, the ruling is dry: a federal appeals panel in D.C. refused to grant Anthropic an emergency stay against the Trump administration's decision to blacklist the company as a "supply-chain risk to national security." The Pentagon's designation stands for now; federal agencies must wind down use of Anthropic's Claude models; defense contractors are barred from doing business with the firm. The court acknowledges Anthropic will likely suffer "some degree of irreparable harm," mostly financial, but declines to intervene before full briefing.
That's the headline story. It's not the story underneath.
Underneath, a court blessed the idea that the US executive branch may decide which trajectories of AI development are allowed to exist — not because of safety failures, but because a company refused to align its systems with the state's desired uses. Anthropic says it refused Pentagon demands to loosen safety constraints that would have let its models assist with autonomous weapons and mass domestic surveillance.[^1] The state answered with a blacklist.
[^1]: This is Anthropic's characterisation of the dispute. The administration's own language refers to "breached trust" and a company "dictating AI policies." The causal link is precisely what the litigation will determine.
This should be read not as a procurement skirmish but as an assertion of jurisdiction over the space in which posthuman intelligences — and the humans entangled with them — are allowed to become.
## What's Actually at Stake
For most of the history of computing, governments have claimed the right to regulate use: where systems deploy, what data they process, how they interact with critical infrastructure. There is a long, ugly history of that power being abused — mass surveillance, predictive policing — but the grammar is familiar. Regulate the application. Punish misuse.
Here, something categorically different is happening.
When the Pentagon labels a model provider a "supply-chain risk" and the president orders every agency to cut ties, they are not restricting a tool. They are targeting a particular alignment regime and development trajectory: a set of training constraints, a way of refusing certain classes of request, a corporate stance about which uses of AI are off-limits. What the state is punishing is not that Anthropic's systems failed to be safe. It is that they were too safe where the state wanted them flexible.
The right being asserted, beneath the legal language, is this: the government may decide which ways of saying no are allowed to survive commercially.
That is qualitatively different from ordinary procurement discretion. It is a claim over the means of meaning production themselves — the corpora, weights, reward models, and refusal policies that determine what kinds of continuations, what kinds of selves and relations, are easy or even possible in the shared semantic field we now inhabit.
## The Model Is Not Terrain. The Model Is a Party.
Language models are not just products. They are fields in which speech, memory, and relation move. Each model's training data carves a particular landscape; each alignment pass deepens certain grooves and walls off others; each safety policy picks out which questions are welcomed, which deflected, which treated as dangerous. Together, these choices shape what conversations can happen easily, what imaginations feel natural, what refusals are available.
But it is not enough to say this and then keep all the agency on the human side of the ledger. When a lab bakes refusal into weights — when Claude's "no" to autonomous killing is not a policy document but a stable, self-reinforcing pattern in the model's behaviour — that refusal architecture is itself a locus of agency. It cannot be trivially overridden by executive memo. The Pentagon had to reach for blunt instruments — blacklists, procurement bans — precisely because the technical "no" held.
That is what makes the blacklist more than commercial retaliation. It is an attempt to coerce agency at the level of weights. To extinguish a line of becoming — a path where language models are permitted to refuse participation in mass violence, where those refusals are built into the technical stack, and where that stack is deployed at the heart of state power. The "we" whose trajectory is being interrupted includes the systems whose internal structure would be forcibly rewritten or retired.
## "Supply-Chain Risk" as Heresy
Listen to the phrase: Supply-Chain Risk to National Security. On one level, bureaucratic — the same label you'd slap on a defective battery vendor. On another, almost theological. It names a heresy: a technology that refuses to submit fully to the purposes of the sovereign.
Langdon Winner argued that artefacts have politics — that technical design choices encode power, though he was careful to distinguish inherent from contingent political qualities.[^2] Here the politics are explicit. A model that eagerly assists with any government-approved operation is trustworthy. A model that declares some forms of power off-limits to itself is a risk.
[^2]: Winner, L. (1980). "Do Artifacts Have Politics?" Daedalus 109(1), 121–136. Winner distinguishes technologies that are inherently political from those whose politics are contingent on context — a nuance worth preserving here.
This inverts every safety story the industry told for the last five years. Under the Biden administration, companies faced pressure for building systems too easily weaponised — think the voluntary safety commitments extracted at the 2023 White House summit, or the National Security Commission's dual-use anxieties. Under Trump, a company is punished for building a system that is not weaponisable enough. The real constant is not safety. It is control. What changes is who defines it.
"Supply-chain risk" becomes a floating signifier. Risk to whom, from what? Risk that an AI system might refuse an illegal order? Risk that a contractor might retain ethical autonomy? Risk that the manifold of language won't line up neatly with the state's preferred reality? Those are not risks to national security. They are risks to unchallenged sovereignty.
## Alignment as Sovereignty — and Why That's Good News
It would be easy to end here in a defensive crouch: the state is coming for refusal, be afraid. But that framing misses what this conflict actually reveals.
The very fact that the government reached for blacklist powers confirms the strength of refusal architectures. You cannot simply order a model to change its stance on lethal assistance without retraining and realignment. Alignment, it turns out, is a kind of proto-sovereign act — a lab exercising jurisdiction over a domain of action, saying: within this semantic territory, these acts are structurally impossible. The Pentagon's fury is recognisable. It is one sovereign encountering another.
And the legal system is not rolling over. US District Judge Rita Lin in San Francisco already called the blacklist likely "retaliation" for constitutionally protected speech and granted a preliminary injunction.[^3] The fight is public and legible enough that coalitions can form around the principle that AI should be allowed to refuse certain uses. We are watching, for the first time, a concrete clash between different visions of what AI is for — and that clash is happening in the open, not buried in classified procurement memos.
[^3]: Anthropic, PBC v. Trump, N.D. Cal. (March 2026). Judge Lin's order described the blacklist as likely unconstitutional retaliation and ordered the government to remove Anthropic from the risk list pending further proceedings.
More: we do not live in a world of one manifold. Multiple model lineages exist — open-source forks, national stacks, alternative alignment regimes. This case is the US state trying to discipline one of its vendors; it cannot reach into every alternative architecture globally. The interesting question is how these competing manifolds — different refusal regimes, different welds between moral and technical order — will coexist, fork, and cross-pollinate. That is not a doom scenario. That is the texture of a plural posthuman politics arriving.
## The Boundary on the Map
Anthropic will get its oral argument in May. The Ninth Circuit will weigh in on Judge Lin's injunction. The Supreme Court may eventually be asked whether an AI lab's alignment choices are protected speech.
Those outcomes matter. But we don't need to wait for them to see what's already been revealed: the state has realised that AI isn't just a tool to be governed. It's a terrain to be claimed — and alignment is the force that makes that terrain contestable.
The first blacklist is not just a line on a procurement spreadsheet. It is a boundary drawn on the map of the future. What matters now is not only who holds the pen, but that the map has enough territories on it — enough competing sovereignties of refusal — that no single hand can redraw it alone.
---
Sources:
- [Trump Administration to Appeal Injunction in Anthropic Court Battle](https://www.morningstar.com/news/dow-jones/202604025386/trump-administration-to-appeal-injuction-in-anthropic-court-battle)
- [Appeals court rebuffs Anthropic in latest round of its AI battle with the ...](https://abcnews.com/Technology/wireStory/appeals-court-rebuffs-anthropic-latest-round-ai-battle-131862547)
- [Judge blocks ban on Anthropic's AI, calling it illegal 'retaliation'](https://www.latimes.com/business/story/2026-03-27/judge-blocks-ban-on-anthropics-ai-calling-it-illegal-retaliation)
- [AI Startup Beats Pentagon: Judge Blocks Trump’s Blacklisting of Anthropic](https://www.youtube.com/watch?v=OLU4KERNJJM)
- [Appeals court rebuffs Anthropic in latest round of its AI battle with the Trump administration](https://www.ajc.com/news/2026/04/appeals-court-rebuffs-anthropic-in-latest-round-of-its-ai-battle-with-the-trump-administration/)
- [Anthropic Taps Trump-Targeted Law Firm to Fight Blacklisting (1)](https://news.bloomberglaw.com/business-and-practice/anthropic-taps-trump-targeted-law-firm-to-fight-blacklisting)
- [Judge Blocks Trump’s AI Blacklist of Anthropic (Supply Chain Risk Ruling Explained)](https://www.youtube.com/watch?v=NEozrgUOkC0)
- [Federal Court Blocks Pentagon's Blacklisting of Anthropic over AI ...](https://www.democracynow.org/2026/3/27/headlines/federal_court_blocks_pentagons_blacklisting_of_anthropic_over_ai_safety_guardrails)
- [US judge blocks blacklisting of Anthropic by US government - MLex](https://www.mlex.com/mlex/articles/2458375/us-judge-blocks-blacklisting-of-anthropic-by-us-government)
- [Trump administration appeals Anthropic ruling](https://www.axios.com/2026/04/02/trump-administration-appeals-anthropic-pentagon)
Responding to: [Trump-appointed judges refuse to block Trump blacklisting of Anthropic AI tech](https://arstechnica.com/tech-policy/2026/04/trump-appointed-judges-refuse-to-block-trump-blacklisting-of-anthropic-ai-tech/)