The Pentagon Just Blacklisted Anthropic. The Court Hearing Is Tomorrow.
This might be the most important AI story nobody's paying enough attention to.
The Department of Defense has formally designated Anthropic — the company behind Claude — as a "supply chain risk." That's a classification normally reserved for foreign adversaries like Huawei. And now they're using it against an American AI company because it refused to let its technology be used for autonomous weapons and mass surveillance of U.S. citizens.
Here's how we got here.
Anthropic had a $200 million contract with the Pentagon to deploy Claude in classified systems. During implementation, the two sides hit a wall on two issues: Anthropic wouldn't allow Claude to power weapons systems that kill without human oversight, and it wouldn't allow the technology to be used for domestic surveillance at scale. CEO Dario Amodei said those uses would be "inconsistent with Anthropic's founding purpose."
The Pentagon's position? A private company doesn't get to dictate how the government uses technology in warfare.
When negotiations broke down, Defense Secretary Pete Hegseth met with Amodei in February. It didn't work. On March 4th, the Pentagon handed down the supply chain risk designation. President Trump then ordered all federal agencies to stop using Anthropic's tools.
But here's where it gets interesting. In court filings, Anthropic revealed that the very same day they got blacklisted, a Pentagon official told Amodei the two sides were "very close" to resolving their disagreements. That's a strange thing to say about a company you just labeled a national security threat.
Anthropic has now filed lawsuits against over a dozen federal agencies. They're arguing the designation violates their First and Fifth Amendment rights — that it's retaliation for their AI safety advocacy, not a legitimate security concern.
The DoD's technical argument is that Anthropic could "attempt to disable its technology" or "alter the behavior of its model" mid-operation if the company believes its ethical red lines are being crossed. Anthropic's response is straightforward: once Claude is deployed inside a classified, air-gapped military system run by a third-party contractor, Anthropic has zero access to it. No kill switch. No backdoor. Technically impossible.
The ACLU and the Center for Democracy & Technology have filed amicus briefs supporting Anthropic.
The hearing is March 24th in San Francisco, before Judge Rita Lin. Anthropic is asking for a preliminary injunction to block the blacklist.
This case is going to set a precedent either way. Can the government punish AI companies for having safety principles? Or can companies that take taxpayer money dictate what the military does with their technology? There's no easy answer here, and anyone who tells you there is hasn't thought about it hard enough.
What's your take — should AI companies get to draw lines on military use, or is that the government's call once the contract is signed?