Federal Judge Blocks Trump's Anthropic Blacklist and Ban

Anthropic won a key early court victory after a federal judge in San Francisco granted the AI startup’s request for a preliminary injunction, temporarily blocking the Trump administration from enforcing actions that blacklisted the company and restricted federal agencies from using its Claude models.

US District Judge Rita Lin found that Anthropic had shown a likelihood of success on core parts of its case, writing that the government’s actions appeared more punitive than security-driven. Reuters reported that Lin said punishing Anthropic for drawing public attention to the government’s contracting position appeared to be illegal First Amendment retaliation.

The order bars the administration, for now, from implementing or enforcing President Donald Trump’s directive against Anthropic and from advancing the Pentagon’s effort to treat the company as a national security supply chain risk. Reuters reported that the ruling is stayed for seven days to allow the government time to appeal.

The dispute began after Anthropic refused, during Pentagon negotiations, to fully remove safety restrictions from Claude. The company said it would not agree to uses tied to fully autonomous weapons without human supervision or to mass surveillance of Americans, even as it remained open to broader government work.

Trump then moved in late February to order federal agencies to stop using Anthropic’s technology, while Defense Secretary Pete Hegseth separately labeled the company a supply chain risk, a designation that could force defense contractors to avoid Claude in military work. Anthropic argued that this was the first time such a label had been publicly used against an American company in this way.

The stakes are high because Anthropic has become an important AI vendor to the US government. The company had a $200 million Pentagon contract and had already deployed models across Defense Department classified networks before the relationship broke down over usage terms.

The Trump administration relied on separate legal authorities for the Pentagon blacklist and for wider federal procurement restrictions, forcing Anthropic to challenge them in different courts. A separate case tied to civilian government contracting is still moving forward in Washington.
