Anthropic Challenges Pentagon’s Blacklisting in Federal Lawsuits

Anthropic, the AI technology firm known for its assistant Claude, has initiated legal action against the Trump administration following a controversial decision to blacklist the company on national security grounds. On Thursday, the Pentagon classified Anthropic as a “supply chain risk” due to its refusal to permit unrestricted military applications of its technology. This decision has ignited a public dispute regarding the potential military use of Anthropic’s AI chatbot.

In response to this designation, Anthropic filed two lawsuits on Monday, one in a federal court in California and another in the federal appeals court in Washington, DC. Each lawsuit aims to contest different aspects of the Pentagon’s measures against the company. The filings assert that “these actions are unprecedented and unlawful,” emphasizing that the Constitution does not allow the government to punish a firm for its protected speech. Anthropic’s legal approach seeks to address what it describes as a campaign of retaliation by the Executive branch.

The Pentagon has refrained from commenting on the ongoing litigation, maintaining its policy of silence during legal disputes. Anthropic, backed by major investors including Alphabet and Amazon, has maintained a firm stance against the use of its technology for mass surveillance or fully autonomous weapons. US Defense Secretary Pete Hegseth had previously warned that the company would face consequences if it did not agree to “all lawful uses” of Claude.

President Donald Trump also weighed in, stating he would instruct federal agencies to cease using Claude, although he allowed the Pentagon six months to phase out the AI assistant, which is integrated into classified military systems, including those utilized during the Iran conflict. The move to designate Anthropic as a supply chain risk marks the first instance of the federal government applying such a designation to a domestic company.

Anthropic has argued that the government's actions primarily affect military contractors that use Claude for defense-related projects. The company, recently valued at approximately $380 billion (£284 billion), projects that most of its anticipated $14 billion (£10.5 billion) in revenue for the year will come from businesses and government agencies employing Claude for tasks such as computer coding.

In the past year, the Department of Defense has signed agreements worth up to $200 million with leading AI labs, including Anthropic, OpenAI, and Google. Shortly after the Pentagon’s move to blacklist Anthropic, Microsoft-backed OpenAI announced a partnership with the US military to deploy its technology.

As the legal battle unfolds, the stakes extend well beyond Anthropic itself. The outcome of these lawsuits may determine how far the government can go in penalizing companies that place limits on military use of their technology, and how AI firms navigate the intersection of national security and commercial development.