Gate News, April 23 — Anthropic submitted a filing to the U.S. Court of Appeals for the D.C. Circuit stating that once its AI models are deployed in Pentagon environments, the company has neither visibility into nor technical means of controlling or shutting down the models, and that no “kill switch” exists.
The filing is the latest development in Anthropic’s dispute with the Pentagon over a “supply chain risk” designation. The Department of Defense labeled Anthropic a supply chain risk in March, citing the company’s alleged improper interference with how its technology is used in sensitive military operations. The core contention centers on Anthropic’s usage policies, which prohibit Claude from being used for autonomous weapons or mass surveillance—restrictions the Pentagon characterizes as “obfuscation.”
The litigation has resulted in a split decision: a Washington court rejected Anthropic’s request to suspend the supply chain risk label, while a California court approved it. In practical terms, Anthropic cannot bid on new Pentagon contracts but may continue serving other government agencies. Meanwhile, the Trump administration is pushing deployment of Anthropic’s new model, Mythos, across federal agencies, with officials exploring how to use it to defend against cyberattacks—a stance that contradicts the Pentagon’s position that Anthropic poses a national security risk. The next hearing is scheduled for May 19.
Related News
Anthropic’s weapons-grade cybersecurity model Mythos was accessed without authorization: how was it done?
SlowMist CISO issues alert: ShinyHunters claims to have breached Anthropic’s internal systems
Anthropic confirms an investigation: Claude Mythos Preview appears to have been accessed without authorization