Anthropic's chief is pushing back on a Pentagon demand to drop AI safeguards, and this is where the stakes get real. If you're curious about how tech and national security collide, keep reading.
Anthropic is resisting a government request to lower or remove safety measures from its AI tools, including Claude. The core concern is twofold: the potential use of these tools for "mass domestic surveillance" and for fully autonomous weapons. Anthropic co-founder and CEO Dario Amodei said such use cases have never been part of the company's contracts with the Department of War and should not be added now. The Department of War is the name applied to the U.S. Defense Department under an executive order President Donald Trump issued in September.
Amodei indicated that if the Department of Defense decides to drop Anthropic as a supplier, the company will help ensure a smooth transition to another provider. A Defense Department spokesperson could not be reached for comment. Previously, a Pentagon official told the BBC that if Anthropic refused to comply, Defense Secretary Pete Hegseth could invoke the Defense Production Act to pressure the company. The act allows the president to designate a company or its product as essential to national defense, enabling the government to mandate supply.
Hegseth also threatened to designate Anthropic as a "supply chain risk," a label that would mark the company as insufficiently secure for government use. A former DoD official, speaking on condition of anonymity, described Hegseth's rationale for these measures as "extremely flimsy."
Amodei did not spell out exactly which DoD use cases would constitute mass surveillance or autonomous weapons. In a company blog post, he highlighted a broader concern: AI can combine many seemingly harmless data points into an automated, comprehensive portrait of an individual's life, at large scale.
But Amodei acknowledged certain legitimate uses: the company supports AI-enabled lawful foreign intelligence and counterintelligence missions. He insisted that deploying such systems for mass domestic surveillance would clash with democratic values.
On weapons, Amodei argued that even today’s most capable AI systems aren’t reliable enough to power fully autonomous weapons. He stated, unequivocally, that Anthropic will not provide a product that could endanger U.S. warfighters or civilians. He emphasized the need for guardrails and oversight that simply aren’t in place today.
Amodei added that Anthropic had offered direct collaboration with the Department of War on research and development to improve reliability, but the offer hadn’t been accepted.
According to the BBC, Hegseth has pressed for a meeting with Amodei on Tuesday.
Would you side with strong safety guarantees even if they limit access to powerful AI, or do you think safeguards hinder innovation and national security? Share your thoughts in the comments.