Defense Secretary Pete Hegseth gave Anthropic until Friday to remove AI safeguards for military use or risk losing its Pentagon contract, escalating tensions over ethics, surveillance and autonomous weapons.

Defense Secretary Pete Hegseth (Photo: Reuters)
US Defense Secretary Pete Hegseth has given artificial intelligence (AI) company Anthropic a Friday deadline to loosen safeguards on its AI systems for unrestricted military use, warning the company could risk losing its Pentagon contract if it refuses, according to people familiar with the matter.
The development, first reported by Axios, was confirmed to The Associated Press by a person familiar with a meeting between Hegseth and Anthropic CEO Dario Amodei.
Anthropic, which makes the chatbot Claude, remains the only major AI company approved for classified military networks that has resisted supplying its technology to a new internal Defense Department system.
According to the person familiar with the meeting, Hegseth made clear that companies supplying the military must not impose built-in ethical limits that restrict lawful operations. In a January speech at Elon Musk's SpaceX facility in South Texas, Hegseth said he was dismissing AI systems that "won't allow you to fight wars."
He outlined a vision for Pentagon systems that operate without ideological constraints that limit lawful military applications, adding pointedly that the department's AI "will not be woke."
Pentagon officials warned Anthropic that it could be designated a supply chain risk or face action under the Defense Production Act, which would give the military broader authority over how its products are used, according to two officials who spoke to The Associated Press.
Amodei did not budge on what he has described as red lines: fully autonomous military targeting and domestic surveillance of US citizens. In an essay last month, Amodei warned of the risks of unchecked AI power, writing that a powerful AI looking across billions of conversations from millions of people could "gauge public sentiment, detect pockets of disloyalty forming, and stamp them out before they grow."
Anthropic has long branded itself as more safety-focused than its rivals, particularly since its founders left OpenAI in 2021. The Pentagon, however, argues that lawful military authority, not corporate policy, should determine how AI tools are deployed.
A senior Pentagon official told The Associated Press that the department has issued only lawful orders and that any legal responsibility for use of the tools rests with the military, not the AI provider.
The dispute comes as the Defense Department accelerates its AI rollout. Last summer, it awarded contracts worth up to $200 million each to Anthropic, Google, OpenAI and Elon Musk’s xAI. While Anthropic was first cleared for classified networks and works with partners like Palantir, the others are currently operating in unclassified settings.
Earlier this year, Hegseth highlighted only xAI and Google as core partners. He announced that Musk’s chatbot Grok would join the Pentagon’s internal AI network, GenAI.mil, just days after Grok faced scrutiny for generating sexualized deepfake images without consent.
OpenAI has also joined the secure AI platform, allowing service members to use a customized version of ChatGPT for unclassified work.
The clash reflects a broader national debate over AI’s role in warfighting and surveillance. Owen Daniels of Georgetown University’s Center for Security and Emerging Technology told The Associated Press that Anthropic’s stance could cost it influence as the Pentagon pushes ahead.
Published By: Nitish Singh
Published On: Feb 25, 2026