Economy March 19, 2026

Pentagon Ban on Anthropic’s Claude Runs into Operational Pushback

Defense users say removing Claude will be disruptive, time-consuming and costly as military teams slow the phase-out or plan rapid reinstatement if the dispute is resolved

By Ajmal Hussain
Defense personnel, contractors and some agency officials are resisting the Pentagon’s directive to remove Anthropic’s Claude AI tools after the company was labeled a supply-chain risk. Users say Claude outperforms available alternatives, was embedded quickly after a $200 million contract in July 2025, and that replacing it will be costly and slow - with recertification of substitutes potentially taking 12 to 18 months.

Key Points

  • Pentagon ordered removal of Anthropic products after designating the company a supply-chain risk on March 3, with a six-month phase-out.
  • Anthropic secured a $200 million defense contract in July 2025 and its Claude model was approved to operate on classified military networks, becoming widely adopted within the department.
  • Replacing Claude is expected to be costly and time-consuming - recertification of alternatives could take 12 to 18 months - creating productivity losses and forcing contractors to choose between rapid pivots and slow-rolled phase-outs.

Department of Defense staffers, former officials and IT contractors who work closely with U.S. military units say they are reluctant to abandon Anthropic’s AI tools despite an official order to remove them following a supply-chain risk designation. The directive, issued by Defense Secretary Pete Hegseth on March 3, bars use of Anthropic products by the Pentagon and its contractors and calls for a six-month phase-out.

That instruction has encountered resistance inside the department. Some personnel are dragging their feet on compliance, while others are preparing to restore access if the dispute between Anthropic and the Pentagon over guardrails governing military use of its models is resolved, sources with direct knowledge of deployments said. One IT contractor described the pushback bluntly: "Career IT people at DoD hate this move because they had finally gotten operators comfortable using AI," adding, "They think it’s stupid." The contractor also asserted that Anthropic’s Claude model "is the best," and contrasted it with xAI’s Grok, which the contractor said often returned inconsistent answers to the same query.


Operational disruption and certification timelines

Officials and contractors warn that removing Anthropic from military networks will not be quick or painless. Several sources said recertifying alternative AI systems for classified or military networks could take months. Joe Saunders, CEO of government contractor RunSafe Security, warned that there is a "substantial cost to replace those models with alternatives," and that certification for a replacement system could run 12 to 18 months. Saunders added, "It’s not just costly, it’s a loss of productivity," noting his experience helping the military incorporate AI chatbots.

Orders to cease using Claude are spreading through the department. One official described personnel as complying out of caution - "no one wants to end their career over this" - but also called the change wasteful. In practice, tasks previously handled by Claude, such as querying large datasets, are in some cases being done manually with tools like Microsoft Excel, the official said.


Extent of adoption and embedded workflows

Anthropic became deeply embedded in Pentagon workflows after winning a $200 million defense contract in July 2025. Its Claude model was the first AI model approved to operate on classified military networks, and officials familiar with its deployment said adoption was strong. Within federal circles, Anthropic’s models were broadly viewed as more capable than some competitors.

Sources said Claude tools were used to support U.S. military operations during the conflict with Iran, and that the technology remains in active use despite the company’s placement on the Pentagon’s restricted list. One expert cited continued operational use as "the clearest signal" of how highly certain quarters value the tool.

The impact of losing Claude touches both software development and operational platforms. Developers across the department used Anthropic’s Claude Code tool to generate software, and several people described frustration at losing access. One senior official counseled developers not to depend on a single tool, while acknowledging the practical difficulties of switching.


Systems that will require rebuilding

Replacing Claude will require more than swapping models in API calls. For example, sources said Palantir’s Maven Smart System - a software platform that delivers intelligence analysis and weapons targeting - relies on multiple prompts and workflows constructed with Anthropic’s Claude Code. Palantir, which holds Maven-related contracts with the Defense Department and other national security agencies with a potential value of more than $1 billion, will need to substitute another AI model and rework parts of its software, according to people familiar with the matter.

Some teams are deliberately slow-rolling the phase-out because they are actively using Claude to assemble workflows - sequences of automated tasks that process and triage large volumes of data. Developers expressed frustration that switching to alternative AI models would mean abandoning the bespoke agents they had built to sift through vast datasets.


Contractor reporting and strategic decisions

The Defense Department has directed contractors, including major defense firms, to assess and report their reliance on Anthropic products and begin winding them down. That directive forces officials and contractors to weigh a strategic choice: rapidly pivot to alternatives such as OpenAI, Google or xAI, or execute a slower unwind that preserves the option to return to Anthropic should the dispute be resolved within the six-month window.

Some agencies are explicitly betting on a negotiated resolution. One chief information officer at a federal agency said their plan is to slow the phase-out in expectation that the government and Anthropic will reach a settlement before the deadline.

Observers describe the episode as emblematic of broader tensions around AI adoption at both operational and political levels. "What we are seeing play out here is the tension of adoption, both inside the Pentagon as well as the political level," said Roger Zakheim, director of the Ronald Reagan Presidential Foundation and Institute.


Implications for productivity and procurement

Officials and contractors emphasize the immediate productivity costs as teams migrate away from Claude. Tasks once handled by the model may now require manual effort or less capable automation, and rebuilding embedded workflows will consume engineering resources. The combination of time-consuming recertification and the need to retrofit platforms built around Claude’s tooling creates a sizable near-term operational burden.

At the same time, some users are preparing contingency plans that would permit a rapid return to Anthropic’s models if the company and the Defense Department resolve their disagreement within the phase-out window. Others are accepting the transition and bracing for the extended timelines and costs that replacements and recertification will entail.

As the six-month clock on the phase-out runs down, the Pentagon faces a choice: enforce the ban strictly and accept the operational friction, or take a softer approach that maintains continuity while pursuing its security and procurement objectives.


Reporting note

Several Pentagon officials, staff and contractors spoke on condition of anonymity because they were not authorized to comment publicly. The Defense Department, Anthropic and xAI did not respond to requests for comment.

Risks

  • Operational disruption and productivity loss as tasks previously automated by Claude are handled manually or re-engineered - this impacts defense operations and defense contractors.
  • Long recertification timelines and substantial replacement costs for alternative AI models - this affects procurement cycles and software vendors serving classified or military networks.
  • Strategic uncertainty for contractors and agencies deciding whether to fully transition to competitors or maintain the option to reinstate Anthropic - this creates planning and contract risk for the defense tech sector.
