World March 3, 2026

Pentagon Official Warns Commercial AI Contract Terms Could Halt Military Operations

Senior defense research chief says limits in vendor agreements risk interrupting planning and execution of combat missions

By Derek Hwang

A top Pentagon research official told industry leaders that broad operational restrictions embedded in some commercial AI contracts could incapacitate U.S. military missions in real time, potentially stopping planning or execution if terms of service were seen to be violated. The remarks, delivered at a Washington security technology summit, came amid tensions between the Defense Department and AI firms over allowable uses of their systems.

Key Points

  • A senior Pentagon official said commercial AI contracts include broad operational restrictions that could interrupt planning and execution of combat operations - sectors affected: defense, national security.
  • The contracts reviewed covered commands responsible for air operations over Iran, China and South America and contained dozens of restrictions - sectors affected: defense, aerospace.
  • Anthropic's Claude was reportedly the only AI model on the Defense Department's classified systems at the time of the review; disputes with Anthropic preceded similar agreements with OpenAI - sectors affected: defense, technology.

A senior Pentagon official said on Tuesday that commercial artificial intelligence contracts signed during the Biden administration contain sweeping operational limits that could cripple U.S. military missions if enforced in real time, including the ability to plan and carry out combat operations.

Emil Michael, the under secretary of defense for research and engineering, recounted what he described as a moment of alarm when he examined the contractual terms that govern AI models already integrated into some of the Defense Department's most sensitive commands. He did not identify the AI provider whose contracts he reviewed.

Michael made his remarks at the American Dynamism Summit in Washington, a gathering that brings together technology companies active in space and national security projects. His comments came just days after a dispute over how the Pentagon could use Anthropic's AI tools led President Donald Trump to bar the startup from government work and call it a national security risk.

"I had a 'holy, holy cow' moment," Michael said at the summit. He described contractual language that, in his view, would prevent planners from moving forward if a proposed operation might "potentially lead to kinetics" or explosions. He said he found dozens of restrictions in agreements that covered commands responsible for air operations over Iran, China and South America.

According to Michael, the contracts were drafted in a way that could allow an AI model to stop functioning mid-operation if an operator were judged to have violated the terms of service. He warned that such an outcome could leave personnel unable to complete time-sensitive planning and execution tasks.

At the time Michael reviewed the contracts, Anthropic's Claude was reportedly the only AI model available to the Defense Department on its classified systems. Michael said his concerns intensified after a senior executive at an unnamed AI company questioned whether its software had been used in what Michael called one of the most successful military operations in recent memory.

News reports have indicated that Anthropic's Claude was used to help plan the U.S. government raid that captured former Venezuelan President Nicolas Maduro in January. Michael framed that example while stressing the broader operational risk posed by restrictive contractual terms.

"What we're not going to do is let any one company dictate a new set of policies above and beyond what Congress has passed," Michael said.

Michael's disclosures may shed light on the recent confrontation between the Defense Department and Anthropic. Defense Secretary Pete Hegseth labeled the company a "supply-chain risk" after negotiations over restrictions on autonomous weapons and mass surveillance stalled, and Trump moved to exclude Anthropic from government business.

Hours after that escalation, rival OpenAI reached an agreement with the Pentagon. A statement from OpenAI CEO Sam Altman suggested that the department had accepted similar restrictions on the use of OpenAI's models.


The concerns highlighted by Michael raise questions about how commercial AI licensing terms interact with military requirements for continuous, reliable system availability and operational control. The official's account underscores tensions between the Defense Department's need for flexible tools in fast-moving operations and the constraints some vendors place on the use of their technology.

Risks

  • Operational interruption risk: Contractual clauses could allow an AI model to stop mid-operation if terms of service are deemed violated, posing immediate mission risk - impacts defense and national security operations.
  • Supply-chain and vendor dependence risk: The Pentagon faces tensions with AI vendors over acceptable restrictions, with officials calling some firms a "supply-chain risk," which could affect procurement and readiness - impacts defense contracting and technology suppliers.
  • Policy and legal uncertainty: Conflicts between vendor-imposed policies and statutory Congressional frameworks may create uncertainty for defense use of AI models - impacts defense policy and compliance teams.
