A senior Pentagon official said on Tuesday that commercial artificial intelligence contracts signed during the Biden administration contain sweeping operational limits that, if enforced in real time, could cripple U.S. military missions, including the ability to plan and carry out combat operations.
Emil Michael, the under secretary of defense for research and engineering, recounted what he described as a moment of alarm when he examined the contractual terms that govern AI models already integrated into some of the Defense Department's most sensitive commands. He did not identify the AI provider whose contracts he reviewed.
Michael made his remarks at the American Dynamism Summit in Washington, a gathering that brings together technology companies active in space and national security projects. His comments came just days after a dispute over how the Pentagon could use Anthropic's AI tools led President Donald Trump to bar the startup from government work and call it a national security risk.
"I had a 'holy, holy cow' moment," Michael said at the summit. He described contractual language that, in his view, would prevent planners from moving forward if a proposed operation might "potentially lead to kinetics" or explosions. He said he found dozens of restrictions in agreements that covered commands responsible for air operations over Iran, China and South America.
According to Michael, the contracts were drafted in a way that could allow an AI model to stop functioning mid-operation if an operator were judged to have violated the terms of service. He warned that such an outcome could leave personnel unable to complete time-sensitive planning and execution tasks.
At the time Michael reviewed the contracts, Anthropic's Claude was reportedly the only AI model available to the Defense Department on its classified systems. Michael said his concerns intensified after a senior executive at an unnamed AI company questioned whether its software had been used in what Michael called one of the most successful military operations in recent memory.
News reports have indicated that Anthropic's Claude was used to help plan the U.S. government raid that captured former Venezuelan President Nicolas Maduro in January. Michael framed that example while stressing the broader operational risk posed by restrictive contractual terms.
"What we're not going to do is let any one company dictate a new set of policies above and beyond what Congress has passed," Michael said.
Michael's disclosures may shed light on the recent confrontation between the Defense Department and Anthropic. Defense Secretary Pete Hegseth labeled the company a "supply-chain risk" after negotiations over restrictions on autonomous weapons and mass surveillance stalled, and Trump moved to exclude Anthropic from government business.
Hours after that escalation, rival OpenAI reached an agreement with the Pentagon. A statement from OpenAI CEO Sam Altman suggested that the department had accepted similar restrictions on OpenAI's models.
The concerns highlighted by Michael raise questions about how commercial AI licensing terms interact with military requirements for continuous, reliable system availability and operational control. The official's account underscores tensions between the Defense Department's need for flexible tools in fast-moving operations and the constraints some vendors place on the use of their technology.