The impasse between the U.S. Defense Department and artificial intelligence developer Anthropic is set to reach a critical moment by 5:01 p.m. (2201 GMT) on Friday. At stake are core questions about how powerful AI might be used in military operations and how companies should manage the attendant risks.
The dispute has unfolded over several months and now operates against a firm deadline set by Pentagon negotiators. The department is pressing AI firms to accept contract language that would allow any lawful use of their systems, while Anthropic has insisted on preserving guardrails that bar certain military applications, notably autonomous weapons and domestic surveillance.
High-level debate and political attention
The standoff has drawn commentary from a range of figures, reflecting broader unease about how advanced AI systems could be deployed. Chris Miller, a former acting secretary of defense, framed the outcome as a decisive test for companies that claim they want to apply AI responsibly, saying: "It's a shot across the bow about the future of artificial intelligence and its use on the battlefield." Miller added the outcome will "be an acid test for those companies that claim to want to use AI humanely."
Political leaders have voiced related concerns. Democratic Senator Elissa Slotkin said during a confirmation hearing that most people do not support allowing weapons systems to operate in war and kill people without some form of human oversight. She also stated that she did not believe Americans - whether Democrat or Republican - want mass surveillance of U.S. citizens.
The Pentagon has disputed the framing of the issue as a binary choice between unconstrained military use and excessive private-sector restrictions. Pentagon chief spokesperson Sean Parnell posted on X that the department has no interest in using AI for mass surveillance of Americans - which he noted is illegal - nor in developing autonomous weapons that operate without human involvement. In public comments during the negotiations, the Pentagon has also referred to itself as the Department of War, the name used by the prior administration.
Contract talks and the sticking points
Over the past year the Pentagon has signed agreements capped at $200 million with several leading AI labs, including Anthropic, OpenAI and Google. In renegotiating the contract language, the department has sought to replace company usage policies with a clause allowing any lawful use of the technology.
Anthropic has maintained what it characterizes as red lines: it will not permit its Claude models to be used for fully autonomous weapons or for domestic surveillance. The startup was among the first in the field to handle classified information, through a supply arrangement routed via a cloud provider. Anthropic's chief executive, Dario Amodei - who left another AI lab in 2020 over concerns about stewardship of the technology - has warned that AI has outpaced legal frameworks and that powerful systems could aggregate disparate materials to gather intelligence on unwitting civilians. He has cautioned that, in narrow cases, AI could undermine democratic values, even though military decisions rest with the Department of War rather than with private companies.
Amodei met with Defense Secretary Pete Hegseth this week. Following that meeting the Pentagon circulated revised contract language intended to signal movement toward compromise. Anthropic's response was that the revised language "made virtually no progress" and would permit safeguards to be ignored.
Business and supply-chain consequences
Anthropic faces concrete business risks if the parties fail to reach agreement. The Pentagon has warned it will terminate work with the company and may designate it a supply-chain risk. Such a designation, typically reserved for suppliers tied to adversary nations, can bar defense contractors from using the company’s technology in Pentagon projects.
That potential designation has prompted the Pentagon to request assessments from prime contractors, including Lockheed Martin, on their dependence on Anthropic ahead of any formal supply-chain risk determination. The broader defense industrial base comprises many thousands of contractors, a fact the Pentagon has emphasized as it weighs the downstream impact of any supplier restriction.
The department also made a second, more forceful threat that some legal experts have questioned. A senior Pentagon official said that if Anthropic refuses the department's terms, the secretary of war would see that the Defense Production Act is invoked against the company, compelling it to provide its services to the Pentagon regardless of its consent.
Negotiation status and implications
For now, the two sides remain at an impasse. The Pentagon has framed its position as allowing any lawful use while disclaiming interest in illegal domestic surveillance or fully autonomous weapons without human oversight. Anthropic has said its guardrails are necessary because the technology can be misapplied and because legal protections have not kept up with rapid technical progress.
The outcome by the Friday deadline will affect Anthropic's ability to continue working with the Pentagon and could influence how the defense sector sources AI capabilities. It will also serve as an indicator for other AI vendors about how far the Pentagon will press for contract terms that favor unrestricted lawful military applications.
Summary
The Pentagon and Anthropic are locked in a standoff over contract language that would allow any lawful use of AI systems. Anthropic wants to retain its restrictions on autonomous weapons and domestic surveillance; the Pentagon wants company usage policies replaced by an all-lawful-use clause. The dispute has prompted threats of contract termination and a potential supply-chain risk designation for Anthropic, and a senior official said the Defense Production Act could be invoked to compel the company's participation if it refuses to comply.
Key points
- The Pentagon has set a 5:01 p.m. (2201 GMT) Friday deadline to resolve the dispute with Anthropic.
- Anthropic resists removing guardrails that bar fully autonomous weapons and domestic surveillance; the Pentagon is pushing for an all-lawful-use clause that would supersede company usage policies.
- Potential impacts span the defense contracting base and related supply chains, as the Pentagon could bar contractors from using Anthropic if it designates the company a supply-chain risk.
Risks and uncertainties
- Operational risk to Anthropic's business - The Pentagon warned it could terminate work with Anthropic and declare it a supply-chain risk, which would limit the company's access to defense contracts.
- Legal and coercive risk - A senior Pentagon official indicated the department could invoke the Defense Production Act to compel Anthropic's services, a move whose legality some experts have questioned.
- Policy and governance uncertainty - Divergent views remain on acceptable limits for military use of AI, and the dispute highlights unresolved tension between company-imposed guardrails and government demands for broad lawful-use rights.