A confrontation between Anthropic, the developer of the Claude family of large language models, and the U.S. Department of Defense began in January and escalated through February and early March after Anthropic refused a Pentagon request to remove or loosen safeguards on its systems.
The Pentagon subsequently designated the company a "supply-chain risk," a move that jeopardized Anthropic's government contracts and prompted senior executives at the AI lab to warn that the designation could shave billions from the company's projected 2026 revenue. Anthropic has characterized the action as retaliatory, saying it stems from the company's refusal to allow its technology to be used for autonomous weapons or for domestic surveillance within the United States.
Legal experts cited in the dispute have questioned whether the statute the Pentagon invoked actually covers Anthropic's conduct. Analysts have also pointed to apparent contradictions in the Pentagon's own behavior and said the evidence could suggest the decision was motivated by animus rather than legitimate security concerns. Anthropic has moved to challenge the designation in court.
Timeline of the dispute
- January 29 - Friction emerged after the Pentagon asked Anthropic to remove safeguards that, if lifted, would allow the government to use the company's technology for autonomous weapons targeting and for domestic surveillance in the United States.
- February 11 - The Pentagon pushed multiple AI firms, including Anthropic, to make their tools available in classified settings without many of the standard restrictions those companies otherwise impose on customers.
- February 14 - The Department of Defense considered severing ties with Anthropic because the company insisted on retaining some limits on how the U.S. military could employ its models.
- February 23 - U.S. Defense Secretary Pete Hegseth summoned Anthropic CEO Dario Amodei to the Pentagon for discussions focused on the potential military applications of Claude.
- February 24 - Pentagon officials told Anthropic to comply with their requests or face consequences, including being labeled a supply-chain risk.
- February 25 - The Pentagon asked defense contractors, among them Boeing and Lockheed Martin, to evaluate how dependent they were on Anthropic's technology.
- February 26 - Pentagon spokesperson Sean Parnell formally requested that Anthropic permit the department to use its technology for all lawful purposes and gave the company a deadline of 5:01 p.m. ET on February 27 to respond.
- February 26 - Anthropic announced it would not comply with the Pentagon's request to remove safeguards from its AI models.
- February 27 - President Donald Trump directed every federal agency to immediately stop using Anthropic's technology.
- February 27 - Secretary Hegseth ordered the Department of Defense to designate Anthropic as a "supply-chain risk to national security."
- February 27 - Anthropic stated that it would challenge the Pentagon's decision through the courts.
- February 27 - OpenAI announced an agreement to deploy its technology inside the Department of Defense's classified network.
- February 28 - OpenAI said its deal with the Pentagon included explicit limits: its technology may not be used for mass domestic surveillance in the United States, may not direct autonomous weapons systems, and may not be used for high-stakes automated decisions.
- March 2 - The U.S. Departments of State, Treasury and Health and Human Services moved to stop using Anthropic's Claude.
- March 3 - Lockheed Martin said it would follow the Department of Defense's direction; legal experts said the move signaled that defense contractors were likely to strip Anthropic's tools from their supply chains to protect their federal contracts.
- March 4 - U.S. Treasury Secretary Scott Bessent told CNBC that the Treasury would remove Anthropic from its government systems within days.
- March 4 - A major technology industry group urged de-escalation, warning that labeling a firm a supply-chain risk injects uncertainty for companies and could risk the military's access to leading products and services.
- March 5 - The Department of Defense formalized the decision and designated Anthropic as a supply-chain risk.
- March 6 - Amazon said it was assisting customers in transitioning Department of Defense workloads to alternative models on its cloud platform, while noting that customers and partners could continue to use Claude for all non-Pentagon workloads.
- March 6 - The U.S. General Services Administration drafted stringent rules for civilian artificial-intelligence contracts and terminated Anthropic's OneGov agreement, which had made Claude available to the federal government.
- March 9 - Anthropic filed suit seeking to prevent the Pentagon from placing it on a national security blacklist, arguing the designation is unlawful and violates the firm's free speech and due process rights.
- March 9 - Anthropic executives warned that the U.S. government's blacklisting could reduce the company's 2026 revenue by multiple billions of dollars and inflict reputational damage.
- March 10 - Microsoft filed a legal brief in support of Anthropic's lawsuit, saying the DoD designation directly affects Microsoft and that a temporary restraining order is necessary to avoid costly supplier disruptions and rushed rebuilding of products that rely on Anthropic's technology.
The dispute has produced swift shifts across government agencies, defense contractors and cloud providers. Several federal departments and agencies have moved to stop using Anthropic's Claude in their systems, while some defense contractors signaled they would comply with the DoD directive to avoid risking their own contracts. Cloud providers have offered to transition Pentagon workloads to alternative models while keeping non-DoD usage options intact.
Legal commentary on the case highlighted three central lines of critique: potential mismatch between the statute invoked and Anthropic's actions, internal inconsistencies in the Pentagon's conduct, and indications that the decision may have been influenced by animus rather than an objectively assessed security threat. Anthropic has pursued judicial relief and secured public backing from at least one major technology company, which has asked the court for temporary protection to prevent immediate and disruptive supply-chain consequences.
At stake for Anthropic are both near-term commercial relationships with the federal government and longer-term reputational effects. Company leaders have quantified the potential financial exposure, estimating that the loss of federal and contractor business could cut projected 2026 revenue by billions of dollars. The DoD has emphasized national-security considerations in its actions, while industry groups and some private-sector participants have urged a more measured approach to avoid undermining the military's access to advanced AI tools.
The legal proceedings and operational changes are ongoing, and the case is unfolding both in the courts and across federal procurement channels as agencies, contractors and cloud providers adapt to the designation and related guidance.