Stock Markets March 9, 2026

Anthropic takes legal fight to court, says government retaliation followed refusal to lift Claude safety limits

AI company alleges Pentagon demanded removal of safeguards, prompting a White House-ordered ban and a multi-pronged lawsuit

By Avery Klein
Anthropic has filed suit against the U.S. government, saying the administration retaliated after the company refused a Defense Department demand to remove safety restrictions on its Claude model. The company says it modified Claude for classified use, declined to permit deployment for lethal autonomous weapons or mass domestic surveillance, and was later branded a supply-chain risk following an ultimatum and a presidential directive. Anthropic has also challenged a separate authority in the D.C. Circuit.

Key Points

  • Anthropic alleges the Pentagon demanded removal of Claude’s usage limits and that the company refused to permit lethal autonomous weapons without human oversight and mass surveillance of Americans.
  • Following a February ultimatum, President Trump directed agencies to halt Anthropic’s technology and senior defense officials labeled the company a supply-chain risk, prompting multiple agencies to sever ties.
  • Anthropic has filed suit alleging violations of the Administrative Procedure Act, the First and Fifth Amendments, overreach of presidential authority, and improper agency sanctions; it also filed a related appeal in the D.C. Circuit.

Summary

Anthropic has lodged a lawsuit against the U.S. government, alleging that officials escalated a dispute over usage limits on its Claude model into punitive actions that include a government-wide ban and a supply-chain risk designation. The company says it cooperated extensively with defense networks, tailored a version of Claude for classified military use and offered to help transition work if agreements could not be reached. According to Anthropic, the disagreement arose when the Department of Defense demanded removal of many of Claude’s usage restrictions, and the company refused to permit the model to be used for lethal autonomous warfare without human oversight or for mass surveillance of Americans. The firm says those two limitations were non-negotiable because Claude had not been tested for such purposes and could not be used safely for them.


What Anthropic alleges

Anthropic states it invested years developing Claude into what it describes as the government’s most-deployed frontier AI model, including through a specialized deployment called "Claude Gov" for classified networks. The company says it relaxed numerous standard restrictions on Claude to enable national security work, but that a critical rupture occurred in the fall of 2025 during negotiations over integration with the Pentagon’s GenAI.mil platform.

According to Anthropic’s complaint, the Department of Defense insisted the company abandon its usage policy and permit Claude to be used for, in the department’s words, "all lawful uses." Anthropic says it largely agreed with this framing but declined two specific requests on the grounds that those uses were unsafe and untested: first, use of Claude in lethal autonomous warfare without human oversight; and second, use of Claude for mass surveillance of Americans. The company also states it offered to assist in transferring the relevant work to another provider if a mutually acceptable arrangement could not be reached.


Conflicting accounts about the dispute’s origins

The Pentagon, however, has offered a different chronology. The department’s chief technology officer has publicly said tensions increased after a U.S. raid in Venezuela, after which an Anthropic executive reportedly called a contact at Palantir to ask whether Claude had been used in the operation. That episode is referenced by Pentagon officials but is not included in Anthropic’s court filing.


From ultimatum to an agency-wide ban

Anthropic says the situation escalated when Secretary of Defense Pete Hegseth met with Anthropic’s CEO Dario Amodei on February 24 and presented a four-day ultimatum: comply with the DoD’s demands or face one of two consequences - compelled compliance under the Defense Production Act or exclusion from the defense supply chain as a "national security risk." Anthropic reports that Amodei publicly rejected the demand on February 26.

According to the complaint, the next day - before a 5:01 p.m. Eastern deadline had passed - President Donald Trump posted on Truth Social an order for every federal agency to immediately stop using Anthropic’s technology, characterizing the company as a "RADICAL LEFT, WOKE COMPANY." Hours later, Secretary Hegseth announced on X that Anthropic was a "Supply-Chain Risk to National Security" and stated that no military contractor or supplier could do commercial business with the company.

Government agencies moved rapidly, Anthropic says. The complaint lists agencies that severed ties: the General Services Administration canceled a government-wide contract, and the Treasury Department, State Department and the Federal Housing Finance Agency publicly cut ties with Anthropic.

Anthropic’s filing further alleges that, hours after the ban took effect, the Pentagon launched a major air attack on Iran using the company’s tools.


Administration response

The White House responded through spokeswoman Liz Huston, who is quoted in public statements as saying the administration would not allow a company to "jeopardize our national security by dictating how the greatest and most powerful military in the world operates." She added that U.S. forces would "never be held hostage by the ideological whims of Big Tech leaders" and would follow the Constitution rather than "any woke AI company’s terms of service."


Why Anthropic went to court

Anthropic contends the supply-chain risk designation lacks factual basis. The company points to existing security credentials and prior government endorsements as evidence contrary to the designation, citing FedRAMP authorization, active security clearances and years of positive government feedback. The complaint also notes that Secretary Hegseth had previously described Claude’s capabilities as "exquisite" at the February 24 meeting.

The company reports that two senior Pentagon officials later told reporters they saw "no evidence of supply-chain risk" and described the designation as "ideologically driven." Anthropic has advanced five legal claims in its suit, arguing the government’s actions violated the Administrative Procedure Act, the First Amendment and the Fifth Amendment, exceeded the president’s statutory authority, and ran afoul of the APA’s prohibition on unauthorized agency sanctions.

In addition to the primary lawsuit, Anthropic has filed a related challenge in the U.S. Court of Appeals for the D.C. Circuit, contesting a separate legal authority the government invoked.


Legal and market implications noted in the complaint

Anthropic frames its filing as a pushback against what it characterizes as punitive government action following a refusal to remove safety limitations. The complaint links the factual record it presents - including security certifications, past cooperation with defense networks, internal discussions and public statements - to its legal claims alleging procedural and constitutional violations.

Where the record is disputed, Anthropic’s filing documents its version of events and cites conflicting public accounts offered by Pentagon officials. The company’s suit asks the courts to review both the procedural basis for the government’s sanctions and the constitutional issues it raises.


What remains contested

Key elements of the story remain contested between Anthropic and the Pentagon. The parties differ on when and how the dispute began, whether the supply-chain risk designation was factually supported, and what actions the government could lawfully take in response to a company’s usage policies for an AI model operating on classified networks. The court filings lay out Anthropic’s allegations and legal theories; the government has disputed key facts and publicly defended its decisions.


Next steps

The case is now in the federal courts, with Anthropic pressing multiple statutory and constitutional claims and pursuing an appellate challenge in the D.C. Circuit. How judges interpret the procedural record and legal authorities raised by both sides will determine the next phase of the dispute and whether any of the contested agency actions are reversed or upheld.

Risks

  • Uncertainty over government contracting and procurement policies for AI systems could disrupt defense and cloud services sectors if legal rulings permit broad agency sanctions - impacts are particularly relevant for defense contractors and enterprise cloud providers.
  • Designation as a supply-chain risk, whether upheld or disputed, raises legal and reputational risks for AI vendors working with classified networks, affecting commercial relationships across government and adjacent markets.
  • Conflicting factual accounts between Anthropic and Pentagon officials create legal uncertainty that could prolong litigation and leave existing contracts and integrations in limbo - this uncertainty affects procurement timelines for defense modernization programs.
