The Trump administration has produced a draft directive that would require civilian artificial intelligence contractors to allow the U.S. government to employ their models for "any lawful" purpose, according to published reports. The guidance comes amid a high-profile dispute between the Department of War and Anthropic that culminated Thursday when the Pentagon labeled the AI company a "supply-chain risk."
Market participants reacted cautiously to the developments. Shares of AI-linked firms came under selling pressure as investors tried to gauge how far the administration might press requirements that could conflict with the safety and ethical guardrails some providers impose. By 15:59 ET (20:59 GMT), the Nasdaq 100 had fallen 1.51%. Within large-cap tech, Microsoft Corporation (NASDAQ:MSFT) declined 0.42% and Alphabet Inc Class A (NASDAQ:GOOGL) slipped 0.78% as traders digested the potential implications of a GSA "irrevocable license" condition in new procurement documents.
The GSA proposals and the Pentagon action signal a hardening posture inside military procurement circles. The Pentagon's designation of Anthropic as a "supply-chain risk," a label observers say is typically applied to foreign entities, effectively prevents government contractors from incorporating Anthropic's technology into their systems. Officials cited a months-long disagreement over Anthropic's refusal to waive safeguards that would permit mass domestic surveillance and the use of its Claude models in lethal autonomous weapons as the reason for the blacklisting.
Secretary of War Pete Hegseth defended the designation on Friday, saying the United States needs "patriotic" technology partners that do not set restrictive "red lines" on lawful operations. Anthropic has countered that the designation lacks legal merit and has announced plans to challenge the decision in court. The administration has provided a six-month transition window for agencies to move away from Anthropic's systems.
Beyond the specific action against one company, the GSA's draft contains broader mandates for AI contractors. One provision would bar contractors from "intentionally encod[ing] partisan or ideological judgments" into model outputs, language framed as an attempt to remove what the administration describes as "embedded bias" or "wokeness" from models used in government work. Another requirement would obligate firms to disclose whether their models have been altered to comply with non-U.S. regulatory regimes, such as the European Union's AI Act.
Analysts at Evercore ISI cautioned that the disclosure and neutrality rules could force a degree of separation between the American AI technology stack and international standards, potentially creating divergent versions of models for different regulatory regimes. Traders and procurement observers are also watching for a response from other major AI providers: OpenAI reportedly stepped in to fill the Pentagon's immediate needs after Anthropic was excluded, and investors are awaiting further signals from that company and others.
For now, the unfolding interventions by the Pentagon and the GSA have left the AI sector and defense procurement stakeholders navigating heightened uncertainty on legal, operational and commercial fronts.
Key points
- The draft directive would require AI firms to permit the U.S. government to use their models for "any lawful" purposes, escalating procurement demands.
- The Pentagon labeled Anthropic a "supply-chain risk" after a dispute over waiving safeguards related to surveillance and lethal autonomous weapons; agencies have six months to transition away from Anthropic systems.
- GSA draft rules would ban intentionally encoding partisan or ideological judgments and require disclosure when models are modified to meet non-U.S. regulations, potentially encouraging a split between U.S. and international AI stacks.
Risks and uncertainties
- Legal risk: Anthropic intends to challenge the Pentagon's "supply-chain risk" designation in court, creating uncertainty for procurement timelines and contractor choices.
- Market and industry risk: The GSA's "irrevocable license" and neutrality requirements have already pressured AI-adjacent equity prices and could complicate vendor relationships across the tech sector.
- Operational risk: Mandatory disclosures about compliance with non-U.S. frameworks and restrictions on ideological content could force vendors to maintain divergent model versions for different regulatory environments, affecting development and deployment strategies.