Stock Markets | February 13, 2026

U.S. Forces Used Anthropic’s Claude in Maduro Operation, Report Says

Claude accessed through Palantir platforms; deployment highlights tension between company usage rules and Pentagon demands for access on classified military networks

By Leila Farooq

The Wall Street Journal reported that Anthropic’s AI model Claude was used in a U.S. operation to seize former Venezuelan President Nicolas Maduro, with access provided through a partnership between Anthropic and Palantir Technologies. The report has not been independently verified, and it raises questions about how company usage policies interact with government demands to run advanced AI tools on classified systems.

Key Points

  • Anthropic’s Claude was reportedly used in the U.S. operation to capture Nicolas Maduro, with access routed through Palantir’s platforms.
  • The Pentagon is pressing major AI firms, including OpenAI and Anthropic, to enable their tools on classified government networks, with some firms providing custom military tools primarily on unclassified systems.
  • Anthropic allows classified use through third parties but still enforces usage policies that prohibit violence, weapons design and surveillance; the company recently completed a $30 billion funding round and is reportedly valued at $380 billion.

According to a Wall Street Journal report citing unnamed people familiar with the matter, Anthropic’s artificial-intelligence model Claude was used in the U.S. military operation that resulted in the capture of former Venezuelan President Nicolas Maduro. The Journal said Claude was used through Anthropic’s partnership with data and analytics firm Palantir Technologies, whose platforms are commonly used across the Defense Department and federal law enforcement agencies.

At the time of publication, Reuters could not immediately verify the Journal’s account. The U.S. Defense Department, the White House, Anthropic and Palantir did not immediately respond to requests for comment, according to the report.

The deployment report arrives amid broader pressure from the Pentagon for leading AI companies to enable their tools on classified government networks. That effort, Reuters reported exclusively, includes outreach to firms such as OpenAI and Anthropic to make their AI systems available in classified environments without some of the typical restrictions applied to public users.

Many AI vendors are developing bespoke systems for U.S. military use, but most are so far accessible only on unclassified networks, which generally handle administrative and non-sensitive military functions. Anthropic is described as the only firm whose technology has been deployed in classified settings through third-party arrangements, though such use remains governed by its usage policies.

Those usage policies explicitly prohibit employing Claude to facilitate violence, to design weapons or to conduct surveillance, the report said. The Journal’s account also noted Anthropic’s recent financing and valuation figures: the company raised $30 billion in its latest funding round and is now valued at $380 billion.

The report states that the United States captured the former Venezuelan president in an audacious raid in early January and transported him to New York to face drug-trafficking charges.

While the report outlines how Claude may have been integrated into a high-profile operation via Palantir, its account has not been independently confirmed, and the government and corporate entities involved did not immediately respond to requests for comment.

Risks

  • Verification uncertainty - the report could not be immediately independently verified and several parties did not respond to requests for comment, creating ambiguity around the scope and details of Claude’s use. Impacted sectors: Defense, Technology.
  • Policy constraints - Anthropic’s stated usage rules forbid violent or surveillance applications, potentially constraining how its tools can be applied even when routed through third parties. Impacted sectors: Defense, AI vendors.
  • Governance and access tension - the Pentagon’s push to run AI tools on classified networks without standard user restrictions raises uncertainties about policy compliance, oversight and operational boundaries. Impacted sectors: Defense, Federal law enforcement, AI industry.
