Stock Markets February 15, 2026

U.S. Forces Used Anthropic’s Claude in Maduro Capture, Report Says

Report cites use of Anthropic AI via Palantir platforms; Pentagon engagement with AI firms on classified access noted

By Marcus Reed

Sources cited by the Wall Street Journal say Anthropic’s large language model, Claude, was employed during the U.S. operation that seized former Venezuelan President Nicolas Maduro. The deployment reportedly occurred through Anthropic’s partnership with Palantir Technologies, whose platforms are commonly used across the Defense Department and federal law enforcement. Multiple agencies and companies did not immediately respond to requests for comment, and available reports have not been independently verified.

Key Points

  • Reporters citing unnamed sources said Anthropic’s AI model Claude was used in the operation that captured former Venezuelan President Nicolas Maduro.
  • Claude’s access in the operation was reportedly facilitated through Anthropic’s partnership with Palantir Technologies, whose platforms are widely used by the Defense Department and federal law enforcement.
  • The Pentagon is pursuing arrangements for top AI companies, including OpenAI and Anthropic, to make tools available on classified networks with fewer standard restrictions; most military AI tools remain on unclassified networks.

According to people familiar with the matter cited in a Wall Street Journal report, the U.S. military used Anthropic’s artificial-intelligence model Claude in the operation that resulted in the capture of former Venezuelan President Nicolas Maduro. The report says the AI model was accessed through Anthropic’s partnership with data firm Palantir Technologies, which provides platforms widely used by the Defense Department and federal law enforcement.

Independent verification of that account was not immediately available. The U.S. Defense Department, the White House, Anthropic and Palantir did not immediately respond to requests for comment, according to the reporting.

Separate reporting has described an ongoing push by the Pentagon to have leading AI companies, including OpenAI and Anthropic, make their artificial-intelligence tools available on classified networks without many of the restrictions those firms typically place on users. That push was described as a priority in recent coverage of the Defense Department's effort to expand AI access across sensitive environments.

Many AI companies are developing bespoke tools for U.S. military use, with most of those tools currently accessible only on unclassified networks that are usually reserved for military administrative tasks. Anthropic's model is reported to be the only one available in classified settings through third parties, though even in those instances the government remains subject to Anthropic's usage policies.

Anthropic’s current usage policies explicitly prohibit employing Claude to support violence, design weapons or carry out surveillance. Those constraints remain in place even where third-party access to classified networks exists. Financial details cited in reporting note that Anthropic raised $30 billion in its most recent funding round and is valued at $380 billion.

The reported operation resulted in the capture of former President Nicolas Maduro in early January and his subsequent transfer to New York to face drug-trafficking charges, according to the same reporting. The available accounts emphasize that the exact role and extent of Claude's involvement have not been independently corroborated.


Context and implications: The accounts place Anthropic and Palantir at the intersection of modern AI deployment and defense operations, while also underscoring the tension between government demands for broader, less-restricted access to advanced AI tools and the usage limits companies impose on their products.

Risks

  • The report has not been independently verified, creating uncertainty about the exact role Claude played in the operation, which complicates assessments of operational reliance on commercial AI tools (impacts Defense and national security sectors).
  • Anthropic’s usage policies prohibit employing Claude to support violence, design weapons or perform surveillance, which could limit government use and create policy friction when AI capabilities are sought for sensitive operations (impacts Defense and AI vendor relationships).
  • Even where Anthropic’s model is available in classified settings through third parties, the government remains bound by the company’s usage policies, introducing legal and operational constraints on deployment in defense contexts (impacts Defense procurement and compliance processes).
