Stock Markets February 28, 2026

OpenAI Says Pentagon Deal Adds Multiple Safeguards for Classified AI Use

Company outlines layered protections and three explicit red lines after competing lab faces supply-chain designation threat

By Hana Yamamoto

OpenAI said its recently announced agreement with the U.S. Department of Defense contains expanded protections for classified deployments of its technology, including a set of three prohibited uses and a multi-layered safety approach. The announcement came after the White House directed the government to end work with Anthropic and the Pentagon moved to label that startup a supply-chain risk. OpenAI said breaches of the contract could lead to its termination, but said it does not expect that outcome.

Key Points

  • OpenAI says its DoD agreement establishes three prohibited uses: mass domestic surveillance, directing autonomous weapons, and high-stakes automated decisions; sectors affected include defense and AI technology.
  • Company outlines a multi-layered safety approach - control over safety stack, cloud deployment, cleared personnel in the loop, and contractual protections - which it says exceeds prior classified AI agreements.
  • Pentagon has entered agreements up to $200 million each with major AI labs, and is seeking to maintain operational flexibility; impacts reach defense procurement and cloud service providers.

Feb 28 - OpenAI said on Saturday that its agreement with the U.S. Department of Defense to deploy its technology on the department's classified network includes additional safeguards intended to limit risky applications.

The statement followed a directive from U.S. President Donald Trump on Friday ordering the government to stop working with Anthropic, and the Pentagon's move to declare that startup a supply-chain risk. Anthropic said it would contest any such designation in court. OpenAI announced its own deal with the Pentagon late on Friday.

Three explicit prohibitions

OpenAI said the contract enforces three specific red lines for how its technology may be used: it cannot be employed for mass domestic surveillance, it cannot be used to direct autonomous weapons systems, and it cannot be applied to any high-stakes automated decisions. The company characterized these as firm boundaries embedded in the agreement.

Layered safety measures

Beyond enumerating prohibited applications, OpenAI described a multi-layered approach to protecting those red lines. According to the company, it retains full discretion over its safety stack, will deploy capabilities via cloud infrastructure, will include cleared OpenAI personnel in operational loops, and has negotiated strong contractual protections with the department. OpenAI said these elements collectively amount to more guardrails than prior classified AI deployment agreements, including the arrangement with Anthropic.

The Pentagon has signed agreements worth up to $200 million each with multiple major AI labs over the past year, including Anthropic, OpenAI and Google. In discussing those deals, the department has emphasized the need to preserve flexibility for defense use and to avoid being constrained by warnings from technology creators about using AI in weaponry or other operations.

OpenAI warned that a breach of the contract's provisions by the U.S. government could lead to termination of the agreement, while adding that it does not anticipate such a breach. Separately, OpenAI said Anthropic should not be designated a supply-chain risk and that it has communicated this position to government officials.



Risks

  • Legal and reputational risk from the Pentagon's planned supply-chain designation for Anthropic, which could affect AI lab relationships with the defense sector - sector: defense/technology.
  • Contractual risk if either party breaches the terms - OpenAI warned breaches could trigger termination of the contract, creating uncertainty for defense AI deployments - sector: defense/technology.
  • Operational and policy uncertainty as the department balances creator-imposed guardrails against the need for flexibility in defense applications, which could complicate procurement and deployment decisions - sector: defense/cloud services.
