Feb 28 - OpenAI said on Saturday that its agreement with the U.S. Department of Defense to deploy its technology on the department's classified network includes additional safeguards intended to limit risky applications.
The statement followed a directive from U.S. President Donald Trump on Friday ordering the government to stop working with Anthropic, and the Pentagon's move to declare that startup a supply-chain risk. Anthropic said it would contest any such designation in court. OpenAI announced its own deal with the Pentagon late on Friday.
Three explicit prohibitions
OpenAI said the contract draws three specific red lines for how its technology may be used: it cannot be employed for mass domestic surveillance, it cannot direct autonomous weapons systems, and it cannot be applied to high-stakes automated decisions. The company characterized these as firm boundaries embedded in the agreement.
Layered safety measures
Beyond enumerating prohibited applications, OpenAI described layered safeguards for those red lines. According to the company, it retains full discretion over its safety stack, will deploy capabilities via cloud infrastructure, will keep cleared OpenAI personnel in operational loops, and has negotiated strong contractual protections with the department. OpenAI said these measures collectively exceed the guardrails in prior classified AI deployment agreements, including the arrangement with Anthropic.
The Pentagon has signed agreements worth up to $200 million each with multiple major AI labs over the past year, including Anthropic, OpenAI and Google. In discussing those deals, the department has emphasized the need to preserve flexibility for defense use and to avoid being constrained by warnings from technology creators about using AI in weaponry or other operations.
OpenAI warned that a U.S. government breach of the contract's provisions could lead to termination of the agreement, while adding that it does not anticipate such a breach. Separately, OpenAI said Anthropic should not be designated a supply-chain risk and that it has conveyed that position to government officials.