The Pentagon is pressing leading artificial intelligence companies to make their models available on classified government networks while seeking to loosen the restrictions those firms typically place on customers.
During a White House event this week, Pentagon Chief Technology Officer Emil Michael told technology executives the department wants to operate the same frontier AI capabilities across both unclassified and classified domains, according to people familiar with the matter. An official who requested anonymity said the department is moving to deploy frontier AI capabilities across all classification levels.
These discussions form part of continuing negotiations between defense officials and the top generative AI firms over how U.S. forces will use AI in a future battlespace that already includes autonomous drone swarms, robotic systems and cyber operations. The push to bring commercial models into classified environments intensifies a debate over whether the military should be able to use AI with fewer vendor-imposed restrictions, and how to reconcile that desire with company efforts to set boundaries around deployment.
Most AI firms working with the U.S. defense establishment are currently providing customized tools that operate on unclassified networks typically used for administrative purposes. Only one company, Anthropic, has made its models available in classified settings through third-party arrangements, but even in those cases the government remains subject to the company's usage policies. Classified networks are used for a wide range of sensitive tasks, potentially including mission planning or weapons targeting.
It was not clear how or when the Pentagon planned to make chatbots or other generative AI models available on classified networks.
Defense officials see value in using AI to synthesize vast streams of information and help inform decisions. But AI systems are not infallible: they can generate errors or fabricate seemingly plausible information. AI researchers warn that such mistakes in classified settings could carry serious, even deadly, consequences.
To limit those risks, many AI companies have embedded safeguards inside their models and require customers to follow specific usage guidelines. Pentagon officials have pushed back against some of those limits, arguing that they should be able to deploy commercial AI tools provided the use complies with U.S. law.
This week OpenAI reached an agreement with the Pentagon allowing the military to use its tools, including ChatGPT, in an unclassified environment that has been rolled out to more than 3 million Defense Department employees. As part of that agreement, OpenAI agreed to remove many of its typical user restrictions, although some guardrails remain.
OpenAI said the recent agreement applies specifically to unclassified use through genai.mil, and that any extension of that arrangement to other environments would require a new or modified agreement.
Alphabet’s Google and xAI have previously reached similar arrangements with the department.
Negotiations with Anthropic have been more fraught. Anthropic executives have told military officials they do not want their technology used for autonomous weapons targeting or for domestic surveillance inside the United States. The company’s products include a chatbot known as Claude.
“Anthropic is committed to protecting America’s lead in AI and helping the U.S. government counter foreign threats by giving our warfighters access to the most advanced AI capabilities,” an Anthropic spokesperson said. “Claude is already extensively used for national security missions by the U.S. government and we are in productive discussions with the Department of War about ways to continue that work.”
The comment referenced a presidential directive ordering the Department of Defense to be renamed the Department of War, a change that would require action by Congress.
How the Pentagon and AI companies ultimately resolve tensions over restrictions, operational risk and appropriate use cases will determine the extent to which commercial generative models are embedded into mission-critical and classified workflows. For now, the conversations between defense officials and the leading AI firms reflect competing priorities: the military’s desire for broad, flexible access to advanced models and the companies’ efforts to manage harms and set limits on deployment.
Summary
Pentagon technology leaders are urging top generative AI companies to make their models available on classified networks with fewer usage restrictions than currently applied. OpenAI has agreed to allow use on an unclassified Department of Defense environment rolled out to more than 3 million employees, while Alphabet’s Google and xAI have reached similar arrangements. Anthropic permits classified access through third parties but maintains usage policies that limit certain military applications. Defense officials argue they should be able to deploy commercial AI tools consistent with U.S. law, and officials view these models as tools to synthesize information to aid decision-making in a battlespace increasingly influenced by autonomous systems and cyber activity.