Google is in active discussions with the U.S. Department of Defense to permit the deployment of its Gemini artificial intelligence models inside classified environments, according to people with direct knowledge of the talks.
The negotiations would represent a marked change from Google’s earlier posture on military partnerships and would broaden the company’s role as a technology supplier to the Pentagon.
At the center of the conversations is language that would allow the Pentagon to operate Google’s AI for all lawful purposes. Google has proposed contract provisions designed to prevent the models from being employed for domestic mass surveillance and to prohibit use in autonomous weapon systems, including targeting tasks, unless appropriate human oversight is maintained.
Those proposed safeguards are described as broadly similar to the terms reached between the Pentagon and OpenAI earlier this year. Reportedly, OpenAI chief executive Sam Altman requested that the Pentagon extend the same contractual framework to all AI firms to ensure consistent treatment across the industry.
It remains uncertain whether Google’s suggested restrictions will be incorporated into any final contract language. The questions of surveillance and autonomy were also core to a separate dispute between the Pentagon and the AI company Anthropic earlier this year.
That standoff began in January after Anthropic declined to relax safety guardrails on its systems. The Pentagon subsequently labeled Anthropic a supply-chain risk, a designation that jeopardized the company's existing government work.
Summary: Google is negotiating with the Department of Defense to allow classified use of its Gemini AI models, proposing contract terms to restrict domestic surveillance and autonomous weapons use absent human oversight. Whether those terms will be adopted is unclear.
Key context: The issues under discussion mirror terms previously agreed between the Pentagon and OpenAI, and the topics of surveillance and autonomy were central to an earlier dispute between the Pentagon and Anthropic.