A meeting of national defence representatives in A Coruna, Spain, produced a limited agreement on Thursday about how artificial intelligence should be deployed in military settings, but two of the world's largest military powers did not back the text. Only 35 of the 85 countries present at the Responsible AI in the Military Domain (REAIM) summit signed a commitment composed of 20 principles intended to guide military AI use.
The written commitment, which is non-binding, reiterated that humans must remain responsible for AI-enabled weapons, encouraged clear chains of command and control, and called for the sharing of information on national oversight arrangements when doing so does not conflict with national security. It also emphasised the need for risk assessments, thorough testing, and education and training for personnel charged with operating military AI systems.
The limited take-up came against a backdrop of strained relations between the United States and some European allies and uncertainty about the future shape of transatlantic cooperation. Several attendees and delegates said those tensions contributed to hesitation among some countries to enter into joint agreements.
Dutch Defence Minister Ruben Brekelmans described government decision-making on military AI as a "prisoner's dilemma," pointing to the tension between adopting responsible restrictions and the risk of accepting self-imposed limitations that adversaries do not. He warned that the speed of development by other military powers is reinforcing both the imperative to advance AI capabilities and the urgency of working on responsible use in parallel.
Brekelmans was quoted as saying, "Russia and China are moving very fast. That creates urgency to make progress in developing AI. But seeing it going fast also increases the urgency to keep working on its responsible use. The two go hand-in-hand."
The document produced in A Coruna builds on earlier, less prescriptive efforts. At previous military AI summits held in The Hague in 2023 and Seoul in 2024, roughly 60 nations, excluding China but including the United States, backed a modest "blueprint for action" that did not carry legal commitments.
Observers active in the process said the more detailed set of principles put forward this year made some states uncomfortable about endorsing firmer policies, even though the current declaration remains non-binding. Yasmin Afina, a researcher at the U.N. Institute for Disarmament Research who has advised on the effort, said some participants were uneasy about formalising more concrete measures.
Among the countries that did sign on Thursday were Canada, Germany, France, Britain, the Netherlands, South Korea and Ukraine. The signatories affirmed the principles emphasizing human responsibility, command-and-control clarity, risk assessment and personnel training, as well as information sharing about oversight where national security allows.
The limited endorsement highlights ongoing debate among governments over how to balance innovation and caution as military AI capabilities advance. Delegates and officials at the summit framed the dilemma as one in which states are trying to avoid being outpaced technologically while also seeking to prevent accidents, miscalculation or unintended escalation that could arise from rapid deployment without commensurate governance.
Additional context provided at the summit
- The 20 principles include human responsibility for AI-powered weapons, clearer command and control lines, and the promotion of risk assessments, testing and training for operators.
- The document calls for sharing information on national oversight arrangements when consistent with national security interests.
- Previous summits in The Hague and Seoul resulted in a non-binding blueprint supported by about 60 nations, a grouping that excluded China but included the United States.