Responsible AI and Civilian Protection in Armed Conflict

Policy Brief No. 197

February 14, 2025

While the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy holds promise, its supporters should place greater emphasis on how the implementation of its principles will lead to better protection of civilians in armed conflict, especially when combined with other measures that extend beyond the use of artificial intelligence (AI) and autonomy. This policy brief argues that the responsible use invoked by the declaration should not result in only marginally better protection of civilians (PoC) outcomes than “irresponsible” use, but should instead achieve markedly better ones. Giving meaning to the declaration’s implied PoC content depends on whether the expansion of its membership and the stewardship of the process raise the ceiling or lower the floor for responsible use. National and multilateral efforts to promote the responsible military use of AI should be connected to a renewed commitment among all states to mitigate harm to civilians resulting from all military operations, not only those that involve the use of AI.

About the Authors

Daniel R. Mahanty specializes in issues at the intersection of US national security and human rights, with a recent focus on civilian protection in armed conflict.

Kailee Hilt is a program manager and research associate at CIGI. She focuses on public policy issues tied to emerging technology, privacy and cybersecurity.