In February 2023, the Netherlands hosted the inaugural global summit on the Responsible Use of Artificial Intelligence in the Military Domain. The summit culminated in the endorsement of the “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy” by 32 states; the number of endorsing states has since grown to 54. The declaration, developed by the United States, is nonbinding and aims to build consensus on norms for the military use of artificial intelligence (AI). This paper supports Canada’s leadership of the Accountability Working Group, one of three international working groups formed to elaborate on compliance with the declaration’s principles. It examines the complex legal discourse surrounding accountability for AI actions in armed conflict, focusing in particular on lethal autonomous weapons systems and on decision support systems used in targeting.