Discussions surrounding the governance of artificial intelligence (AI) are complex, as they involve — or at least should involve — a dynamic interplay between technical, legal, ethical and policy expertise. This complexity is magnified at the international level, where the geopolitical interests of various states are in competition. Global governance is, by its very nature, difficult in a competitive international political system. It is even more complicated when it concerns an emerging technology such as AI, which is still in its infancy and whose applications to war and conflict remain partly untested and unknown. These concerns are compounded by the speed of technological advancement in the field.
While AI is not new — Marvin Minsky developed what is widely believed to be the first neural network learning machine in the early 1950s — its applications to many aspects of war and conflict have opened a new Pandora’s box. Ethicists, international legal experts and international affairs specialists have been sounding the alarm on the potential misuse of this technology and the lack of any regulations governing its use. In 2021, the US National Security Commission on Artificial Intelligence warned that “the United States must act now to field AI systems and invest substantially more resources in AI innovation to protect its security, promote its prosperity, and safeguard the future of democracy” (National Security Commission on Artificial Intelligence 2021).
Without a doubt, the most complex global governance challenges surrounding AI today involve its application to defence and security — from killer swarms of drones to the computer-assisted enhancement of military command-and-control processes. The corresponding international policy frameworks are still in their early stages and require a degree of normative change that states have found challenging in the past. The growing rivalry among great powers now extends beyond the geopolitical and into the technological realm, raising the risks to stability in the international system (Araya and King 2022).
The University of Waterloo’s AI Institute hosted a series of webinars exploring the ethics of AI and automated warfare. These discussions brought together technical and policy experts to debate the big questions, concerns and developments in the application of AI to war and conflict. This essay series is the result. The essays seek to understand how the weaponization of AI is unfolding within the contemporary global governance architecture. They consider how this technology will continue to advance within the defence and security realm, examine its influence on the current geopolitical landscape and ask what moral and ethical considerations attend the deployment of autonomous weapons. For those who work in the field of AI and endeavour to create a more just and equitable future, this essay series provides pithy analysis of the key issues.
The series was guided by a number of interrelated research questions for exploration. These included (but were not limited to) the following:
- How is AI being used in the military context? What applications are emerging?
- How are adversarial state actors using data manipulation as a strategic lever and what are the implications?
- Why are the capabilities of AI poorly understood? How does this make it difficult for actors to signal their capabilities and assess the competencies of others?
- What are the implications of adversarial and counter-AI technology from a normative and international law standpoint?
- Can AI meet the legal responsibilities that countries bear under, for example, the law of armed conflict?
- What are the ethical and normative implications of lethal autonomous weapons systems (LAWS)?
- How can ethical guidelines and standards be universally upheld given how divergent countries are on these issues?
The essay series begins with a piece by Alex Wilner titled “AI and the Future of Deterrence: Promises and Pitfalls.” Wilner provides an account of the various ways AI may affect our understanding and framing of deterrence theory and its practice in the coming decades. He discusses how countries have expressed diverging views over the degree of AI autonomy that should be permitted in a conflict situation, since those more willing to cut humans out of the decision-making loop could gain a strategic advantage. Wilner emphasizes that large differences in states’ technological capabilities will hinder interoperability among allies, while diverging views on regulation and ethical standards make global governance efforts even more challenging.
The transfer of drone technology from nation-states to non-state actors offers a window into how next-generation technologies may also slip into the hands of unsavoury characters such as terrorists, criminal gangs or militant groups. The effectiveness of Ukrainian drone strikes against the much larger Russian army should serve as a warning to Western militaries, suggests James Rogers in his essay “The Third Drone Age: Visions Out to 2040.” This is a technology that can level the playing field by asymmetrically advantaging conventionally weaker forces. The increasing diffusion of drone technology makes it more likely that future wars will also be drone wars, whether those drones are autonomous systems or not. In the hands of non-state actors, this technology means that future Western missions against, say, insurgent or guerrilla forces will be more difficult.
Data is the fuel that powers AI and the broader digital transformation of war. In her essay “Civilian Data in Cyber Conflict: Legal and Geostrategic Considerations,” Eleonore Pauwels discusses how offensive cyber operations aim to undermine adversaries by altering their very data sets — whether by targeting centralized biometric facilities, corrupting individuals’ DNA sequences in genomic analysis databases, or injecting fallacious data into the satellite imagery used for situational awareness. Pauwels argues that adversarial data manipulation constitutes another form of “grey zone” operation that falls below the threshold of armed conflict. She evaluates the challenges it poses for international humanitarian law (IHL), given that there is no internationally agreed upon definition of what constitutes a cyberattack or cyber hostilities within IHL.
In “AI and the Actual International Humanitarian Law Accountability Gap,” Rebecca Crootof argues that technologies can complicate legal analysis by introducing geographic, temporal and agency distance between a human’s decision and its effects, making it more difficult to hold an individual or state accountable for unlawful harmful acts. But beyond this added complexity surrounding legal accountability, novel military technologies are bringing an existing accountability gap in IHL into sharper focus: the relative lack of legal accountability for unintended civilian harm. These unintentional acts can be catastrophic yet remain technically within the confines of international law, which highlights the need for new accountability mechanisms to better protect civilians.
Some assert that the deployment of autonomous weapon systems can strengthen compliance with IHL by limiting the kinetic devastation of collateral damage, but AI’s fragility and its apparent capacity to behave in unexpected ways pose new risks. In “Autonomous Weapons: The False Promise of Civilian Protection,” Branka Marijan opines that AI will likely not surpass human judgment for many decades, if ever, and argues that regulations mandating a certain level of human control over weapon systems are needed. The export of weapon systems to states willing to deploy them on a looser chain-of-command leash should be monitored.
Regulatory efforts to contain these new weapons must consider qualitative stipulations on human oversight in a given context, argues Frank Sauer in “Autonomy in Weapon Systems and the Struggle for Regulation.” Current discussions have proposed a two-pronged approach that would categorically prohibit autonomous weapons from performing certain roles, such as striking human targets, while allowing other applications, such as strikes against inanimate military targets, subject to regulation. At the moment, some states party to the UN Convention on Certain Conventional Weapons seem intent on blocking hard regulation in this area. This inaction, however, might prompt efforts to regulate LAWS in a different international forum.
The UN Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems has made substantial contributions to understanding and regulating this complex topic, including the 11 guiding principles adopted by the 2019 Meeting of the High Contracting Parties to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons. Among the issues discussed were the objectives and purposes of the convention, challenges posed by IHL, the human element in the use of lethal force, the military implications of related technologies, and options for addressing humanitarian and international security challenges. Yet while the group has been working on the matter since 2016, no consensus had emerged as of its most recent meeting in July 2022.
Moreover, debates about the security implications of AI often see parties talking past each other: some discuss present and near-term developments in “narrow” AI (such as autonomous drones or automated cyberattacks), while others focus on the long-term prospects of “artificial general intelligence.” In “The Problem with Artificial (General) Intelligence in Warfare,” Toby Walsh argues that once these distinctions are clarified, we will be better able to think through the ethical and legal challenges of this technological advancement.
Despite the many divergent views on AI and its weaponization, states will need to continue promoting and establishing multilateral dialogue on a comprehensive architecture for AI governance. Doing so will be critical to managing this new reality within a changing geopolitical landscape.