Emerging military technologies employing advancements in artificial intelligence (AI) and machine learning — fitted with improved sensor technology and robotics — are expected to transform warfare (Vergun 2020). Some of these technologies, in particular autonomous weapons systems (AWS), commonly known as killer robots, which will operate without significant human assessment or oversight, have been described as the “third revolution in warfare” (Lee 2021).
We have not yet seen systems in operation that can find, select and engage targets without humans in charge. Some analysts believe that such systems are still under development (Knight 2022; Woodson 2020). Others contend that they exist, but have not been deployed on a scale that would allow any claims to be tested (Kallenborn 2022).
Advanced militaries have clearly indicated their interest in such systems. Reasons range from speed of response and enhanced situational awareness to the ability to overwhelm the defence systems of adversaries. However, several countries, including the United States, also assert that AWS can strengthen the implementation of international humanitarian law (IHL) and protect civilians by limiting collateral damage (Reaching Critical Will 2019). As Robert Work, former US deputy secretary of defense, has contended, “it is a moral imperative to at least pursue this hypothesis” (Reuters 2021).
But there are risks in developing and deploying autonomous weapons. The unreliability and fragility of the AI technologies central to these systems make the testing of such a hypothesis potentially harmful to civilians (Morgan et al. 2020). There are also operational concerns; for example, it is not clear how AWS will interact with crewed platforms in wider military operations. No concrete information exists on how AWS will respond in complex and rapidly changing environments such as battlefields.
Experts also do not agree on how to apply existing laws and norms to these new weapons (Winter 2022). With so much unknown and so little agreement, all claims about the direct and indirect impacts of AWS on civilians and civilian infrastructure must be carefully scrutinized.
Our Current Understanding of Autonomous Weapons
Have we seen active deployment of killer robots? The answer depends on how this technology is defined. Some countries and experts contend that fully autonomous weapons systems do not exist and would not be wanted by any country (Jeangène Vilmer 2021). In their view, any human involvement means that the systems are not autonomous. With this restrictive definition, the whole problem of autonomous weapons disappears.
But the autonomous capabilities of certain systems do seem to be increasing. Active use of the Turkish-made Kargu-2 loitering munition in the conflict in Libya appears to demonstrate significant autonomous capabilities, including the ability to engage targets independently (Kallenborn 2022). However, the maker of the Kargu-2 is coy about its AI capabilities, offering no specific information on whether it can function on its own or always functions under the control of human operators. Manufacturers also tend to hype the autonomous features, which might not include any capability to independently select and engage a target (Knight 2022). Certainly, the states that develop and use the new technologies have insisted that they are not autonomous. In informal discussions, Turkey, for example, has insisted that the Kargu-2 is not fully autonomous and that humans are in control (Marijan and Standfield 2021).
Last autumn, Frank Kendall, secretary of the US Air Force, assured an audience that humans were the ultimate decision makers after the Air Force used AI for the first time to help identify a target or targets in “a live operational kill chain” (Miller 2021). No verification was provided.
Seven years of discussions at the United Nations Convention on Certain Conventional Weapons (CCW), the key international forum at which autonomous weapons are examined, have led to a tentative acknowledgment that lethal weapons must remain under human control. However, there is no general agreement on the level of awareness and control that the human operator must maintain over a weapon system.
But a common understanding of this level of control, especially over key functions such as target selection and engagement, is important in establishing and ensuring human accountability. If a significant level of human control of operations cannot be demonstrated, who is to be held accountable for what the system does? Who is to be held accountable for civilians who are hurt or killed and civilian infrastructure that is damaged or destroyed?
If a weapon system makes independent decisions in scenarios involving civilians that were never anticipated, can the software developers be held to account (Sharkey 2012; Winter 2022)? Perhaps, although such an attempt seems likelier to lead to a diffusion of responsibility. A vast number of individuals often contribute to the coding and training of these systems, generally unaware of one another’s work or even of the overall end product. Moreover, as Missy L. Cummings (2019) notes, “currently in the United States, manufacturers of military weapons are indemnified against accidents on the battlefield.”
While developers and users of AWS continue to emphasize the significant role of human operators, a number of questions about the nature of that role remain. Does the human operator simply approve decisions made by the system, possibly distanced in both time and space from the targeting event? Or does the system have the ability to search for targets based on pre-approved target profiles, using sensor inputs to, for example, recognize military-age males holding weapons? In other words, does the human operator have all the necessary information and the ability to make evidence-based decisions that might prevent unintended victims from being targeted? How good are the systems at distinguishing between combatants and non-combatants? Are they as good as humans?
Those who support the development of autonomous systems might say they are better than humans. For some, the humanness of soldiers and operators is the problem that the technology solves. William H. Boothby (2018) argues that “robotic technologies will not be distorted by fear, anger, vengeance, amnesia, tiredness or other peculiarly human fallibilities.”
But Elliot Winter (2022) argues that for such technology to work well, “machines would need to possess advanced skills in observation and recognition as well as sophisticated judgement-making ability.” In his view, these capabilities are needed to ensure compliance with the IHL principle of distinction, the ability to distinguish combatants from non-combatants.
However, if humans are affected by emotions and bias, so are the technologies coded by those humans. Researchers have demonstrated that AI technologies tend to disproportionately misrepresent disadvantaged communities (Gebru 2020). Image recognition, for example, often incorrectly identifies women and racialized minorities.
Another problem relates to technology that does not adequately distinguish among different types of actors. Some experts fear that groups of individuals might be misidentified; for example, disabled individuals holding assistive devices might be viewed as soldiers with guns. A key question is how accurate the technology is in distinguishing among different types of actors in a conflict zone. Will the judgment-making ability of the system be affected by factors such as gender or race (Cummings 2017; Hunt 2019; Ramsay-Jones 2020)? And there is a further complication. As Winter concedes, combatants in a conflict zone can become hors de combat, no longer able or willing to fight. This change might be difficult for machines to interpret.
Whether the technology is even capable of achieving human-like judgment is debatable. AI experts speculate on various timelines for the achievement of “human-level” cognition, but even the most optimistic do not see much likelihood of it happening before 2075 (Müller and Bostrom 2016). Such a time frame raises the possibility that weapon systems will be brought into service before all the bugs are fixed. In such an event, any claims to protect civilians and other non-combatants will be disingenuous at best.
Ensuring Civilian Protection
Strict regulations are needed to ensure that the systems that are developed in the next few decades truly protect civilians and other non-combatants (Boulanin, Bruun and Goussac 2021).
A sufficient degree of human control over weapon systems must be mandated. Any weapon system that depends on sensor input to make decisions about target selection and engagement should be deemed high risk and unacceptable for use. At present, there seems to be broad acceptance among states that autonomous functions should not be used in weapons that are already prohibited.
Weapon platforms that can most easily penetrate civilian areas are most in need of strict regulation. These include autonomous aerial vehicles (drones), loitering munitions and tanks. The greatest concern is not that such systems can autonomously survey an area, but that they may be capable of selecting or engaging targets on their own.
Exports of technologies and systems to countries where such systems could be used for purposes not intended by exporters must be restricted and, in some cases, monitored. Monitoring is particularly important because of the multi-use nature of much of the technology that would be incorporated into autonomous weapons systems.
What Must Happen
Proponents often portray autonomous weapons as if they exist in a vacuum, separate from the political considerations and military strategies that in past and contemporary conflicts have resulted in the targeting of civilians and civilian infrastructure. However, the relevant literature shows that states often strategically target civilians and civilian infrastructure (Downes 2008; Sowers and Weinthal 2021). Thus, AWS could be instructed to target civilians, or could themselves decide to do so, in order to achieve a given objective, such as securing a portion of a city.
State actors are still in charge of setting overall objectives. Ultimately, they will make decisions on how to use these new systems. Without specific constraints, regulations and practices to ensure civilian protections, it is not clear how autonomous systems add to the protection of civilians. Meanwhile, errors and unanticipated actions by autonomous systems could escalate conflicts in ways not intended by military leadership.
The world needs specific regulations that invoke civilian protection measures when weapon systems using AI technologies are deployed. There must be minimum requirements for human control, as well as restrictions on the types of autonomous weapons systems that can be used and the situations in which they are used. As Paul Scharre (2018) rightly notes, if these systems follow some current laws but operate without an understanding of context, they could take actions that human soldiers and operators guided by moral and ethical codes would not take.
States ultimately must be the ones to agree upon these regulations and to ensure the protection of civilians. So far, agreement at the CCW remains elusive, and the more powerful states are reluctant to engage in serious discussions, instead allowing countries such as Russia to act as spoilers and stall all movement toward legally binding instruments. However, one shared incentive for putting regulations in place despite current geopolitical realities is the recognition that these weapons will proliferate and could be transferred to non-state groups and terrorist organizations. Preventing this destabilizing and dangerous proliferation is in the interest of all states and should make them more receptive to regulatory efforts.
Previous arms control and disarmament agreements offer important insights into addressing the seemingly unique challenges of regulating AI technologies. For example, the Biological Weapons Convention and the Chemical Weapons Convention offer relevant frameworks for addressing the dual-use nature of AI technology, that is, its civilian and military applications. Moreover, regulation could focus on behaviours and operational uses of weapons, such as limiting or prohibiting systems that target personnel.
Without such rules and restrictions, the risks to civilians in armed conflict will continue to grow, given the unpredictability of autonomous weapon systems left unchecked.