The advent of new technologies always prompts questions concerning their legality, and this is certainly true with respect to autonomous technologies, including those using varying degrees of artificial intelligence (AI). As autonomous solutions are developed and employed, countries need to ensure that their use aligns with established moral and ethical principles, which are often enshrined in both domestic and international legislation. The basic legal dilemma concerning any new technology is ascertaining whether existing law can regulate it in conformity with those principles and, if not, what new legal instruments are needed to meet that objective. This essay explores that question in the context of autonomous technologies.
Innovation in autonomy is being driven simultaneously by civilian and national security (including military) demands. Commercial autonomous technology for civilian application is primarily subject to domestic legal regulation, although international law can, and is likely to, play some role in its governance. Autonomous military technologies are predominantly developed for employment in an international environment during armed conflict; in that setting, international law is prominent. The essay begins with a discussion of the prospects for international legal regulation of autonomous civilian technologies. It then turns to the international legal challenges presented by the employment of autonomy in national security and defence contexts, including on the battlefield.
Regulation for Civilian Purposes
Legislatures across the globe should be preparing to amend their laws, and possibly adopt new ones, governing autonomous technologies. Some applications, such as aircraft autopilot systems and industrial robots, have been in use for decades, albeit in strictly controlled environments with robust security measures in place. In the future, technologies with varying degrees of autonomy will become pervasive in many societies. Driverless public transit systems, self-driving cars and AI algorithms for medical diagnosis are leading this innovation, with countless other use cases bound to follow. Inevitably, domestic laws will require some degree of revision to ensure adequate regulation of such systems.
For the present, these new technologies are primarily subject to industry self-regulation, with several large companies having adopted internal policies on the use of automation in their products and services (for examples, see International Committee of the Red Cross 2019, 25–26). The experience states have had with current digital technologies offers valuable lessons in this regard: when the private sector is left to self-regulate, friction between companies and governments is likely to arise. State criticism of Twitter and Facebook over their handling of online content, such as fake news and the live streaming of violent incidents, or over their algorithms’ susceptibility to manipulation and bias, is illustrative. That these companies have called on governments to specify through regulation the kinds of action expected of them is therefore unsurprising (Press Association 2019; Rudgard and Cook 2019).
The extent to which industry self-regulation can govern more advanced autonomous technologies to the satisfaction of governments, civil society and the public generally is limited. Google itself has acknowledged that “self- and co-regulatory approaches will remain the most effective practical way to address and prevent AI related problems in the vast majority of instances, within the boundaries already set by sector-specific regulation,” but that “there are some instances where additional rules would be of benefit” and “relying on companies alone to set standards is inappropriate” (Google, n.d., 29; Evans 2020). Accordingly, it is sensible for governments to engage with the private sector and work collaboratively toward optimal governance regimes, rather than intervening only once the unwanted consequences of this new technology have begun to manifest.
Regulatory rules, rather than legislative solutions, are likely to emerge first, as has been the case with other novel technologies. In the field of nanotechnology, for example, several European countries have adopted regulations that impose reporting requirements on companies that manufacture, import or distribute nanomaterials.1 In the field of autonomy, we can likewise expect regulations tackling discrete issues, followed at some point by legislative action, whether through amendments to existing laws or the adoption of new ones. (This is without prejudice to the adoption of so-called enabling legislation, that is, legislation granting a person or entity, such as a government minister, the power to adopt regulations.)
Public international law, by contrast, will largely play a bystander’s role insofar as commercial autonomous solutions meant for civilian use are concerned. However, the international community may at some point feel the need to harmonize countries’ domestic laws to ensure that the internal legal regulation of these commercial technologies is consistent across borders. The legal mechanism for harmonization would be the adoption of a so-called uniform law treaty, which obligates states parties to legislate domestically with respect to their criminal, civil or administrative laws. Such a treaty could, for example, prescribe uniform safety standards, liability rules, certification schemes, data management processes, human supervision requirements, fail-safe mechanisms, operational constraints, rules regarding bias, and criminal offences involving autonomous technologies.
The most likely starting point for international legal regulation along these lines would be the European Union, for it is the only international organization with the institutional capacity, a pre-existing mandate and the political appetite to adopt such far-reaching binding rules (indeed, it has already taken preliminary steps toward intra-community regulation of AI; see European Commission 2020). Although the European Union formally has the authority to legislate only vis-à-vis its member states, the effect of any resulting regulation would extend beyond the organization’s borders. The situation might be analogous to that of the European Union’s General Data Protection Regulation: to the extent that foreign companies offering products and services in the field of automation want to operate in the EU market, they would be obliged to follow applicable EU rules. This presents a strategic opportunity for the European Union, which is uniquely well positioned to act as a pioneer in this area, thereby shaping the conversation about the appropriate legal and regulatory regime for autonomous technologies.
Regulation of National Security and Defence-related Autonomous Technologies
It is widely accepted that autonomy, in particular AI, will revolutionize warfare. Examples of contexts in which autonomy is and will be employed include information processing, notably intelligence analysis; unmanned weapon systems; realistic military training; psychological warfare; and military command and control. It is therefore unsurprising that great-power competition for supremacy in military autonomous technologies and AI is under way.
Because warfare is governed by a dense international legal framework, many rules already exist that regulate the use of autonomous technologies in war. These rules form a regime of international law known as international humanitarian law (IHL), also called the law of armed conflict.
Scholarship on the interplay between these new technologies and IHL has primarily focused on the use of lethal autonomous weapons (Schmitt and Thurnher 2013; O’Connell 2014; Sassòli 2014; Geiss 2015). At the state level, a group of governmental experts convened under a UN umbrella has confirmed that “international humanitarian law continues to apply fully to all weapons systems, including the potential development and use of lethal autonomous weapons systems” (Group of Governmental Experts 2017, para. 16(b)), which logically leads to the conclusion that other military uses of autonomous technologies are likewise governed by this subfield of international law.
IHL, in particular its rules governing the conduct of hostilities (that is, the way in which a war is waged), is relevant insofar as the international community has not prohibited particular means or methods of warfare. At present, no automated or autonomous technologies have been banned, although states have been under political, scholarly and civil society pressure to prohibit fully autonomous lethal weapons since the launch of the “Ban Killer Robots”2 movement. For instance, the European Parliament in 2018 adopted a resolution urging the European Commission, individual member states and the European Council to “work towards the start of international negotiations on a legally binding instrument prohibiting lethal autonomous weapon systems” (European Parliament 2018, para. 3). In the absence of such a treaty, existing IHL rules govern their use.
The issue of lethal autonomous weapon systems aside, it is clear that autonomous technologies will increasingly find military use. It is equally clear that applying the pre-automation, pre-autonomy rules of IHL to those technologies is not without challenges. Many existing debates over how to apply IHL rules extend equally to autonomous systems, as with questions concerning the permissibility of directing non-destructive military operations against civilian objects3 or the geographical boundaries of the applicability of humanitarian law.4
Yet issues unique to autonomy are bound to arise as well. For example, a cross-cutting issue in IHL, as well as in related fields of international law such as international criminal law, concerns accountability. If, for instance, autonomous cyber capabilities unexpectedly cause harm to civilians or damage to civilian objects, questions of responsibility attach. Under IHL, states are responsible for ensuring that their weapon systems are used in a manner consistent with the conduct of hostilities rules. This obligation raises the question of how to treat weapon systems that operate autonomously, perhaps even using AI to select targets. If the armed forces using a system cannot assess, with the requisite degree of reliability, the harm it is likely to cause to the civilian population or civilian objects, whatever the correct standard of likelihood may be, those armed forces are using the weapon indiscriminately in the battlespace. This would constitute a breach of IHL by the state employing the autonomous weapon system.5
Furthermore, international criminal law imposes individual criminal responsibility for war crimes, which include directing attacks against civilian objects with “intent” and “knowledge.”6 Questions about how criminal tribunals would apply these notions to civilian damage caused by autonomous systems in circumstances such as those described above would loom large in any prosecution.
Autonomy is also being used for national security purposes, both benign and malicious, beyond the battlefield. As malicious uses are exposed, they often raise legal and ethical alarm bells. The highest-profile case of a government resorting to these technologies to surveil and identify individuals is the Chinese government’s continuous monitoring of the Uighur Muslim minority (Taddonio 2019), a case that has set a precedent for other authoritarian governments to employ advanced technologies for illicit purposes. Adding to the complexity of the situation is commercial opportunism. The case of Clearview AI, a facial recognition software company that automatically scrapes images from the internet to form a database of several billion files, thereby enabling facial recognition (Hill 2020), is a telling example of how the private sector, if left to self-regulate, risks societal harm that is not necessarily outweighed by the legitimate use of its services for national security and public order purposes. These and other cases demonstrate the potential negative effects of autonomy and automation, including the erosion of human rights such as the right to privacy, freedom of the press and freedom of assembly. They also highlight the need to pay even greater attention to preserving the rule of law and basic moral and ethical values in the face of technological developments.
Conclusion
New technologies present normative challenges to both domestic and international law, in particular with regard to the suitability of pre-existing rules. Certain technology-specific issues are bound to arise that will require regulatory and legislative action. The resulting normative evolution will occur first in the domestic setting, for international law-making is a relatively slow process, especially in fields with a national security nexus.
In this process, states will face many challenges. A fundamental difficulty stems from the dual-use nature of autonomous solutions. Domestic regulators and legislatures, as well as states engaged in the interpretation and adoption of international law, will therefore need to tread carefully, ensuring both that the rules and interpretive positions they adopt do not stifle innovation and that they effectively prevent malicious uses of the technology. Sensible normative frameworks must be collaborative: governments should work with industry and civil society in adopting fit-for-purpose governance regimes, and states should work together to fashion rules that advance shared values.
A more practical challenge is that it is difficult to regulate something one does not fully understand. Autonomous technologies are in their infancy, and predicting scientific developments in this field, even in the near term, is difficult, if not impossible. Any new laws and regulations will need to be sufficiently general so as not to become outdated quickly, but not so vague that they provide no meaningful guidance. The difficulty of this undertaking means that no overarching area-specific rule set will be adopted in the near future: neither domestic legal acts governing autonomous technologies writ large nor an international treaty on autonomy as such. Instead, we may expect discrete rules governing relatively specific aspects of autonomous technologies.