There is a growing awareness among policy makers and researchers that the rise of artificial intelligence (AI) must be met with an active examination and improved understanding of the ethics surrounding AI. Adopting and using automation without rigorously addressing its potential impacts and consequences, or without the necessary governance mechanisms in place to control it, is now widely regarded as a recipe for disaster.
In particular, we need to exercise exhaustive due diligence when it comes to the automation of war and the proliferation of lethal autonomous systems. The adoption of AI is having a profound impact on every sector, including military organizations and the global arms industry. The use and weaponization of machine learning by the military merit considerable scrutiny and debate.
This past October, the US government’s Defense Innovation Board released its recommendations on the ethical use of AI by the US Department of Defense. Created by the Obama administration and chaired by former Google CEO Eric Schmidt, the board includes representatives from Microsoft, Facebook, and Google, a number of universities (including the Massachusetts Institute of Technology and the California Institute of Technology), venture capital firms, and astrophysicist Neil deGrasse Tyson, director of the Hayden Planetarium.
Together, this group came up with five principles for the ethical use of AI by the military.
The first principle is responsibility: human beings should “exercise appropriate levels of judgment” and be responsible for the “development, deployment, use, and outcomes” of AI. This principle sounds straightforward, but it is also the most contentious. The use of the term human judgment, rather than meaningful human control, marks an important distinction, as we’ll see later.
The second principle is equitability: the military should take action to avoid unintended bias that would inadvertently cause harm. This touches on the larger concern about bias in AI, and it is not clear that it can be resolved. There is no such thing as neutral data, given the potential biases or agendas of those collecting it. In a military conflict, for example, intelligence about the battlefield could come from one party to the dispute, which might be biased against the other parties. An intended bias would be targeting the enemy (with prejudice), whereas an unintended bias might be collateral or civilian harm to those on the battlefield who were discriminated against in the data or intelligence-collection process.
The third principle is traceability: the concept of algorithmic transparency and explainability, namely that any machine-learning model or algorithm should be able to explain how and why it made a given decision. This is a concern with AI in general, but it is especially important in military affairs. Not only must accountability be maintained within the chain of command, but attribution is crucial when determining who is responsible for an attack. A war should not break out because it is unclear who is responsible for an attack carried out by an autonomous system. Some researchers, however, are now suggesting that traceability may not even be possible.
The fourth principle is reliability: the military doesn’t want an AI application that behaves erratically from one use to the next. Rather, it requires reliable tools that perform as expected. This principle reflects a growing belief that we should treat AI as we would a new prescription drug: test it thoroughly and understand its effects and side effects before using it.
Finally, the fifth principle is governability. This principle addresses perhaps the greatest fear we hold when it comes to the automation of war and weaponry: that the system will escalate out of control. The board calls for “human or automated disengagement or deactivation of deployed systems that demonstrate unintended escalatory or other behavior.”
The board’s recommendations are non-binding (although they were embraced by the US Department of Defense Joint AI Center), and are meant to provide civilian and commercial perspectives on how the military ought to embrace automation.
It’s likely that many of the companies represented on the board hope to supply machine-learning technology to the military. Microsoft, for example, just won a major cloud contract with the US Defense Department. This courtship between Silicon Valley and the Pentagon has been contentious, prompting protests from the employees of these companies, including some high-profile resignations.
One such resignation came from Liz O’Sullivan, who left the AI company Clarifai upon discovering that its machine-learning visual recognition technology would be used in the design of autonomous weapons systems, the lethal devices often called “killer robots.”
O’Sullivan is an AI developer as well as an AI activist, and is involved with the International Committee for Robot Arms Control (ICRAC). Part of ICRAC’s work focuses on the first principle of responsibility and the importance of “meaningful” human control of autonomous systems, outlined in a report to the UN Convention on Certain Conventional Weapons’ governmental experts at their meeting this past August on lethal autonomous weapons.
In a series of tweets responding to the release of the Defense Innovation Board’s principles, O’Sullivan noted the important distinction between human judgment and human control, observing that such vague language would allow considerable flexibility in how autonomous systems are used.
For example, O’Sullivan cites the difference between “human in the loop” and “human on the loop,” with the former being part of the decision-making process and the latter supervising the decision making.
When a human operator is in the loop, there is meaningful human control: the human is engaged in every decision the AI makes. In contrast, a human operator on the loop exercises judgment only, supervising an AI that makes decisions on its own rather than being involved in each one.
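To make the distinction concrete, here is a minimal, purely illustrative sketch in Python. Nothing in it corresponds to any real weapon system or API; the function names (propose_action, request_human_approval, human_requests_halt, execute) are hypothetical stand-ins. The point is the control flow: in the loop, nothing executes without per-decision approval; on the loop, the system acts on its own and the human can only intervene from outside.

```python
# Illustrative sketch only: all names are hypothetical stand-ins,
# not any real system's API.

def propose_action(target):
    """Toy stand-in for the AI's decision: propose engaging a target."""
    return f"engage {target}"

def request_human_approval(action):
    """In the loop: a human must approve each proposed action."""
    return input(f"Approve '{action}'? [y/N] ").strip().lower() == "y"

def human_requests_halt():
    """On the loop: a supervisor may halt the system, but is not
    consulted before each action. (Placeholder that never halts.)"""
    return False

def execute(action):
    print(f"Executing: {action}")

def human_in_the_loop(targets):
    """Meaningful human control: a human is engaged in every decision."""
    for target in targets:
        action = propose_action(target)
        if request_human_approval(action):   # nothing happens without approval
            execute(action)

def human_on_the_loop(targets):
    """Human judgment only: the AI decides and acts; the human supervises."""
    for target in targets:
        if human_requests_halt():            # oversight, not per-decision control
            break
        execute(propose_action(target))
```

The scaling problem discussed next falls out of this structure: approving every individual decision does not scale to many systems at once, which is exactly why the weaker on-the-loop model appeals to militaries and worries critics.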
This difference plays out most clearly when we move from the control of a single autonomous weapon system to the control of an army of them. Meaningful human control of an army of autonomous machines by a single soldier is arguably not possible (or should not be possible), whereas human oversight of that army could be.
This is a great example of why the language used in ethics guidelines matters. Too often, ethics are used to justify practices that perhaps should never have been permitted in the first place. Debating the ethics of lethal autonomous weapons may be a distraction from the larger question of whether killer robots should be permitted to exist at all.
Unfortunately, that debate may be happening too late. Instead of arguing over whether war should be automated at all, we’re focusing on how it will be automated. The analogy often drawn is with nuclear war: militaries have managed nuclear weaponry without blowing up the planet, so we should trust them to manage lethal autonomous weapons as well?
That’s a rather low bar. The more appropriate comparison is not preventing nuclear annihilation but preventing nuclear proliferation, and on that front the track record is not so good. Similarly, the proliferation of lethal autonomous weapons is something we should be actively working to prevent.