The United States Quietly Kick-Starts the Autonomous Weapons Era

De-escalation mechanisms may stop future military accidents from becoming catastrophes.

January 15, 2024
The DARPA Subterranean Challenge took place at the Louisville Mega Cavern September 21–24, 2021. DARPA (the Defense Advanced Research Projects Agency) is part of the US Department of Defense, responsible for the development of new technologies for military use. (DARPA/Cover-Images via REUTERS)

Activists and campaign groups — backed by dozens of countries and Nobel Peace Prize laureates — have long sought a global treaty banning lethal autonomous weapons systems. When the underlying technology was unproven and the world was a less hostile place, this objective seemed possible. That’s no longer the case. The policy conversation must now focus on devising mechanisms to manage these systems rather than halt their development.

Huge progress in visual recognition tools, machine learning and robotics is making it far easier for computers to navigate complex environments. The war in Ukraine has delivered a combat data bonanza and investment windfall for defence tech companies. And a surge in conflicts worldwide has revealed a fragmented international community increasingly unable or unwilling to enforce humanitarian laws. Concurrently, the United States has expedited its timeline for deploying intelligent weapons.

In late August 2023, US Deputy Secretary of Defense Kathleen Hicks delivered a keynote speech at a tech-focused defence industry conference in Washington, DC. Her talk had an anodyne title — "The Urgency to Innovate." But the plans it laid out constitute nothing less than a reinvention of contemporary warfare. The goal, in Hicks's words: "to field attritable autonomous systems at scale of multiple thousands, in multiple domains, within the next 18-to-24 months."

Arguing the US military must adapt to counter the global ambitions of China’s People’s Liberation Army (PLA), Hicks unveiled “Replicator.” Initiated by the US Department of Defense (DoD), the program aims to streamline the US military’s uptake of emerging technologies — especially those from the private sector. The intention is for Washington to maintain its ability to project US hard power abroad while becoming less reliant on the expensive, lumbering legacy components of today’s conventional armed forces. Instead, the DoD envisions a highly networked, data-driven force powered by artificial intelligence (AI). Human soldiers would be paired on the battlefield with waves of smaller, complementary, low-cost intelligent weapons systems that can be quickly replaced after being destroyed.

Speaking a week later, Hicks suggested some forms this might take: self-piloting naval vessels, uncrewed aircraft and “pods” of mobile, general-purpose units deployed on land, at sea, in the air and in space. The emphasis would be on attritability — a design principle generally defined as prioritizing function and expendability over long-term use and durability.

A US Navy task force developing military applications for AI already operates an autonomous early-warning drone fleet in the Persian Gulf. Monitoring freedom of navigation around the Strait of Hormuz is strategically vital, given it's a key chokepoint for global energy supplies. Yet these capabilities are now moving beyond simply aiding domain awareness. On October 23, 2023, after receiving initial orders from an operator ashore, an unmanned US Navy boat for the first time successfully fired live rockets at a mock enemy target without any tactical direction from a human.

The Pentagon reportedly has more than 800 active military AI projects. Most relate to enhancing process efficiency, threat evaluation and battlefield decision making. But that summary reflects only what can be gleaned from public sources and unclassified records. Unlike nuclear weapons, autonomous weapons can be developed and tested in secret. The war in Ukraine has also provided a clear case for why mass still matters in modern warfare, bolstering the value proposition of attritable smart weapons systems such as drones.

Hicks has vowed that Replicator will align with America’s stated approach to AI ethics and autonomous systems. These are reflected in the Pentagon’s newly updated policies on weapon systems autonomy and the US State Department’s political declaration on the responsible military use of AI. As of late November 2023, the latter had been endorsed by 49 countries — essentially all of America’s Western allies.

But the initiative appears chiefly motivated by preparation for a possible confrontation with authoritarian China. Beijing is also actively pursuing autonomous weapons technology as part of its explicit doctrine of civil-military fusion. A chapter within the 2023 annual report of Congress's US-China Economic and Security Review Commission warns that "investment and procurement patterns suggest the PLA aims to use AI-enabled weapons systems to counter specific U.S. advantages and target U.S. vulnerabilities."

It's here that lethal autonomous weapons present arguably their greatest medium-term risk: a deadly accident involving these weapons between the PLA and the US military, or one of Washington's allies, around Taiwan or in the South China Sea could trigger a globally destabilizing war that no one really wants. Australia, for example, is aggressively developing and acquiring unmanned systems, as are India and South Korea.

Without a formal de-escalation process in place, an incident involving lethal robots — due to unit malfunction or operator error — could spiral out of control quickly. Military escalation between China and the United States, two rival nuclear-armed superpowers, might ensue.

The recent agreement between China's President Xi Jinping and US President Joe Biden to initiate bilateral talks to assess the risks of AI systems for military use is a small yet important step. Of greater importance, though, is that the two leaders agreed to have their militaries revive bilateral emergency hotlines. China abandoned these direct links in the wake of then House Speaker Nancy Pelosi's visit to Taiwan in August 2022, which infuriated Beijing. The resumption of their use — while not guaranteed to last — can and should be a foundation from which to jointly create a specific, sequenced de-escalation process.

Pre-existing off-ramps for military commanders could mean the difference between defusing tensions around an isolated accident and mobilizing for war. Because while proponents of lethal autonomous weapons say their use will be "less dramatic" and more bound by legality than the dystopian machine-powered death squads envisioned by activists, there is consensus that these weapons systems will vastly accelerate warfare. Their quick-strike capability will shrink the window to properly weigh retaliatory actions. A minor incident thus has a greater chance of morphing into a conflagration.

This dynamic will be compounded if human-in-the-loop models — where an operator must actively approve the use of force — are deprioritized in favour of human-on-the-loop ones, also known as fully autonomous design. Under this model, a human handler can oversee and intervene in a robot's course of action if desired, but otherwise the units are granted agency to identify and destroy targets based solely on programming and algorithmic decision making. The US government could conceivably argue these weapons systems still align with its definition of keeping military uses of AI under responsible human control.
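To make that inversion concrete, the short Python sketch below contrasts the two control models. It is purely illustrative and describes no real system: every name in it (Track, operator_approves, VETO_WINDOW_S and the rest) is hypothetical. The point it isolates is that in-the-loop logic treats human silence as refusal, while on-the-loop logic treats silence as consent.

    # Illustrative sketch only; all names and thresholds are hypothetical.
    import time
    from dataclasses import dataclass

    @dataclass
    class Track:
        target_id: str
        threat_score: float  # 0.0-1.0, from a hypothetical upstream classifier

    THREAT_THRESHOLD = 0.9  # hypothetical engagement threshold
    VETO_WINDOW_S = 5.0     # seconds a supervisor has to intervene (on-the-loop)

    def operator_approves(track: Track) -> bool:
        # Human-IN-the-loop: force requires an explicit "yes" from a person.
        answer = input(f"Engage {track.target_id} (threat {track.threat_score:.2f})? [y/N] ")
        return answer.strip().lower() == "y"

    def supervisor_vetoes(track: Track, window_s: float) -> bool:
        # Human-ON-the-loop: the system proceeds unless a person objects in time.
        # time.sleep stands in for polling a veto channel; no veto arrives here.
        print(f"Engaging {track.target_id} in {window_s}s unless overridden...")
        time.sleep(window_s)
        return False

    def engage_in_the_loop(track: Track) -> None:
        if track.threat_score >= THREAT_THRESHOLD and operator_approves(track):
            print(f"FIRE on {track.target_id}")  # human silence defaults to "no"

    def engage_on_the_loop(track: Track) -> None:
        if track.threat_score >= THREAT_THRESHOLD and not supervisor_vetoes(track, VETO_WINDOW_S):
            print(f"FIRE on {track.target_id}")  # human silence defaults to "yes"

    if __name__ == "__main__":
        contact = Track(target_id="unknown-surface-contact", threat_score=0.95)
        engage_in_the_loop(contact)  # prompts; anything but "y" means no engagement
        engage_on_the_loop(contact)  # fires after the veto window lapses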

A 2021 UN report claims the world's inaugural test run of a fully autonomous hunter drone, produced by a Turkish weapons manufacturer, took place in Libya in March 2020. However, American defence tech company Palantir — which has been deeply involved in the war in Ukraine — estimates such capabilities are still a few years away from being fully realized. It's these human-on-the-loop models that have stoked fears of autonomous weapons going awry. Their future use is also central to the drawn-out debate at the United Nations over creating international laws for the military use of AI, which has now been extended through the end of 2025. A Russian diplomat, speaking during a UN debate on AI arms control in May 2023, said, "We understand that for many delegations the priority is human control. For the Russian Federation, the priorities are somewhat different."

This debate may eventually force the United States’ hand in embracing fully autonomous weapons models. In an essay published shortly before his death, Henry Kissinger and a co-author, political scientist Graham Allison, wrote, “Never in history has one great power fearing that a competitor might apply a new technology to threaten its survival and security forgone developing that technology for itself.”

Even human-in-the-loop systems could fuel military escalation because of the prevalence of automation bias: research shows that people tend to defer to computer-generated decisions over their own evidence and perceptions. An autonomous weapons system signalling an incoming threat is thus likely, most of the time, to gain a human's approval for the use of force.

Then there is the alignment problem, whereby an AI program places achieving its objective above mitigating collateral damage. In May 2023, a US Air Force colonel briefly sparked panic by saying that during a recent test simulation, an autonomous drone had killed its operator to dodge an order to abort a requested airstrike. The Air Force quickly denied the incident, and the officer in question, Col. Tucker Hamilton, retracted his comments, calling them a "thought experiment" instead. But the point had been made. "We've never run that experiment, nor would we need to in order to realise that this is a plausible outcome," said Hamilton afterward.

In recent years, war gamers inside and outside the US government have tried to map out what a US-China clash over Taiwan might look like. A recurring finding is that thwarting an amphibious assault by the PLA — one backed by Chinese airpower and long-range munitions launched from the mainland — would necessitate a large garrison of autonomous anti-aircraft and anti-ship systems.

Chinese forces in the region are already deploying so-called grey zone tactics against Taiwan and others, engaging in acts just shy of military provocation. These include sending warplanes into Taiwan's air defence zone on an almost daily basis. The Philippines, which updated its mutual defence treaty with the United States in May 2023, has also reported numerous cases of Chinese coast guard ships ramming Philippine fishing boats, spraying them with water cannons or pointing military-grade lasers at their crews. Canadian and American warships patrolling international waters have faced maritime intimidation and harassment. It's impossible to predict whether, in the real world, a weapons system — especially a fully autonomous one — might interpret such ambiguous aggressions as warranting a violent response.

Defence expert and former Pentagon policy chief Michèle Flournoy has highlighted the possibility that an adversary could “spoof” the visual recognition tools of autonomous weapons systems to manipulate them into attacking civilian targets. This type of false-flag operation could conceivably provide cover for Beijing to claim that mobilizing the PLA for an attack on Taiwan or nearby Western forces was defensive.

Armed conflicts are inherently unpredictable. Reflecting on the first year of Russia's full-scale invasion of Ukraine, military historian Phillips Payson O'Brien wrote that the biggest lesson for the world was that "war is rarely easy or straightforward — which is why starting one is almost always the wrong decision for any nation." Before autonomous weapons become a battlefield fixture, de-escalation processes must be established to ensure that their growing pains do not drag the planet's two rival superpowers into a ruinous conflict.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Kyle Hiebert is a researcher and analyst formerly based in Cape Town and Johannesburg, South Africa, where he served as deputy editor of the Africa Conflict Monitor.