Computer systems are playing an ever-greater role in surveilling, identifying and categorizing potential targets on the battlefield. Modern militaries are deploying autonomous weapons systems to speed up their responses to threats: a human operator sets the mission parameters, but an autonomous system can then take over and execute the mission.
Frank Sauer, head of research at the Metis Institute for Strategy and Foresight and a senior research fellow at the Bundeswehr University in Munich, looks at the legal and ethical concerns around the use of autonomous weaponry.
Sauer argues that while artificial intelligence (AI) systems are very fast at categorizing objects, they lack the nuance required to make life-and-death decisions: “Machines don’t understand anything, they’re just good at matching patterns.” When deciding how much control an autonomous system should have, governments need to weigh the requirements of international humanitarian law and the ethical implications, because granting AI complete, unregulated control could have consequences that spiral beyond human oversight.