How is the weaponization of AI operating within the contemporary global governance architecture?
The most complex international governance challenges surrounding artificial intelligence (AI) today involve its defence and security applications, from swarms of “killer” drones to the computer-assisted enhancement of military command-and-control processes. These essays emerged from discussions at a webinar series on the ethics of AI and automated warfare hosted by the University of Waterloo’s AI Institute.
What are the ethical and legal considerations of automated warfare and artificial intelligence?
Artificial intelligence may influence deterrence in both theory and practice, and several distinct futures are possible.
The Third Drone Age is characterized by non-state actors using the latest advances in drone technology to pursue their political objectives.
New AI techniques for conducting offensive cyber operations could manipulate and corrupt essential civilian data sets.
New military technologies create “accountability gaps” in armed conflict, highlighting the relative lack of legal accountability for accidental civilian harm under international humanitarian law.
Regulations mandating civilian protection measures are needed when autonomous weapons systems are deployed.
Delegating battlefield decisions to machines raises fundamental ethical, legal and political questions, particularly where the military use of force is concerned.
Agreeing on guardrails for the use of artificial intelligence in warfare should be the international community’s current focus.
Humans should be kept in the decision-making loop when lives are at stake.
Autonomous weapons technology is here, but the laws governing its use are lacking.
What should humans still be doing on the battlefield in the twenty-first century?