A few years ago, in the impressive circular conference room of the Palais des Nations at the United Nations in Geneva, the author attended one of the annual conferences of the Convention on Certain Conventional Weapons (CCW) dedicated to lethal autonomous weapons. Surprisingly and worryingly, the discussion seemed rather disjointed. The well-meaning diplomats were talking past each other, and there was a noticeable disconnect in the debate. Some diplomats were talking about a future in which warfare is dominated by amazingly intelligent, perhaps even self-aware, machines. But others were discussing a near present of simple drones with the terrifying and terrible ability to identify, track and target people on the ground without any meaningful human control.
It is a fundamental mistake to conflate these two scenarios. And the ongoing discussions about whether, how and when to regulate lethal autonomous weapons are hampered by such confusion. We need to consider the two different futures separately, as the technical, legal and ethical challenges they pose are very different.
This confusion is perhaps not surprising. After all, it is a confusion that Hollywood has helped create. On the one hand, blockbuster movies have prepared us for the first of these futures. Will we one day come up against an intelligent humanoid robot like the T-800 from the Terminator movies that greatly exceeds human intelligence? On the other hand, films such as Angel Has Fallen or the Future of Life Institute’s YouTube video “Slaughterbots” have prepared us for a second, simpler future of autonomous drones. Will relatively unsophisticated drones one day hunt down humans using the sort of facial recognition software already found in our smartphones?
A recent UN report has even suggested that this second future may no longer lie in the future at all: such drones may have been used in March 2020 in the military conflict in Libya (United Nations Security Council 2021). The report suggests that Turkish-made drones hunted down and killed retreating forces affiliated with Khalifa Haftar without any data connection to a human operator.
From a technological perspective, the two futures are very different. Indeed, researchers working in this area are careful to distinguish between these two futures. The distinction to be drawn is between artificial intelligence (AI) and artificial general intelligence (AGI).
There are many differing definitions of AI. Broadly speaking, AI researchers are attempting to develop software to perform tasks that humans require intelligence to complete. Humans use intelligence to perceive, reason about, act in and learn from the world. Therefore, AI includes tasks such as perception (“What can I see to the north?”), reasoning (“Are those retreating soldiers to my north still a threat?”), action (“How do I best act to counter the threat posed by these enemy units?”) and learning (“What can I learn from this hostile encounter with the enemy?”).
AI is currently limited to narrowly focused tasks. We can write software to distinguish tanks from buses, or to translate Russian into English. But we cannot write software to perform broad, open-ended tasks, or to match the flexibility, adaptability and creativity of humans. That would take us to AGI.
AGI is the hypothetical ability of an intelligent agent to do any task that a human can. It is also called strong AI (as opposed to weak or narrow AI). Beyond AGI is superintelligence, where an agent far exceeds human-level intelligence.
It is worth noting that AGI is a minority pursuit in scientific research. Most researchers in this space focus on AI and not AGI. There are a few high-profile research organizations with AGI as their overall goal, such as Alphabet’s DeepMind and the Microsoft-backed OpenAI. However, most university and industry AI centres are concentrating their efforts on the more immediate development and deployment of (narrow) AI, which already offers valuable returns.
It is also worth noting that AGI may be some way off. In a survey of 300 experts in AI around the world, the median estimate of when we would achieve AGI was 2062 (Walsh 2018). A small fraction thought we might never achieve AGI. Equally, none thought it was imminent, that is, within the next five or 10 years. But 92 percent estimated it would happen sometime in the next century. There was, however, great variability in their estimates. We do not know what is missing to get from AI to AGI, so it is hard to know what we need to invent to get there or how long it might take. However, there are no laws of physics we know of that will prevent us from eventually getting there. It is therefore worth thinking about and preparing for an AGI future.
So what are the different challenges posed by putting AI and AGI on the battlefield? Before considering this question, it is important to acknowledge the many advantages that both AI and AGI offer in a military setting. Robots are, for example, the perfect technology for clearing a minefield. No one should ever risk a life or a limb to clear another minefield. And any robot that can clear a minefield is going to need some AI to go about this dangerous task. As another example, autonomous trucks are the perfect means to get supplies into contested territory. And autonomous driving requires AI to perceive, reason and act. AI also offers much promise in managing the information demands of the modern battlefield. AGI offers all these promises and more. Most importantly, it would permit humans to be removed completely from the battlefield.
Let us turn now to the risks. One of the biggest risks of putting AI on the battlefield is incompetence. We run the risk of handing over the decision of who lives and who dies to machines that may be incapable of following international humanitarian law (IHL). Principles such as distinction and proportionality are not easily programmed. Indeed, there is a debate as to whether such principles could ever be programmed into a machine; perhaps, ultimately, they cannot be. But the AI being fielded today, such as that used in the autonomous drones that may have hunted down and killed retreating forces in Libya, does not uphold these principles. Similarly, the AI systems to be fielded in the near future will not be able to abide by IHL.
If you are a terrorist, there is little problem with incompetent machines that violate IHL. Militaries may be careful to field only AI-enabled weapons that are more accurate and cause less collateral damage than humans. But terrorists will not care if an AI-enabled weapon is only 50 percent accurate. Indeed, that is perhaps a more terrifying weapon than one that is more accurate.
Another risk is that of changing the speed, scale and character of war. Computers can work at much faster time scales than humans. And the great thing about code is that once you get a computer to do something, you can get it to do it a thousand times. Previously you needed an army. You needed to equip and feed that army. Now you need a single programmer. And unlike an army, a computer will unblinkingly carry out any order, however evil. These are the perfect weapons for rogue states and terrorists.
A third risk is to stability. The chance of unplanned conflict will increase dramatically. We know what happens when we have complex AI systems facing each other in an adversarial setting. It is called the stock market. And with great regularity, we get unexpected feedback loops and flash crashes. With stocks, we can put circuit breakers in place, and unwind trades when things go wrong. But we will not be able to unwind any of the deaths that occur when a war starts between North and South Korea because of the complex interactions between AI systems facing each other across the demilitarized zone.
Attribution is another challenge that will threaten stability. It may be difficult to determine who is behind AI-powered weapons. Indeed, there have already been drone attacks on Russian military bases in Syria where it remains unclear who was responsible. We are then faced with the difficult problem of deciding how to respond given this uncertainty.
Turning to AGI, some of the risks disappear. For instance, by the very definition of AGI, there is no risk of incompetence. Any AGI system has to be at least as capable as a human. Indeed, it is likely to be more capable, given that it may have superior sensors, along with various computational advantages such as greater memory and speed. In fact, this risk may actually reverse. If the AGI system is more capable and commits fewer errors, there is an ethical question of whether we are morally justified in letting humans continue to fight wars.
Other risks remain, such as the risk of changing the speed, scale and character of war, and the risk to stability. In addition, new risks emerge with AGI. For instance, there is now the question of sentience. If AGI systems achieve or require sentience, then there may be a moral obligation to protect them from suffering. This, however, is very speculative as we know too little about consciousness in biological beings to know whether it is possible in silicon, or whether it is necessary for AGI.
Returning to the diplomats mentioned at the opening of this essay, what should those discussing lethal autonomous weapons take from this discussion? First and foremost, AI-powered weapons will be weapons of terror and weapons of error. And this threat is very pressing. We urgently need to regulate how we hand over life-and-death decisions to AI-based systems. The threat of AGI, however, is likely much more distant. While philosophers and others may wish to ponder the challenges here, there is little urgency for diplomats to do so.
Above all, it is imperative that the international community agree on guardrails around the use of AI in warfare. A decade of discussion at the CCW in the United Nations has produced little. It is time, then, for one of the countries that has shown leadership in this area to take the discussions outside that forum. This was a successful means of regulating landmines. There is no reason to suppose it could not also succeed with AI. We would be letting humanity down if we did not try.