FRANK SAUER: Humans make mistakes, but we’re kind of slow. Machines also make mistakes, but when they do, they all make the same mistake, at the same time, at lightning speed.
BRANKA MARIJAN: When we say that, you know, soldiers make mistakes and these systems don’t — I don’t think that’s quite accurate. AI fails differently than humans.
JAMES ROGERS: It’s very hard to define just what AI is, what automation is, and then what true AI and future autonomous systems are.
FRANK SAUER: The thing that weighs heavily, I think, on many people’s minds in capitals around the world is not necessarily the ethical implications. It really is the acceleration of processes. And we’re in a real bind there, because from a military perspective, accelerating the completion of the targeting cycle is actually the key benefit to be gained from autonomous weapons systems.
JAMES ROGERS: They want to be able to act faster than the enemy. They want to be able to take these drones out of the sky or to make better targeting decisions quicker.
BRANKA MARIJAN: We already have a lot of precise weaponry that’s used very imprecisely. So, the introduction of autonomous systems won’t necessarily change the behaviour of different militaries.
JAMES ROGERS: So, we need to be really careful about how much we rely on computer systems when we’re making these life-and-death decisions.
BESSMA MOMANI: Sadly, in the case of artificial intelligence and all emerging technologies applied to war, no one really has a sense of the rules. I think what this series demonstrates is that it can go very, very wrong if not regulated.
JAMES ROGERS: And it’s here that we’re hoping to introduce some sort of measures around meaningful human control or what others are calling appropriate human control. And the simple line there is that a human should always make the decision about whether or not another human dies.
BRANKA MARIJAN: At the moment, no one can be held accountable for actions that are carried out by an autonomous system. Our laws of war simply were made for humans. They were not made for machines.
FRANK SAUER: We’re clear on what it is that we’re talking about. We’re clear on what regulation, at least in an abstract sense, would look like. The key question to me now is whether we’ll find the political will to do something about it, either at the international level or, if that’s not possible, then domestically.
AARON SHULL: This affects individuals everywhere. And as a consequence of that, it’s our duty, I think, as a think tank, to tell the story of these technologies and the problems around their governance, and to advance potential solutions.
FRANK SAUER: Actually, it’s about us. It’s about us humans, and much less about the machines and what they do or might be able to do in the future. The sooner people understand this, the sooner we can get to smart solutions for how to deal with it.