AI and the Future of Deterrence: Promises and Pitfalls

November 28, 2022

This essay is part of The Ethics of Automated Warfare and Artificial Intelligence, an essay series that emerged from discussions at a webinar series.

With its roots tracing back to the Roman Empire, deterrence has a long and illustrious history (Wenger and Wilner 2012; Wilner 2015; Wilner and Wenger 2021; Wilner 2022). Today, deterrence theory is enjoying a “fifth wave” of academic scholarship, in which innovation, science and technology play prominent roles. A small subset of this emerging scholarship explores whether and how artificial intelligence (AI), machine learning and other related technologies influence the nature, scope and practice of deterrence, compellence and coercion. What follows is a speculative account of how AI and deterrence might intersect in the coming decade.

In a nutshell, deterrence involves using a combination of threats (for example, retaliation, punishment, denial, delegitimization) to influence an opponent’s behaviour. The goal is to convince another to forgo a particular action. Compellence, a close relative of deterrence that works in reverse, convinces another to pursue a particular behaviour it might not otherwise have taken. Both processes fall under the umbrella concept of coercion. In all cases, at least two actors are involved: a defender communicates a threat in hopes of changing a challenger’s behaviour; the challenger, in turn, weighs the cost of the threat against the presumed benefits of the action and decides whether or not to accede to the defender’s demand. Threats of punishment or retaliation (for example, through sanctions, censure, kinetic attack or cyberattack), promises of denial (diminishing the challenger’s assumed benefits through defence, resilience or defeat) and, at times, other considerations (delegitimization, normative constraints or incentives) anchor most coercive interactions. Importantly, although international relations theory and political science more broadly are often assumed to have made the greatest impact on coercion theory and practice as they relate to geopolitics and interstate conflict, other disciplines, such as psychology and criminology, and other fields, such as terrorism and intelligence studies, have provided important insights that better situate deterrence within the context of contemporary insecurity.

Interestingly, AI, like coercion itself, is also said to have evolved through several waves of scholarship. Defining AI remains problematic: the field is marked by different technological approaches that attempt to accomplish different and at times divergent scientific tasks (Spector 2006). Some scholars, such as Stuart Russell and Peter Norvig (2009) — the former of whom, incidentally, has publicly warned that the development of some forms of AI may be as dangerous as the development and proliferation of nuclear weapons (Russell, quoted in Bohannon 2015) — describe a number of AI taxonomies, including systems that think or act like humans, such as those that mimic human intelligence, and systems that think or act rationally, solving problems and behaving accordingly. The first wave of AI trained computers to complete specific tasks. The process works by taking particular facts derived from a particular system — playing chess, for instance — and turning those facts into rules that a computer can understand, act upon and eventually excel at. The second wave used statistical learning to train AI to apply probabilistic reasoning when handling and interpreting data and information: training data helps a computer learn and then adapt to new, previously unseen data. Recent advances in machine learning, deep learning and neural networks belong to this wave of research, and progress in voice and facial recognition and object classification, for instance, is among its better-known results. The third wave of AI is still under development; we are in it now. It is being built around “contextual adaptation,” such that AI systems will themselves build “explanatory models” against which they will learn, reason and continue to extrapolate within and beyond specific and limited domains. In sum, AI, like deterrence, continues to evolve and expand, reflecting the push and pull of innovation, scientific and engineering discovery, and technological advancement.
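To make the second-wave idea concrete, the snippet below is a minimal, purely illustrative sketch of statistical learning: a model is fitted to labelled training data and then produces probabilistic predictions on data it has never seen. It is not drawn from the author’s project; it assumes Python with NumPy and scikit-learn, and the feature data is synthetic, invented only for the example.

```python
# A minimal sketch of the "second wave" described above: statistical learning
# from labelled training data, followed by probabilistic predictions on fresh
# data. All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training data: 200 examples, 3 numeric features, binary labels.
X_train = rng.normal(size=(200, 3))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0).astype(int)

# "Training" corresponds to fitting a statistical model to the examples,
# rather than hand-writing rules (the "first wave" approach).
model = LogisticRegression().fit(X_train, y_train)

# Fresh, previously unseen data: the model generalizes probabilistically.
X_new = rng.normal(size=(5, 3))
print(model.predict_proba(X_new))  # class probabilities for the new examples
```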

Putting the two together, then, how might AI influence deterrence in theory and practice? What follows is a summary and speculative account of 11 potential future outcomes.1

First, better defence equals better deterrence by denial. By potentially augmenting the speed, accuracy and certainty of some defensive weapons or systems, AI could improve the reliability of defending physical and cyber infrastructure. Properly communicating these newfound defensive abilities, and thereby denying aggressors the fruits of their labour, could deter them from trying altogether.

Second, and conversely, under other conditions AI may expand the potency of certain types of offensive attack, favouring punishment over denial. Pairing AI with autonomously swarming drones, for instance, and developing novel saturation tactics on land, at sea, in the air and in space might — when the technology is sufficiently refined — provide an aggressor with a new coercive tool not easily defeated or denied. Then again, given the interplay between offensive advancements and defensive responses, coercive swarms might be appropriated for defence, too: new-age swarming bots might fend off attacks launched by legacy platforms. Think of a defensive and disposable robotic swarm countering an incoming fighter or frigate. Next-generation swarming capabilities, then, might be used to augment both offensive and defensive postures. The resulting robotic dogfight might recalibrate coercion toward the middle.


Third, and moving beyond kinetics alone, AI might improve a state’s ability to plan and implement both offensive and defensive coercive threats by improving logistics, navigation, communications, coordination, recruitment, training, deployment and so on. As the author has noted elsewhere, the “back-office AI that coordinates the machinery of warfare may make some especially complex coercive threats” — such as those we see today associated with Russia’s ongoing invasion of Ukraine — “more robust, persuasive, and feasible” (Wilner and Babb 2020, 408). A similar argument centres on using AI to turn information into intelligence useful to soldiers in theatre and in near real time. The automation of data analysis and dissemination may shorten the military decision-making process, giving personnel an exploitable coercive advantage over adversaries.

Fourth, and relatedly, by rapidly offering decision makers novel advice that supersedes human innovation, ingenuity and capability, AI may provide situational awareness that dips into predictive analytics. Hyper-war thus introduces us to “hyper-coercion” (ibid., 409). If ubiquitous sensors result in a tsunami of real-time data, AI might provide the analytic potency needed to anticipate an adversary’s next step, down to the very minute. That capacity might supercharge a defender’s pre-emptive and deterrent capabilities.
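As a toy illustration of this kind of predictive analytics, and nothing more, the sketch below fits a simple constant-velocity model to a handful of invented position reports and extrapolates where the tracked object might be a few minutes later. It assumes Python with NumPy; real systems are, of course, vastly more sophisticated, and the numbers here are fabricated for the example.

```python
# A toy illustration of turning a stream of sensor data into a short-term
# forecast. The track data is entirely synthetic.
import numpy as np

# Hypothetical sensor track: (minute, x_km, y_km) observations.
track = np.array([
    [0, 10.0, 40.0],
    [1, 10.8, 39.1],
    [2, 11.7, 38.3],
    [3, 12.5, 37.4],
])

t, x, y = track[:, 0], track[:, 1], track[:, 2]

# Least-squares linear fit per axis (constant-velocity assumption).
vx, x0 = np.polyfit(t, x, 1)
vy, y0 = np.polyfit(t, y, 1)

# Forecast the position five minutes after the last observation.
t_future = t[-1] + 5
print((vx * t_future + x0, vy * t_future + y0))
```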

Fifth, in the realm of information warfare augmented and amplified by AI, synthetic videos and deep fake technology might be used to fabricate seamless disinformation that manipulates everything from public discourse to individual political decisions. Used coercively, these innovations might be leveraged to threaten a target with fabricated delegitimization. Think of Russia or China threatening a Western official standing for re-election with reputational harm by leaking embarrassing synthetic videos unless particular policy positions are taken or promised.

Sixth, innovations in warfare often lead, eventually, to adversarial mimicry. If AI-enhanced decision-making and intelligence analysis provides one party to a conflict with a coercive advantage, we should assume that the innovation itself will eventually make its way to the other actor, too. The end result is a distant-future scenario in which competing AI systems — working at machine speed — provide their owners with ever-fleeting moments of coercive advantage. A geopolitical use-it-or-lose-it mentality might emerge, forcing adversaries to act more quickly than they otherwise might, lest they once again find themselves at a disadvantage. Put another way, the logic and value of striking first, and fast, may prevail, upending long-standing escalatory calculations.

Seventh, owing to its dual-use and commercialized nature, AI might rebalance the coercive relationship between the traditionally strong and the traditionally weak. Smaller states may be able to purchase sophisticated AI tools off the shelf and, notwithstanding other limitations, such as access to appropriate training data, find ways to innovate with them. Tactical imagination, an ability to revisit long-standing assumptions and a willingness to experiment with AI might provide the weak with novel ways to coerce and counter-coerce the strong. The same logic may hold even among violent non-state actors, such as terrorist organizations, which may be able to purchase and retrofit AI to mount novel forms of asymmetric coercion.

Eighth, ubiquitous real-time surveillance could deter criminal and malicious behaviour. China’s experiment in using facial recognition software to deter and punish jaywalking at busy urban intersections is informative (Han 2018). Taken to the next level, if a state were to establish widespread, AI-powered surveillance of urban centres, border crossings and other sensitive locations to generate biometric identification and behavioural analytics, as the European Union itself has tested with a variety of border security projects (Wilner, Babb and Davis 2021), and if it were to publicly announce its use of these tools, it might convince criminals, terrorists, spies and other malicious actors that their plans are unlikely to succeed, deterring some unwanted behaviour.

Ninth, ethical, normative and legal limitations on how AI is developed and/or used in battle may dictate how some countries behave and others respond. While some states, notably several European allies, are openly against providing AI with the right or the means to kill individuals without human intervention, other states, such as China and Russia, appear far less concerned. Interestingly, a lower ethical bar may translate into a coercive advantage: some adversaries will derive a coercive benefit by expressing a willingness to use AI in ways that other countries might reject out of hand. Under certain conditions, delegating decisions to AI may provide countries with a tactical, strategic or coercive advantage over those inclined to keep humans within the decision-making loop. AI self-restraint may become a form of preventive self-coercion.

Tenth, and relatedly, AI presents some alliances, notably the North Atlantic Treaty Organization, with a damning wedge issue to work through. Allies with uneven AI capabilities, governance, rules of engagement, legal statutes and so forth may have difficulty engaging in geopolitics with a unified coercive voice. Collective defence may eventually hinge on AI interoperability: an ally unilaterally opting out of an alliance’s particular position on AI may drag the rest down with it, diminishing political decision making, cohesion and consensus while degrading the alliance’s coercive effectiveness.

Finally, because AI capabilities are less tangible than other, traditional tools of warfare (the number and model of tanks in a country’s possession, for example, can be counted and assessed), their introduction and use in conflict and coercion may be especially prone to misinterpretation and misperception. Without robust communication about AI capabilities, adversaries may have a poor understanding of a country’s AI prowess, misjudging the risks they run when contemplating certain activities. Addressing this coercive limitation will be neither easy nor straightforward. How do you communicate a capability when that capability is an algorithm? What is the signalling equivalent, for AI, of openly testing a new ballistic, nuclear, cyber or anti-satellite weapon? A capability that remains unknown or misunderstood by adversaries will have little coercive impact on their behaviour.

In sum, as a scholar of deterrence, the author cannot think of a more exciting topic to unpack and explore. If AI is indeed the future of geopolitics, conflict and warfare, then AI is also the future of deterrence, compellence and coercion. What this means in practice, and how all of this will influence international relations, has yet to be properly determined.


  1. Findings presented here stem from the author’s AI Deterrence Project, which received funding from Canada’s Department of National Defence, including two awards provided through the Innovation for Defence Excellence and Security program (2018–2019 and 2020–2021), and one award stemming from the Mobilizing Insights in Defence and Security program (2019–2020).

Works Cited

Bohannon, John. 2015. “Fears of an AI Pioneer.” Science 349 (6245): 252. www.science.org/doi/10.1126/science.349.6245.252.

Han, Meghan. 2018. “AI Photographs Chinese Jaywalkers; Shames them on Public Screens.” Medium, April 9. https://medium.com/syncedreview/ai-photographs-chinese-jaywalkers-shames-them-on-public-screens-ad0a301a46a6.

Russell, Stuart and Peter Norvig. 2009. Artificial Intelligence: A Modern Approach. 3rd ed. Essex, UK: Pearson.

Spector, Lee. 2006. “Evolution of Artificial Intelligence.” Artificial Intelligence 170 (18): 1251–53.

Wenger, Andreas and Alex Wilner, eds. 2012. Deterring Terrorism: Theory and Practice. Stanford, CA: Stanford University Press.

Wilner, Alex. 2015. Deterring Rational Fanatics. Philadelphia, PA: University of Pennsylvania Press.

———. 2022. “The Many Shades of Canadian Deterrence.” Policy Perspective, August. Canadian Global Affairs Institute. www.cgai.ca/the_many_shades_of_canadian_deterrence.

Wilner, Alex and Andreas Wenger, eds. 2021. Deterrence by Denial: Theory and Practice. Amherst, NY: Cambria Press.

Wilner, Alex and Casey Babb. 2020. “New Technologies and Deterrence: Artificial Intelligence and Adversarial Behaviour.” In Deterrence in the 21st Century — Insights from Theory and Practice, edited by Frans Osinga and Tim Sweijs, 401–17. NL ARMS Netherlands Annual Review of Military Studies. The Hague, the Netherlands: T. M. C. Asser Press.

Wilner, Alex, Casey Babb and Jessica Davis. 2021. “Four Things to Consider on the Future of AI-enabled Deterrence.” Lawfare (blog), July 25. www.lawfareblog.com/four-things-consider-future-ai-enabled-deterrence.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.