AI and the Diffusion of Global Power

November 16, 2020

This article is a part of Modern Conflict and Artificial Intelligence, an essay series that explores digital threats to democracy and security, and the geopolitical tensions they create.

What role will artificial intelligence (AI) play in shaping the balance of power? AI is a general-purpose technology (GPT) with many applications across civilian and military domains. Accordingly, the impetus for AI innovation and invention comes from a broad set of actors, with countries and companies investing heavily. The history of economic and military power suggests that while some applications of AI might enhance existing powers, the general-purpose character of AI will limit first-mover advantages in most AI application areas, especially as regards the balance of power. Moreover, effective applications of AI in the military domain could require a degree of organizational change that status quo military powers have found challenging in the past, raising the potential risks for a country like the United States.

What Is AI?

AI is not a single widget in the way a semiconductor or even a nuclear weapon is. While the specific definition is contested, AI is a universe of techniques, such as machine learning and neural networks, that involve the use of computers and computing power for tasks once thought to require human intelligence and engagement (West 2018; Burnham 2020).

That question raises another: what is necessary to succeed in AI? The oft-used phrase that data is the new oil (Toonders 2018) is, in the context of AI, probably wrong. Building a successful algorithm requires not only a lot of data, but also the right data, the programming talent to write the algorithm and the computational power, or “compute,” to train it (Hwang 2018). The number of cases where more data is the determining factor in an algorithmic advance may be smaller than it seems at first glance. One area where more data could matter is in predicting consumer behaviour or, more nefariously, surveilling a domestic population. Even then, at some point there are declining returns to gaining additional data. In a world of AI, any so-called autocratic advantage (Harari 2018) due to greater data access is likely quite limited. Nonetheless, that lack of substantive advantage over other types of regimes won’t stop autocracies from exploiting access to their citizens’ data as a new tool of more effective repression.
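Those declining returns can be illustrated in miniature. The following sketch (in Python with scikit-learn; the synthetic classification task, model and sample sizes are illustrative assumptions, not anything from this essay) shows test accuracy flattening out as the training set grows:

```python
# A toy demonstration of diminishing returns to data: a simple classifier
# trained on progressively larger slices of a synthetic data set. The
# accuracy gain from 10,000 to 50,000 examples is far smaller than the
# gain from 100 to 1,000.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for "population data"; nothing here is real.
X, y = make_classification(n_samples=60_000, n_features=20, random_state=0)
X_pool, X_test, y_pool, y_test = train_test_split(
    X, y, test_size=10_000, random_state=0)

for n in [100, 1_000, 10_000, 50_000]:
    model = LogisticRegression(max_iter=1_000).fit(X_pool[:n], y_pool[:n])
    print(f"{n:>6} training examples -> test accuracy "
          f"{model.score(X_test, y_test):.3f}")
```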

The difference between data quantity and quality, as well as the importance of processing power, is critical to thinking about potential military uses of AI. On the one hand, one could argue that China has an advantage in AI because the size of its population gives it access to huge sets of population data (The Economist 2020a). But that data will not help China train the algorithms likely to be most relevant for twenty-first-century military conflicts. Instead, it’s the American military’s decades of experience fighting wars (whether one agrees or disagrees with the United States’ involvement in those conflicts) that should yield training data pertinent to designing algorithms for logistical planning, promotion and assignments, and operations on the front lines. The potential for generative adversarial networks, or GANs, to train algorithms (Gui et al. 2020) also limits the relevance of a raw advantage in data access. GANs pit algorithms against each other in simulated settings, using competition to substitute for a lack of real-world data.
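For readers unfamiliar with the mechanics, here is a minimal sketch of the adversarial dynamic behind GANs, written in PyTorch; the toy one-dimensional data task, network sizes and hyperparameters are illustrative assumptions, not anything from the cited work:

```python
# Two networks compete: a generator (G) learns to produce samples that
# mimic the "real" data, while a discriminator (D) learns to tell real
# samples from generated ones. The competition itself supplies the
# training signal, standing in for scarce real-world data.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2_000):
    real = torch.randn(64, 1) * 0.5 + 2.0       # toy "real" distribution
    fake = G(torch.randn(64, 8))                # generated samples

    # Discriminator step: label real samples 1 and generated samples 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator call fakes real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```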


Despite the lead that a few companies, such as Google and Alibaba, have established in AI so far, it is unlikely that a small number of companies, or countries, will monopolize AI knowledge, particularly as AI techniques mature and become better known. Simply knowing that another company, or country, has designed an algorithm that can do a particular task, even without knowing how it was done, could provide vicarious knowledge that helps competitors rapidly adopt algorithms debuted by others, making first-mover advantages relatively limited. The tight, high-end labour market in AI is likely to loosen in the coming years, particularly as universities around the world produce a new generation of AI programmers and researchers.

Moreover, a key constraint, and something that could slow diffusion, is the massive computing power necessary to train cutting-edge algorithms. However, the relative cost of computing power (Hernandez and Brown 2020) is finally declining (The Economist 2020b), which reduces a potential barrier to mimicry.

Finally, cybersecurity will be essential for protecting algorithms from hackers and espionage. Even if hardware barriers persist and countries or companies lack the data to train algorithms themselves, cyberespionage could still provide a means to steal knowledge about algorithms. Through data poisoning, countries or industrial competitors could try to prevent potential adversaries from developing effective algorithms in the first place (Khurana et al. 2019). Algorithms that have been developed successfully are also vulnerable; through hacking or spoofing (Heaven 2019), adversaries could keep even well-trained algorithms from working as intended once deployed (Yang et al. 2020).
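As one concrete illustration of spoofing, the sketch below implements the well-known fast gradient sign method, which perturbs an input in the direction that most increases a trained model’s loss; the classifier and input here are toy placeholders assumed for illustration, not any system cited above:

```python
# Fast gradient sign method (FGSM): nudge an input by the sign of the
# loss gradient so a trained classifier is more likely to mislabel it,
# even though the change may be imperceptibly small.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(10, 2)                      # stand-in "trained" classifier
x = torch.randn(1, 10, requires_grad=True)    # a legitimate input
y = torch.tensor([0])                         # its true label

loss = F.cross_entropy(model(x), y)
loss.backward()                               # gradient of loss w.r.t. x

epsilon = 0.25                                # perturbation budget
x_adv = x + epsilon * x.grad.sign()           # the spoofed input

print("clean prediction:", model(x).argmax().item())
print("spoofed prediction:", model(x_adv).argmax().item())
```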

AI and GPTs

GPTs are technologies with extensive uses across many sectors (Bresnahan and Trajtenberg 1995). Historical examples include the combustion engine and electricity, while a more modern example is information technology (Jovanovic and Rousseau 2005). Coordinating innovation on GPTs is difficult because of the large number of actors simultaneously pursuing inventions in related, or even the same, sectors.

AI is not a new field. Symbolic approaches to algorithm development, characterized by rule-based systems known as “Good Old-Fashioned Artificial Intelligence” (Haugeland 1985), have existed for decades. But the pace of advances in AI has accelerated in recent years due to new approaches. AI functions as a GPT because of the number of potential sectors for its use and the large set of actors working on algorithms (Pethokoukis 2019). Researchers around the world, at both universities and companies, are advancing the state of the art in the basic understanding of AI and in specific application areas. Key areas of AI include vision algorithms and text algorithms, while methods include machine learning, deep learning and neural networks (Sejnowski 2020), as the sketch below contrasts with the older symbolic style.
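The contrast between the symbolic style and learned approaches is easy to see in miniature. Below is a sketch of a rule-based system in the GOFAI spirit, with behaviour hand-authored as explicit if-then rules rather than learned from data; the task and rules are invented purely for illustration:

```python
# A hand-written, rule-based classifier in the "Good Old-Fashioned AI"
# style: every behaviour is an explicit rule an engineer wrote down,
# with no training data involved.
def triage_message(text: str) -> str:
    text = text.lower()
    if "refund" in text or "charge" in text:
        return "billing"
    if "crash" in text or "error" in text:
        return "technical"
    return "general"

print(triage_message("The app shows an error on launch"))  # -> technical
print(triage_message("I was charged twice this month"))    # -> billing
```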

AI is an especially broad technology, with potential applications encompassing everything from the algorithms that determine Netflix and Amazon recommendations to the computer vision algorithms that attempt to detect missile launches. This makes AI much more like GPTs of the past — such as the steam engine — than like a regular dual-use technology. Dual-use technologies — the Global Positioning System, for instance — can be used for either military or civilian purposes. Algorithms can also be used for either military or civilian purposes, but their breadth and diversity of potential application means the dual-use frame may be less appropriate.

If AI is a GPT, that means, on balance, its applications are likely to become diffused, rather than remain concentrated. Given that innovation in the underlying science comes from private industry and universities, rather than from classified military research (despite the key funding the Defense Advanced Research Projects Agency provided to help launch the AI field), a wide range of actors have access to information on technology breakthroughs. In contrast, stealth technology, an application area of materials science, represents a classic example of a technology with purely military applications. When technologies have only military applications, the number of potentially interested actors is limited, as are the net resources available for investment. Military-only applications also make inventions likely to diffuse more slowly, due to secrecy. Research shows that technologies based on underlying commercial research, on balance, spread faster than technologies based on underlying military research (Horowitz 2010).

Given the general-purpose character of AI, and the trends described above — interest from companies around the world in AI, and declining costs in computing power — it should be relatively difficult to control the spread of capabilities built from algorithms.

AI and Organizational Change

Yet the way AI will impact the balance of power is not simply a question of how technology spreads. After all, as described above, power generally comes not from invention in and of itself, but from how inventions are used, which requires concepts of operation and organizational change to implement. This is true not only when thinking about how technology can impact economic power but also when thinking about its consequences for military power. When adopting new capabilities requires doing what militaries or companies have done before, only better (as with a more efficient computer), status quo actors tend to centralize and consolidate power.

However, when adoption requires disruptive organizational change, it opens the potential for both significant shifts in economic power and underlying changes to the military balance of power. A classic example in military history is the aircraft carrier. When the United Kingdom’s Royal Navy invented the aircraft carrier with HMS Furious in 1918, it viewed the carrier primarily as an aerial spotter for the battleship. Because the Royal Navy was the best in the world at battleship warfare, it thought about aircraft carriers as a way to improve an already well-established competency. The United States Navy and the Japanese Navy, by contrast, in part due to their need to project power across the vast Pacific Ocean, thought about the aircraft carrier more as a mobile airfield. The United States, in particular, reorganized its navy in World War II to take advantage of the striking power of naval aircraft launched from carriers, transforming naval warfare as a result. The Royal Navy, bound to battleships by organizational politics and the weight of history, fell behind.

Given that AI is a GPT with many areas of use, different applications of AI may require different types of organizational change to take advantage of them. For example, a shift by air forces from small numbers of capital-intensive aircraft, such as the F-35 fighter, with highly trained pilots on board, to low-cost drone swarms (uninhabited and operating as a pack, with one pilot overseeing many aircraft) would be extremely disruptive, organizationally, for a military such as the United States’. In contrast, using computer vision algorithms to better identify patterns, detect missile launches or assist humans in identifying targets would not be as disruptive. It is also important to keep in mind that most uses of AI by militaries will not be on the battlefield. Instead, they will be in logistics, personnel and other arenas far from the fight, but still potentially very consequential to overall military effectiveness.

AI and the Balance of Power

The large degree of uncertainty surrounding applications of AI by militaries makes determining the impact of AI on the balance of power difficult. However, some possibilities can be forecast, given the diverse potential military uses of algorithms and some of the general tendencies of the AI field.

Imagine two different types of military uses of AI. The first, and most common, will be general-purpose applications based on related algorithms in the commercial world. Project Maven in the United States (Seligman 2018), which draws on computer vision algorithms developed by companies for non-military purposes, exemplifies one such general-purpose-derived application. Military applications will require more cybersecurity, and some specialization, but the underlying basis of the algorithms will be similar. In these application areas, then, first-mover advantages should be relatively limited. Countries with substantial militaries and information economies should be able to mimic advances relatively quickly, since the underlying technology will be relatively accessible. These uses of AI should not, on their own, have a large relative impact on the balance of power. However, even if the technology is mimicked relatively quickly, the impact on the balance of power could still be asymmetric, as bureaucratic politics mean some militaries are better poised than others to take advantage.

More specialized applications of AI for militaries, although less frequent, could create much larger first-mover advantages and have important consequences for the balance of power. Algorithms designed to help human commanders manage a complex, multi-dimensional battlespace, for example, do not have as many obvious commercial analogues. Militaries are therefore more likely to invest in the science required for breakthroughs, and that research will likely be secret and harder for potential adversaries to copy (although there would still be the potential for mimicry after seeing algorithms that others debut).

The United States, as the leading military in the world, is both a role model and a target. There is a great deal of rhetoric in the United States surrounding investments in AI, but despite the creation of the Joint Artificial Intelligence Center, there is concern that the rhetoric is not matched by the budgetary reality of limited investments. Moreover, as the leading military power, the United States, like the Royal Navy with aircraft carriers, arguably faces the biggest risk. Meanwhile, even though China’s aspirations to leverage AI to leapfrog the United States’ economy, and the American military, are clear, it is much less clear whether Chinese investments will translate into surpassing the United States in AI, let alone in applications relevant to the balance of power. Around the world, from Canada to Israel to Singapore, governments are ramping up their AI investments and considering potential military uses. As the pandemic of coronavirus disease 2019 continues, one potential consequence of workplaces being unsafe for humans may be accelerated investment in robotics and autonomous systems. This possibility applies to both the military and the private sector, although the consequences for the civilian economy will likely become clear first.

Finally, this evaluation of the way AI could shape the balance of power, and the extent to which it might concentrate or diffuse power, focuses on so-called narrow applications of AI. Narrow algorithms are built to do one thing, such as play a game; an example is AlphaGo Zero, software developed in 2017 by DeepMind to play Go and trained with reinforcement learning, meaning that it learned the game without being fed training data from human play. The impact of AI on the balance of power could be different if one company or country achieves a massive breakthrough that enables the creation of artificial general intelligence. A general algorithm that could write other algorithms, operate in many domains and avoid the problem of catastrophic forgetting (forgetting previous learning after acquiring new information in a different area) would give a first mover a substantial advantage. Some, such as Nick Bostrom (2014), director of the Future of Humanity Institute at Oxford University, worry that the first-mover advantages might be so large as to be calamitous. The consequences for the balance of power would then be very different.
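To make the self-play idea concrete, here is a minimal sketch of tabular value learning for tic-tac-toe, in which the only training signal is who won each self-played game; it is a toy stand-in for the concept, not DeepMind’s method, which pairs deep neural networks with tree search:

```python
# Tabular self-play learning: the program plays tic-tac-toe against
# itself and updates a value for each (board, move) pair toward the
# final game outcome (a simple Monte Carlo update). No human games
# are ever consulted.
import random
from collections import defaultdict

Q = defaultdict(float)        # value of playing `move` at `board`
ALPHA, EPSILON = 0.5, 0.1     # learning rate, exploration rate

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] != "." and b[i] == b[j] == b[k]:
            return b[i]
    return "draw" if "." not in b else None

def choose(board):
    moves = [i for i, c in enumerate(board) if c == "."]
    if random.random() < EPSILON:
        return random.choice(moves)                 # explore
    return max(moves, key=lambda m: Q[(board, m)])  # exploit learned values

for game in range(20_000):
    board, player, history = "." * 9, "X", []
    while True:
        move = choose(board)
        history.append((board, move, player))
        board = board[:move] + player + board[move + 1:]
        result = winner(board)
        if result:
            break
        player = "O" if player == "X" else "X"
    # Credit every move with the final outcome for the player who made it.
    for state, move, mover in history:
        r = 0.0 if result == "draw" else (1.0 if result == mover else -1.0)
        Q[(state, move)] += ALPHA * (r - Q[(state, move)])
```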

Conclusion

The tremendous uncertainty among experts surrounding the potential for advances in AI (Grace et al. 2018) makes forecasting the consequences for the balance of power difficult. Nevertheless, investments by militaries around the world, and concern on the part of many researchers and organizations interested in understanding potential changes in the conduct of warfare, mean it is important to understand the likely impact of AI now. If AI is like other GPTs, it will certainly create winners and losers, based on the ability and capacity of countries and companies to use AI effectively, and in particular on their ability to secure algorithms from data poisoning, hacking and spoofing, which will also reduce the risk of accidents.

But GPTs, as technology categories broader than specific dual-use widgets, tend to diffuse relatively quickly, especially in comparison to purely military technologies. In an absolute sense, algorithms, and knowledge of how to design them, are also likely to diffuse relatively quickly (compared to, say, knowledge about how to build an F-35). A big question, though, is the extent to which taking advantage of AI, whether in more general or more specialized applications, will require significant, disruptive organizational change. The higher the degree of change required, history suggests, the greater the potential for a shift in the balance of power (Horowitz 2010), and the greater the risk for a leading military such as the United States’.

A final question is what the international community should do, given these trends. There is growing interest in AI governance, whether regarding specific military applications of AI, such as lethal autonomous weapon systems, or more general concerns, such as facial recognition surveillance. A paradox is that the greater the potential impact of AI for a larger number of actors, the more difficult creating effective and binding regulation becomes. The significance of AI will make efforts to develop trust- and confidence-building measures, as well as norms of behaviour, critically important in the coming years.

Works Cited

Bostrom, Nick. 2014. Superintelligence: Paths, Dangers, Strategies. Oxford, UK: Oxford University Press.

Bresnahan, Timothy F. and M. Trajtenberg. 1995. “General purpose technologies: ‘Engines of growth’?” Journal of Econometrics 65 (1): 83–108.

Burnham, Kristin. 2020. “Artificial Intelligence vs. Machine Learning: What’s the Difference?” Northeastern University (blog), May 6. www.northeastern.edu/graduate/blog/artificial-intelligence-vs-machine-learning-whats-the-difference/.

Grace, Katja, John Salvatier, Allan Dafoe, Baobao Zhang and Owain Evans. 2018. “Viewpoint: When Will AI Exceed Human Performance? Evidence from AI Experts.” Journal of Artificial Intelligence Research 62: 729–54. https://doi.org/10.1613/jair.1.11222.

Gui, Jie, Zhenan Sun, Yonggang Wen, Dacheng Tao and Jieping Ye. 2020. “A Review on Generative Adversarial Networks: Algorithms, Theory, and Applications.” Cornell University arXiv e-print, January 20. https://arxiv.org/abs/2001.06937.

Harari, Yuval Noah. 2018. “Why Technology Favors Tyranny.” The Atlantic, October. www.theatlantic.com/magazine/archive/2018/10/yuval-noah-harari-technology-tyranny/568330/.

Haugeland, John. 1985. Artificial Intelligence: The Very Idea. Cambridge, MA: MIT Press.

Heaven, Douglas. 2019. “Why deep-learning AIs are so easy to fool.” Nature, October 9. www.nature.com/articles/d41586-019-03013-5.

Hernandez, Danny and Tom B. Brown. 2020. “Measuring the Algorithmic Efficiency of Neural Networks.” Cornell University arXiv e-print, May 8. https://arxiv.org/abs/2005.04305.

Horowitz, Michael C. 2010. The Diffusion of Military Power: Causes and Consequences for International Politics. Princeton, NJ: Princeton University Press.

Hwang, Tim. 2018. “Computational Power and the Social Impact of Artificial Intelligence.” SSRN preprint, March 23. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3147971.

Jovanovic, Boyan and Peter L. Rousseau. 2005. “General Purpose Technologies.” In Handbook of Economic Growth, Volume 1B, edited by Philippe Aghion and Steven N. Durlauf, 1181–1224. Amsterdam, The Netherlands: North Holland.

Khurana, N., S. Mittal, A. Piplai and A. Joshi. 2019. “Preventing Poisoning Attacks On AI Based Threat Intelligence Systems.” 2019 IEEE 29th International Workshop on Machine Learning for Signal Processing, Pittsburgh, PA, October 13–16. https://ieeexplore.ieee.org/document/8918803.

Pethokoukis, James. 2019. “How AI is like that other general purpose technology, electricity.” AEIdeas (blog), November 25. Washington, DC: American Enterprise Institute. www.aei.org/economics/how-ai-is-like-that-other-general-purpose-technology-electricity/.

Sejnowski, Terrence J. 2020. “The unreasonable effectiveness of deep learning in artificial intelligence.” Proceedings of the National Academy of Sciences of the United States of America, January 28. www.pnas.org/content/early/2020/01/23/1907373117.

Seligman, Lara. 2018. “Pentagon’s AI Surge on Track, Despite Google Protest.” Foreign Policy, June 29. https://foreignpolicy.com/2018/06/29/google-protest-wont-stop-pentagons-a-i-revolution/.

The Economist. 2020a. “China’s success at AI has relied on good data.” January 2. www.economist.com/technology-quarterly/2020/01/02/chinas-success-at-ai-has-relied-on-good-data.

———. 2020b. “The cost of training machines is becoming a problem.” June 11. www.economist.com/technology-quarterly/2020/06/11/the-cost-of-training-machines-is-becoming-a-problem.

Toonders, Joris. 2018. “Data Is the New Oil of the Digital Economy.” Wired. www.wired.com/insights/2014/07/data-new-oil-digital-economy/.

West, Darrell M. 2018. “What is artificial intelligence?” Brookings Institution, October 4. www.brookings.edu/research/what-is-artificial-intelligence/.

Yang, Chao-Han Huck, Jun Qi, Pin-Yu Chen, Yi Ouyang, I-Te Danny Hung, Chin-Hui Lee and Xiaoli Ma. 2020. “Enhanced Adversarial Strategically-Timed Attacks Against Deep Reinforcement Learning.” Paper presented at the International Conference on Acoustics, Speech and Signal Processing, Barcelona, Spain, May 4–8. https://ieeexplore.ieee.org/document/9053342.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Michael C. Horowitz is Richard Perry Professor and director of Perry World House at the University of Pennsylvania.
