Introduction: How Can Policy Makers Predict the Unpredictable?

November 9, 2020

This article is a part of Modern Conflict and Artificial Intelligence, an essay series that explores digital threats to democracy and security, and the geopolitical tensions they create.

Policy makers around the world are leaning on historical analogies to try to predict how artificial intelligence, or AI — which, ironically, is itself a prediction technology — will develop. They are searching for clues to inform and create appropriate policies to help foster innovation while addressing possible security risks. Much in the way that electrical power completely changed our world more than a century ago — transforming every industry from transportation to health care to manufacturing — AI’s power could effect similar, if not even greater, disruption.

Whether it is the “next electricity” or not, one fact all can agree on is that AI is not a thing in itself. Most authors contributing to this essay series focus on the concept that AI is a general-purpose technology — or GPT — that will enable many applications across a variety of sectors. While AI applications are expected to have a significantly positive impact on our lives, those same applications will also likely be abused or manipulated by bad actors. Setting rules at both the national and the international level — in careful consultation with industry — will be crucial for ensuring that AI offers new capabilities and efficiencies safely.

Situating this discussion, though, requires a look back, in order to determine where we may be going. While AI is not new — Marvin Minsky developed what is widely believed to be the first neural network learning machine in the early 1950s — its scale, scope, speed of adoption and potential use cases today highlight a number of new challenges. There are now many ominous signs pointing to extreme danger should AI be deployed in an unchecked manner, particularly in military applications, as well as worrying trends in the commercial context: potential discrimination, the undermining of privacy, and the upending of traditional employment structures and economic models.

From a technological perspective, the drivers of this change are twofold. First is the advancement in the methodologies employed to create algorithms: machine learning and, within it, deep learning. Machine learning, in essence, is “technology that allows systems to learn directly from examples, data, and experience” (The Royal Society 2017, 16); deep learning, considered a subfield of machine learning, is roughly patterned on the neural networks present in the human brain, processing data through layered networks of artificial neurons. The second driver is the vast quantity of data now available for training, combined with an exponential increase in computing power.
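To make the first driver concrete, the minimal sketch below shows what “learning directly from examples” looks like in code. It is illustrative only: the library (scikit-learn), the toy handwritten-digit dataset and the network size are assumptions chosen for demonstration, not systems discussed in this series.

```python
# A minimal, illustrative sketch of "learning directly from examples": a small
# network of artificial neurons is fit to labelled data and then asked to
# classify examples it has never seen. The library (scikit-learn) and the toy
# digits dataset are assumptions chosen for demonstration.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Load a small benchmark of 8x8 handwritten-digit images (the "examples").
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A deep-learning-style model in miniature: two hidden layers of artificial
# neurons whose weights are adjusted from the training data alone.
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)
model.fit(X_train, y_train)

# Performance is judged on examples withheld from training.
print("accuracy on unseen examples:", model.score(X_test, y_test))
```

The point of the sketch is simply that the model’s behaviour is derived from data rather than hand-written rules, which is also why the quantity and quality of training data matter so much.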

In their current, and likely future, iterations, these technologies present policy makers with a number of dilemmas. When technology can learn for itself, “think” for itself and — when combined with autonomous robotics — ultimately do for itself, the governance challenges become complex. There are ethical questions, and problems around explainability, safety, reliability and accountability, to name a few.

In the series of essays that follows, international experts assess the near-, medium- and long-term policy implications of the increased deployment of AI and autonomous systems in military and security applications, recognizing (of course) that the further the time horizon is extended, the more abstract and speculative the analysis becomes. The series also seeks to address some of the big, looming policy questions:

  • Is existing international law adequate?
  • Will this technology upend or change traditional state power structures?
  • Will AI be a stabilizing or a destabilizing force in international relations?
  • How will states work with the private sector, and vice versa, and what impact will those decisions have?

As this series of essays makes clear, the most significant and complicated governance challenges related to the deployment of AI are in the areas of defence, national security and international relations. In this context, Canada’s defence policy, Strong, Secure, Engaged, laid the problem bare: “State and non-state actors are increasingly pursuing their agendas using hybrid methods in the ‘grey zone’ that exists just below the threshold of armed conflict. Hybrid methods involve the coordinated application of diplomatic, informational, cyber, military and economic instruments to achieve strategic or operational objectives. They often rely on the deliberate spread of misinformation to sow confusion and discord in the international community, create ambiguity and maintain deniability. The use of hybrid methods increases the potential for misperception and miscalculation” (National Defence Canada 2017, 53).

This suite of challenges is set to be magnified as AI capabilities improve and as adversarial actors continue to lurk below that threshold of traditional armed conflict. Inevitably, these challenges will continue to put pressure on the existing international rules and governance structures, many of which were designed for a different era. The moment is right to contemplate innovative policy solutions to match these challenges. We hope that the thinking advanced in this essay series may offer some guidance for policy makers in these extremely challenging times.

Promoting Innovation While Offering Practical and Flexible Regulation

As Daniel Araya and Rodrigo Nieto-Gómez tell us in their essay “Renewing Multilateral Governance in the Age of AI,” the most challenging part of developing AI policy and regulatory regimes is identifying what, specifically, must be regulated. This, they note, is due to the fact that AI technologies are designed not as end products, but rather as “ingredients or components within a wide range of products, services and systems,” ultimately encouraging the proliferation of combinatorial technologies. Therefore, as Araya and Nieto-Gómez suggest, successful regulation is “less about erecting non-proliferation regimes…and more about creating good design norms and principles” that carefully weigh design concerns against technical and ethical ones.

So, where should policy makers begin? In her essay “Public and Private Dimensions of AI Technology and Security,” Maya Medeiros suggests that individual governments will necessarily take the lead on a national level, but must coordinate “effort between different public actors. Different regulators with similar policy objectives should adopt universal language for legislation to encourage regulatory convergence.”

There is good news: We do not have to reinvent a regulatory regime for AI. Many of the sectors that will be significantly impacted by AI already have a strong regulatory history. Consider the automotive sector. In the United States, the National Highway Traffic Safety Administration regulates the safety of automobiles, and the Environmental Protection Agency regulates vehicle emissions, while state and local governments are able to establish their own safety laws and regulations as long as they do not conflict with federal standards.

But, even as specific sectors take different approaches to regulating AI capabilities, new AI norms, laws and regulations need to be general enough that they do not become outdated. At the same time, as Liis Vihul urges in her essay “International Legal Regulation of Autonomous Technologies,” they cannot be so vague that they are useless. For the defence sector, there is already a place to start: Vihul describes a “dense international legal framework” for warfare in which many rules may already exist that regulate the use of autonomous technologies in conflict. However, we will learn only in the application of these tools whether the old framework holds up.

Security Challenges Ahead

As if the challenge of governing AI were not hard enough, Michael C. Horowitz suggests, in “AI and the Diffusion of Global Power,” that AI will necessarily make cybersecurity threats a lot more complicated. He argues that cyberespionage could evolve to focus on algorithm theft, while data poisoning could prevent adversaries from developing effective algorithms. Amandeep Singh Gill, in his essay “A New Arms Race and Global Stability,” specifically identifies the spoofing of image recognition models by adversarial attackers as one such risk, which would only make the disinformation challenges outlined by Samantha Bradshaw that much more complicated for governments to address. These challenges are, collectively, as Bradshaw points out in her essay “Influence Operations and Disinformation on Social Media,” a systems problem. So, rather than simply labelling the content as a problem, we need to find a way toward a solution — starting with acknowledging that social media platforms have both the responsibility and the technical agency to effectively moderate our information ecosystems.
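As a rough illustration of the spoofing mechanism Gill describes (and not a reconstruction of any attack discussed in the essays), the toy sketch below trains a simple classifier on synthetic data and then applies a small, deliberately chosen perturbation that pushes the model toward the wrong answer. The NumPy model, the synthetic data and the perturbation size are all assumptions made for demonstration.

```python
# A toy, self-contained illustration of adversarial "spoofing": a small,
# deliberately chosen perturbation of the input pushes a trained classifier
# toward the wrong answer. The linear model and synthetic data are stand-ins
# for the image-recognition systems discussed in the essays.
import numpy as np

rng = np.random.default_rng(0)
d = 400  # number of input features ("pixels")

# Two classes of synthetic inputs that differ only slightly in their means.
X = np.vstack([rng.normal(-0.1, 1.0, (200, d)), rng.normal(0.1, 1.0, (200, d))])
y = np.array([0] * 200 + [1] * 200)

# Fit a plain logistic-regression classifier by gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * np.mean(p - y)

def score(v):
    """Linear score; a positive value means the model predicts class 1."""
    return v @ w + b

# Take a class-0 input the model classifies correctly, then nudge every feature
# slightly in the direction that raises the class-1 score (an FGSM-style step).
x = next(row for row in X[:200] if score(row) < 0)
x_adv = x + 0.25 * np.sign(w)

print("score before perturbation:", round(float(score(x)), 2))     # negative
print("score after perturbation:", round(float(score(x_adv)), 2))  # pushed toward class 1
```

Real attacks of this kind target far larger image-recognition networks, but the underlying logic is the same: small input changes engineered to exploit what the model has learned.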

Finally, the risk of miscalculation looms as possibly the most significant threat posed by AI, because these tools are so new that we have yet to develop robust policy, legal and process frameworks for protecting against their misuse, intentional or unintentional. In his essay “Artificial Intelligence and Keeping Humans ‘in the Loop,’” Robert Mazzolin assesses whether, when and how humans should be kept in the decision-making loop; he suggests that decisions will depend on how adept AI becomes at the crucial tasks of discriminating between different data sets to properly “self-learn,” and noticing attempts at manipulation. Similarly, Gill argues that we need to advocate for the “gift of time” to safeguard the use of autonomous weapons.

Next Steps for Policy Makers

It would be easy to feel overwhelmed by the task of governing emerging AI technologies. Thankfully, this series’ contributing authors have laid out a blueprint for assessing catastrophic risk, supporting peaceful and commercial use, and preserving North American technological leadership for near-term AI applications. Four key recommendations emerge that policy makers can apply in addressing the expanded use of AI and its highly unpredictable potential.

First, policy makers must prioritize developing a multidisciplinary network of trusted experts on whom to call regularly to identify and discuss new developments in AI technologies, many of which may not be intuitive or even yet imagined. In the same way that Marvin Minsky could not have predicted 70 years ago what AI would be capable of today, the one certainty for today’s policy makers is uncertainty. The large-scale deployment of AI, especially in security contexts, presents a range of horizontal public policy issues — international trade, intellectual property, data governance, domestic innovation strategy and national security, to name a few. Given the often complex interrelationships within and among these areas of concern, having access to multidisciplinary expertise will be a must.

Second, policy makers must work to develop strategies that are flexible enough to accommodate tactical shifts when the technology advances — for example, as computing power and algorithmic quality improve — and that allow for system-level changes. The policy frameworks they develop must be capable of attenuating the potential negative aspects of AI, while also maintaining enough elasticity to account for the inevitable advancement in technology capability and expanded use cases.

Third, policy makers must invest significant time and resources — in close cooperation with the private sector — in identifying the specific AI applications most in need of governance frameworks. This work must principally include continuously assessing each of AI’s many subsectors to identify the relative technological advancements of specific nations and regions. It will also require that policy makers move quickly (but deliberately) in certain areas where the storm is already upon us — one need only consider the interplay between behavioural nudging, big data, personal data, micro-targeting, feedback loops and foreign adversarial influence on everything from elections to societal cohesion.

Fourth, working in tandem with existing international regulatory bodies, policy makers must ensure not only that universal AI governance frameworks are consistent with their respective national regulations, but also that foundational principles — notably, human rights — are respected at every stage from design to implementation. To put it bluntly, these principles, which include rights to privacy, freedom of thought and conscience, among others, are too important to trade away for the sake of design. Safeguarding them will require policy makers to remain vigilant and to better understand the geostrategic elements of technical design, international standard setting and market development, because these are areas where adversarial states are always seeking advantage.

Conclusion

As challenging as this moment may be, it offers a significant opportunity for policy makers; it is critical to remember, as Vihul points out, that “autonomous technologies are in their infancy.” Today and in the near future, we are talking only about “narrow” AI applications such as those derived through statistical machine learning. If, as Horowitz argues, artificial general intelligence — or machines capable of carrying out a broad range of cognitive tasks as well as or better than humans — is achieved, the governance playbook will need to be revised.

While narrow AI systems will likely continue to outperform their human counterparts at specific tasks, there is little evidence to suggest that these applications, as sophisticated as they may be, will evolve rapidly into systems with general intelligence. For example, a recent test of OpenAI’s new language-generating model, GPT-3, reported in MIT Technology Review, revealed the model’s “poor grasp of reality” despite its impressive “175 billion parameters and 450 gigabytes of input data,” with the reviewers concluding it “does not make for trustworthy intelligence” (Marcus and Davis 2020).

Put simply, AI applications have, and will continue to have, significant limitations, and those limitations must be accounted for as systems of governance are designed.

Instead, disruption will more likely come from the combination of technologies — robotics and AI, for example. Identifying these trend lines, and being able to offer flexible but specific-enough policy within adaptable regulatory and legal frameworks (essentially, governance guardrails that can respond when technology does evolve), will be critical for addressing the new dimensions of international security.

As adversarial states continue to engage in the use of hybrid methods in the “grey zone,” policy makers can expect the challenges to become more pronounced as AI technology continues its rapid development. They can also expect that modern conflict and the future battlespace will be profoundly entangled with AI and autonomous systems. As the world moves into a deeply fragmented time, defined by distrust and great power competition, AI holds the potential to be a destabilizing force that can increase the likelihood of miscalculation, if it is deployed without adequate governance mechanisms in place.

Works Cited

Marcus, Gary and Ernest Davis. 2020. “GPT-3, Bloviator: OpenAI’s language generator has no idea what it’s talking about.” MIT Technology Review, August 22. www.technologyreview.com/2020/08/22/1007539/gpt3-openai-language-generator-artificial-intelligence-ai-opinion/.

National Defence Canada. 2017. Strong, Secure, Engaged: Canada’s Defence Policy. Ottawa, ON: National Defence. http://dgpaapp.forces.gc.ca/en/canada-defence-policy/docs/canada-defence-policy-report.pdf.

The Royal Society. 2017. Machine learning: the power and promise of computers that learn by example. Full report (DES4702), April. London, UK: The Royal Society. https://royalsociety.org/machine-learning.
