In the last two decades, cyberspace has emerged as a potent new strategic domain for conflict — from great power competition and regional wars to intrastate conflicts involving violent extremism. The development of both offensive and defensive cyber capabilities has been rapid and is continuing; normative doctrine has largely yet to adapt to this evolving environment; and rules of engagement and measures for control are only beginning to emerge. Global fragmentation, rising geostrategic interests and a lack of consensus on controlling the harmful use of artificial intelligence (AI) and cybersecurity technologies amplify challenges to peace and security.
In this new multipolar order, AI technologies have the potential to transform and intensify cyber conflicts. Adversarial data manipulation is becoming one of the most powerful and deceptive tools in the cyberwarfare arsenal. New AI techniques for conducting offensive cyber operations could manipulate and corrupt the integrity of essential civilian data sets, from medical data and biometrics to social security files and tax records. The risk is multifaceted and extends across knowledge- and information-based sectors.
Malware augmented by deep-learning algorithms could learn to corrupt human genomics data that is part of clinical trial research managed by universities or the private sector. In the world of biosecurity, such AI-based malware could obfuscate important genetic data that helps companies and universities organize the taxonomy of pathogens and develop algorithmic tools for threat screening.
Take two other completely different civilian domains. Large, centralized digital ID and biometrics repositories — such as Aadhaar in India — are increasingly combined with AI programs to monitor financial transactions and could be subject to adversarial data manipulation. Data-targeting attacks could also infect schools’ digital systems through outdated software, unsecured networks and compromised personal devices.
When assessing how international law applies to these cybersecurity scenarios, states, policy makers and international law experts face complex legal ambiguities. International efforts to produce legal interpretation and delineate rules of engagement in cyberspace should intensify their focus on preventing adversarial data manipulation.
AI’s Impact on the Cyberthreat Landscape
Two socio-technical trends expand the attack surface and increase the sophistication of cyberthreats, including the potential for data targeting.
First, cities are not ready for cyber conflict. Across the globe, urban environments are growing more interconnected, crowded and heterogeneous, providing pervasive pressure points and vulnerabilities that can be exploited through cyberattacks. Cities are also on the verge of merging with a hyperconnected internet of billions of unsecured mobile devices, which generates unprecedented levels of risk.
Second, what is substantially different in the current age of technological convergence is the potential for AI to act as an innovation catalyst across myriad aspects of modern societies. AI-led computing provides powerful ways to analyze, at scale and in real time, large data streams about populations living with digital technologies. In the same way, AI programs increasingly run the protocols required to automate growing parts of cities’ critical infrastructures, essential services and industrial platforms. As algorithms learn about technical systems and human beings, their behaviours and vulnerabilities, AI becomes better able to analyze and predict how cities function and protect themselves. Yet, while AI learns to optimize business operations, urban systems and daily life, technological convergence can be misused to engineer sophisticated cyberthreats.
Developments at the convergence of AI and cybersecurity show that AI malware can learn to manipulate the integrity of a digital trove of information (from modifying individuals’ DNA sequences in genomics-analysis systems, to injecting data into satellite imagery used in situational awareness). As part of a “red team test,”1 researchers at Sandia National Laboratories have demonstrated how a malware injection can infiltrate genetic analysis software and alter fragments of DNA sequences within individuals’ genomes (Fellet 2019). Furthermore, AI malware can also compromise the functioning, performance and predictive value of other algorithmic systems (from compromising image processing of cancer biopsies [Mirsky et al. 2019], to corrupting real-time analysis of sensors in transportation [Siegelmann 2019] or industrial control settings [Pauwels 2021]). The capacity of AI-enabled cyber operations to automate such intelligence threats will affect life-and-death scenarios in civilian contexts, outside of traditional military settings.
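To make the mechanics of adversarial data manipulation concrete, the following is a minimal, hypothetical sketch (not drawn from the studies cited above) of the fast-gradient-sign pattern: a small, gradient-guided shift of a record's features lowers a toy screening classifier's score while leaving the record superficially plausible. All weights and values are invented for illustration.

```python
import numpy as np

# Toy stand-in for a trained screening model: logistic regression with
# fixed, made-up weights over a four-feature numeric record.
w = np.array([1.2, -0.7, 0.5, 0.9])
b = -0.2

def threat_score(x: np.ndarray) -> float:
    """Probability that the record is flagged as a threat."""
    return float(1.0 / (1.0 + np.exp(-(x @ w + b))))

# A record the model flags with high confidence (~0.96).
x = np.array([1.0, -1.0, 0.8, 1.1])
print(f"before tampering: {threat_score(x):.3f}")

# Adversarial manipulation: shift each feature a fixed step in the
# direction that lowers the score (the logit's gradient w.r.t. x is w).
# A stealthy attacker would tune epsilon to stay within plausible ranges.
epsilon = 1.0
x_tampered = x - epsilon * np.sign(w)
print(f"after tampering:  {threat_score(x_tampered):.3f}")  # ~0.45
```

The same pattern scales to the deep-learning systems discussed above; the defensive challenge is detecting feature shifts that are individually small but jointly decisive.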
Billions of people rely on the data sets and data-driven analytics that underpin most of our modern governance institutions, manufacturing platforms and industrial control systems. Experts have already reported increased cyberattacks (International Committee of the Red Cross [ICRC] 2019) aimed at targeting the safety and control systems (Joshi 2020) that operate critical infrastructure such as electrical (Stringer and Lee 2021), water and sanitation (Global Water Intelligence 2021) facilities. In the near future, cyberattacks augmented by AI could be designed to manipulate the functioning of the automated data-based protocols that help run power plants and drinking water utilities. For instance, drastically increasing levels of chlorine or sodium hydroxide in residential water supplies could directly harm human health. The advent of automation in urban architecture also provides an increasing potential to expand a cyberattack’s impact through interconnected and cloud-based industrial sectors, such as food and vaccine production, biotech supply chains, transport and logistics. The reverberating effects of cyberattacks have to be analyzed in their multifaceted dimensions, including implications for food security and physical security.
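To illustrate one line of defence against the water-treatment scenario above, the sketch below shows an independent plausibility check that an automated dosing protocol could apply before acting on a new setpoint: values outside a safe range, or values that change too abruptly, are rejected and escalated rather than applied. All names, units and thresholds here are hypothetical and chosen purely for illustration.

```python
# Minimal sketch of a defensive plausibility check for an automated
# chemical-dosing protocol. All names, units and thresholds are
# hypothetical and chosen purely for illustration.

SAFE_RANGE_PPM = (0.1, 2.0)  # plausible dosing window (illustrative)
MAX_STEP_PPM = 0.2           # largest credible change between readings

def validate_setpoint(new_ppm: float, last_ppm: float) -> bool:
    """Reject setpoints outside the safe range or changing too abruptly."""
    low, high = SAFE_RANGE_PPM
    if not low <= new_ppm <= high:
        return False
    if abs(new_ppm - last_ppm) > MAX_STEP_PPM:
        return False
    return True

# A routine adjustment passes; a manipulated command that jumps far
# beyond plausible bounds is held for operator review instead.
assert validate_setpoint(0.6, 0.5)
assert not validate_setpoint(11.0, 0.5)
```

Such interlocks do not stop an intrusion, but they narrow what a compromised data protocol can do autonomously, which is precisely where AI-augmented attacks gain their leverage.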
Legal Considerations
Legal experts and multilateral governance processes have made clear that cyber operations do not take place in a legal vacuum. During times of armed conflict, international humanitarian law (IHL) — including the principles of distinction, proportionality and precautions — provides a comprehensive regulatory framework that can be applied to offensive cyber operations (Gisel, Rodenhäuser and Dörmann 2021). The IHL regime affords protections to civilians and civilian objects and prohibits certain types of hostile cyberactivities, including, inter alia, direct attacks against civilians and civilian objects; indiscriminate attacks that do not distinguish between military objectives and civilians or civilian objects; disproportionate attacks that may cause incidental loss of civilian life, injury to civilians, damage to civilian objects or a combination thereof; and attacks that would destroy, remove or render useless objects indispensable to the survival of the population.
In a 2019 position paper, the ICRC emphasized two legal ambiguities about how IHL applies to cyber operations during armed conflict (ICRC 2020).
First, there is no internationally agreed definition of what constitutes a cyberattack or cyber hostilities within IHL. Important technical questions persist on how to define and qualify, in the context of armed conflict, offensive cyber operations or technical terms such as “attack” when they rely exclusively on cyber means. In particular, legal ambiguity remains on whether cyber operations that would not cause physical damage but result in limited disruption of essential services’ functionality, or in erosion of public trust in critical systems, would qualify as an attack and therefore violate IHL. For instance, an AI-led cyber operation could likely qualify as an attack if it is designed to disable automated protocols and drastically increase levels of sodium hydroxide in urban water supplies, with direct implications for human health. Now consider a scenario in a private-sector biotech facility: while avoiding detection, AI-based malware targets vulnerabilities in automated data protocols to manipulate networks of sensors and impair quality control processes. The resulting harms extend from producing pharmaceutical products that do not match specification standards (leading to waste and shortages) to undermining trust in stocks of vaccines and therapeutics. In this case, forensic analysis to characterize the nature and threshold of the attack and determine technical attribution would be complex and could remain highly contested. Would this scenario, in which AI malware influences the automated data processing required for biotech quality controls, qualify as an attack under IHL? Extrapolation from the Second Oxford Statement by international experts in the context of the coronavirus disease 2019 (COVID-19) pandemic could help support such legal qualification: “International law prohibits cyber operations by States that have significant adverse or harmful consequences for the research, trial, manufacture, and distribution of a COVID-19 vaccine, including by means that damage the content or impair the use of sensitive research data, particularly trial results, or which impose significant costs on targeted facilities in the form of repair, shutdown, or related preventive activities.”2
Second, another salient question of legal interpretation posed by the ICRC is whether data sets can be considered “protected objects,” with the consequence that adversarial cyber operations targeting essential civilian data sets for manipulation or destruction would then violate IHL. While IHL affords specific protection to digital medical records, a growing amount of biological data about civilians is not necessarily managed and stored in hospitals and medical facilities per se but transferred to universities, direct-to-consumer genomics databases and private sector platforms. Yet, even in such a context of decentralized data management, the Second Oxford Statement may serve as a basis to extrapolate and infer that an AI-led cyberattack would be prohibited if the malware is designed to manipulate the integrity of human genomics data in a biotech research trial setting.
Interestingly, the same statement would not apply to important data sets, data analytics and algorithmic processes used in a parallel sector, biosecurity. Consider a case where malicious actors conduct an adversarial attack to poison the integrity of pathogen genomics data sets critical to the screening of biosecurity threats by universities and the private sector. Pathogen genomics data does not necessarily rest in centralized, high-security databases but can be held in public and open-source repositories. Biosecurity screening efforts are complex, relatively fragmented and increasingly integrate predictive algorithms. Assessing cyberthreats and gaps in legal protection in the biosecurity sector would therefore gain from being considered by technical and legal experts in the field.
Beyond the health and biotech sectors, targeting other sensitive civilian data sets, such as biometric ID, social security and tax records, could cause substantial harm. According to the ICRC (2020, 490), “excluding essential civilian data from the protection afforded by IHL to civilian objects would result in an important protection gap.” For instance, cyberattacks augmented by AI could be designed to evade detection and avoid destroying cyber infrastructure, but still delete or alter the integrity of data within biometrics repositories used for financial governance. Data sets in the education sector are another potential target not covered by IHL, where the deletion of large swaths of data would have corrosive implications. Hacks of personal and administrative data can already put students and education professionals at risk of longer-term cyberattacks. Such data is highly commoditized on the dark web, as it can be exploited for identity theft, harassment, surveillance and profiling. Compromising schools’ automated data-processing applications and connected devices (including surveillance cameras [Henebery 2021]) has also resulted in halting virtual learning programs and corrupting campuses’ safety and security protocols (Check Point 2021).
While the question remains unresolved under IHL, a number of states have argued that civilian data sets should be afforded the same level of protection as civilian objects (see analysis by ICRC legal expert Kubo Mačák [2022, 12]). Even during armed conflict, for IHL to apply to adversarial data targeting, attribution would need to prove the responsibility of the state (including connection with proxy forces in certain cases) and the relation to the armed conflict. Determining legal attribution and relation to hostilities can be extremely difficult to establish.
Geostrategic Interests and the Grey Zone
Offensive, indiscriminate and harmful cyber operations are increasingly exploited to project power within multipolar confrontations. The ongoing invasion of Ukraine by the Russian Federation is a prominent example. Microsoft (2022) has documented Russia’s use of cybercapabilities, including malware designed to wipe and corrupt critical data sets, as part of the “hybrid” war waged in Ukraine.
Yet, in the last decade, multipolar tensions have also erupted outside of armed conflicts. States and non-state actors have waged cyber operations that caused significant civilian harm but remained below the threshold of war. Progressively, cyberspace has given rise to proxy operations, advanced persistent threats and information disorders, a spectrum of offensive operations conceptually associated with hybrid warfare or grey-zone conflicts. Such operations offer many advantages to global and regional powers, providing greater opportunities to inflict damage or disrupt daily life with minimal physical presence while denying responsibility.
A 2020 ICRC report confirmed rising concerns over “malicious cyber operations taking place below the threshold of armed conflict, and therefore outside of the scope of the protections that IHL affords to civilians” (Lawson and Mačák 2020, 34). In the 2020 SolarWinds and the 2021 Microsoft Exchange Server attacks, state-sponsored malicious actors did not refrain from conducting indiscriminate cyber operations that resulted in harms to thousands of civilian institutions, including schools, medical facilities and critical infrastructure platforms (Renals 2021).
In April 2021, the United States and the United Kingdom attributed the SolarWinds attack and the exploitation of five publicly known cyber vulnerabilities to the Russian Foreign Intelligence Service (FBI 2021). In July 2021, a coalition of nations, which included the United States, the European Union and, for the first time, all North Atlantic Treaty Organization members, attributed the Microsoft Exchange Server hack “with a high degree of confidence” to malicious cyber actors affiliated with China’s Ministry of State Security (The White House 2021). Cybersecurity experts point to the emergence of a web of profit in which states now go as far as leaking previously unknown “zero-day” vulnerabilities to multiple malicious proxy groups (Goodin 2021).
States and non-state actors take advantage of an opaque ecosystem where AI and cybersecurity techniques proliferate and can be harnessed to craft and repurpose cyberweapons. Growing evidence points to the fact that cybercriminal groups and private firms are in a position to offer, as a paid service, adversarial attacks on the data-analytics programs and industrial control systems that run most infrastructure critical to civilian populations (Caltagirone 2019). In the above-mentioned 2020 ICRC report, the authors highlighted that “some non-State actors have the potential to deliver effects through cyber tools comparable to or exceeding those available to many States” (Lawson and Mačák 2020, 34).
Both state and non-state actors can hire the services of private entities with knowledge of advanced AI and cybersecurity techniques in the pursuit of proxy agendas or interests. As mentioned in a 2021 report by the UN Working Group on the use of mercenaries: “non-State entities that are not integrated with the armed forces play a highly significant and increasingly large role in the provision of cyberservices to and on behalf of States. The evolving threat of the privatization of cybersecurity attacks through a new generation of private companies referred to as so-called ‘cybermercenaries’ is proliferating, and there is an increasingly blurred line separating the private and national spheres” (UN General Assembly 2021, 8, para. 23).
Ensuring Data Integrity through Collaboration
In this context of multipolar tensions, where boundaries are blurred between national and corporate responsibilities, legal and technical experts, civil society, states and private sector actors urgently need to work together to better understand, mitigate and regulate the harmful impact of adversarial data manipulation. Beyond strengthening legal and normative frameworks, policy makers and technologists should use foresight and collaborate on techniques that help secure data integrity. For instance, encryption can protect data at rest and in transit, while secure multi-party computation can help protect data as it is jointly processed by several parties. Data authentication and verification mechanisms, such as cryptographic checksums and digital watermarking, may become critical to ensuring data integrity. Importantly, modelling and simulating the ways in which sensitive civilian data sets are stored, accessed and retrieved for analysis is a useful method for testing such data systems, forecasting potential threats, identifying systemic vulnerabilities and building mitigation plans to address them. Diverse sectors facing information security risks — for instance, in genomics and biomedicine — already rely on these forms of sandboxing or operational foresight.
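As a concrete instance of the checksum approach just mentioned, the minimal sketch below, which uses only the Python standard library (the key and record values are placeholders), computes a keyed digest (HMAC-SHA-256) when a record is stored and verifies it on retrieval, so that any silent alteration of the underlying bytes is detected.

```python
import hmac
import hashlib

# Minimal sketch of keyed-checksum data authentication. The key and
# record below are placeholders for illustration only.
SECRET_KEY = b"replace-with-a-key-from-a-secure-key-store"

def seal(record: bytes) -> bytes:
    """Compute a keyed digest to store alongside the record."""
    return hmac.new(SECRET_KEY, record, hashlib.sha256).digest()

def verify(record: bytes, stored_digest: bytes) -> bool:
    """Recompute the digest and compare in constant time."""
    return hmac.compare_digest(seal(record), stored_digest)

record = b"patient-0042,genome-fragment,ACGTACGTTAGC"
digest = seal(record)

assert verify(record, digest)  # untouched record: check passes
tampered = record.replace(b"ACGTACGTTAGC", b"ACGTACCTTAGC")
assert not verify(tampered, digest)  # silent edit: detected
```

Unlike a plain hash, a keyed digest cannot simply be recomputed by an attacker who alters the data, provided the key is held in a separate, secure key store.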
Ultimately, what is at stake is how we frame corporate responsibility and accountability in cyber conflict. The most efficient and scalable technological capabilities in AI and cybersecurity are the intellectual property of private companies. These companies are in a race to develop cybersecurity programs that can detect the behavioural strategies used by AI-enabled malware to propagate across systems and avoid detection. Already, in the conflict between the Russian Federation and Ukraine, Microsoft, a large technological platform, is providing effective cyber defence to civilian institutions in Ukraine and in the United States (Burt 2022). As AI enhances the speed, stealth and autonomy of cyberattacks, public sectors and civilian protection actors will become increasingly dependent on the cutting-edge expertise of AI and cybersecurity companies. This asymmetry gives private sector actors across the globe unprecedented power and a potential role to play in civilian protection.