Amid the coronavirus disease 2019 (COVID-19) pandemic, foreign state actors have been spreading disinformation on social media about the disease and the virus that causes it (Bright et al. 2020; Molter 2020). This disinformation covers a variety of topics — from the virus's origin to potential cures to its impact on Western societies — and its creation and dissemination online have become widespread.
States — such as Russia and China — have taken to Facebook, Twitter and YouTube to create and amplify conspiratorial content designed to undermine trust in health officials and government administrators, which could ultimately worsen the impact of the virus in Western societies (Barnes and Sanger 2020).
Although COVID-19 has created unprecedented challenges for our globalized society, foreign influence operations that capitalize on moments of global uncertainty are far from new. Over the past few years, public and policy attention has focused largely on foreign influence operations targeting elections and referendums, but health-related conspiracy theories created and amplified as part of state propaganda campaigns also have a long history.
One example is the conspiracy theory that AIDS (acquired immune deficiency syndrome) was the result of a biological weapons experiment conducted by the US government. Historians have documented how Soviet operatives leaked “evidence” into foreign institutions and media outlets questioning the origin of the virus (Boghardt 2009). Because the US government was slow to respond to the AIDS epidemic, which disproportionately affected gay men and people of colour, conspiracy theories about its origin heightened suspicions within these communities that the US government was responsible (Qiu 2017). Decades later, public health research has shown that many people still hold conspiratorial beliefs about the human immunodeficiency virus (HIV) that causes AIDS, which has negatively affected treatment for the disease (Bogart et al. 2010).
Part of the reason the HIV/AIDS conspiracy theory was so effectively inculcated into the belief systems of everyday people is that it identified and exploited pre-existing divisions within society and then used disinformation to sow further discord and distrust. Today, state actors have applied the same playbook used during the Cold War as part of contemporary foreign influence operations: in the lead-up to the 2016 US presidential election, for example, disinformation and conspiracy theories injected into social and mainstream media were used to exacerbate racial tensions in the United States, particularly around the Black Lives Matter movement (DiResta et al. 2018; Howard et al. 2018), but also around religious (Hindman and Barash 2018) and gender divides (Bradshaw 2019).
What has changed between Cold War–era information warfare and contemporary influence operations is the information and media landscape through which disinformation circulates. Innovations in technology have transformed modern-day conflict and the ways in which foreign influence operations take place. Over the past two decades, state and non-state actors have increasingly used the internet to pursue political and military agendas, combining traditional military operations with cyberattacks and online propaganda campaigns (North Atlantic Treaty Organization 2016). These “hybrid methods” often rely on the spread of disinformation to erode the truth and undermine the credibility of international institutions and the liberal world order (National Defence Canada 2017).
Today, unlike in the past, when disinformation campaigns were slow, expensive and data-poor, social media provides a plethora of actors with a quick, cheap and data-rich medium for injecting disinformation into civic conversations. The algorithms that select, curate and control our information environment may prioritize information based on its potential for virality rather than its grounding in veracity. Behind the veil of anonymity, state-sponsored trolls can bully, harass and prey on individuals or communities online, discouraging the expression of some of the most important voices in activism and journalism. Sometimes the people behind these accounts are not even real, but automated scripts designed to amplify propaganda, conspiracy and disinformation online. The very design of social media technologies can enhance the speed, scale and reach of propaganda and disinformation, engendering new international security concerns around foreign influence operations online.
Foreign Influence Operations in a Platform Society
From public health conspiracies to disinformation about politics, social media has increasingly become a medium used by states to meddle in the affairs of others (Bradshaw and Howard 2018; 2019). From China’s disinformation campaigns that painted Hong Kong democracy protestors as violent and unpopular dissidents (Wong, Shepherd and Liu 2019), to Iranian-backed disinformation campaigns targeting political rivals in the Gulf (Elswah, Howard and Narayanan 2019), state actors are turning to social media as a tool of geopolitical influence. And it is not just state actors who turn to social media platforms to spread disinformation and propaganda. Populist political parties, far-right media influencers, dubious strategic communications firms and the charlatans of scientific disinformation have all found a home for conspiracy, hate and fear on social media (Campbell-Smith and Bradshaw 2019; Evangelista and Bruno 2019; Numerato et al. 2019). What is it about the contemporary communication landscape that makes social media such a popular — and arguably powerful — platform for disinformation?
Social media platforms have come to dominate almost every aspect of human interaction, from interpersonal relations to the global economy. But they also perform important civic functions. Increasingly, these platforms are an important source of news and information for citizens around the world (Newman et al. 2020). They are a place for political discussion and debate, and for mobilizing political action (Benkler 2007; Castells 2007; Conover et al. 2013). Politicians also rely on social media for political campaigning, galvanizing support and connecting with their constituents (Hemsley 2019; Howard 2006; Kreiss 2016). But social media platforms are not neutral (Gillespie 2010). Scholars have described how their technical designs and governance policies (such as terms of service, community standards or advertising policies) embed a wide range of public policy concerns, from freedom of speech and censorship to intellectual property rights and fair use to tensions between privacy and surveillance online (DeNardis and Hackl 2015; Gillespie 2019; Hussain and Howard 2013; MacKinnon 2012). Platform design and governance also shape the democratic functions of platforms, including how disinformation and propaganda spread. While it is important to recognize that all technologies have socio-political implications to varying degrees, several characteristics of social media platforms create a particular set of concerns for the spread of disinformation and propaganda.
Aggregation
One of the most salient features of today’s information and communication environment is the massive amount of data aggregated about individuals and their social behaviour. The immense amount of data we leave behind as we interact with technology and content has been called “data exhaust” by some scholars (Deibert 2015). Our exhaust — or the by-product of our interactions with online content — is used by platforms to create detailed pictures of who we are not only as people and consumers, but also as citizens or potential voters in a democracy (Tufekci 2014). The collection, aggregation and use of data allow foreign adversaries to micro-target users with political advertisements during elections. Like all political advertising, these messages could drive support and mobilization for a certain candidate or suppress the political participation of certain segments of the population (Baldwin-Philippi 2017; Chester and Montgomery 2019; Kreiss 2017). We have already seen foreign agents purchase political advertisements to target individuals or communities with messages of mobilization and suppression (Mueller 2019). Although platforms have taken several steps to limit foreign advertising on their platforms, such as currency restrictions or account verification measures, foreign actors have found ways to subvert these measures (Satariano 2018).
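To make the mechanism concrete, the sketch below shows, in simplified form, how behavioural signals might be aggregated into interest segments of the kind an advertiser (foreign or otherwise) could then target. It is illustrative only: the field names, weights and threshold are assumptions, not drawn from any platform's actual systems, which aggregate far richer signals such as clicks, dwell time, location and purchase history.

```python
from collections import defaultdict

# Hypothetical interaction records: (user_id, topic, engagement_weight).
INTERACTIONS = [
    ("user_a", "vaccines", 0.9),
    ("user_a", "local_news", 0.2),
    ("user_b", "vaccines", 0.7),
    ("user_c", "gun_rights", 0.8),
]

def build_segments(interactions, threshold=0.5):
    """Aggregate per-user engagement by topic and bucket users into
    targetable segments once their interest crosses a threshold.
    The threshold is an arbitrary illustrative value."""
    interest = defaultdict(lambda: defaultdict(float))
    for user, topic, weight in interactions:
        interest[user][topic] += weight

    segments = defaultdict(list)
    for user, topics in interest.items():
        for topic, score in topics.items():
            if score >= threshold:
                segments[topic].append(user)
    return segments

if __name__ == "__main__":
    # An ad buyer could now request delivery only to, say, the "vaccines" segment.
    print(build_segments(INTERACTIONS))
```

The point of the sketch is simply that once interactions are aggregated into segments, message delivery can be narrowed to whichever community a buyer wishes to mobilize or suppress.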
Algorithms
Platforms apply algorithms — or automated sets of rules or instructions — to transform data into a desired output. Using mathematical formulas, algorithms rate, rank, order and deliver content based on factors such as an individual user’s data and personal preferences (Bennett 2012), aggregate trends in the interests and behaviour of similar users (Nahon and Hemsley 2013), and reputation systems that evaluate the quality of information (van Dijck, Poell and de Waal 2018). The algorithmic curation of content — whether it be a result of personalization, virality and trends, or reputation scores — affects how news and information is prioritized and delivered to users, including whether algorithms present diverse views or reinforce singular ones (Borgesius et al. 2016; Dubois and Blank 2018; Flaxman, Goel and Rao 2016; Fletcher and Nielsen 2017), nudge users toward extreme or polarizing information (Horta Ribeiro et al. 2019; Tufekci 2018) or emphasize sensational, tabloid or junk content over news and other authoritative sources of information (Bradshaw et al. 2020; Neudert, Howard and Kollanyi 2019).
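As a simplified illustration of how engagement-driven curation can favour virality over veracity, the ranking sketch below scores posts purely on engagement velocity. It is a minimal sketch under assumed weights, not any platform's actual ranking formula; the fields and coefficients are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    likes: int
    shares: int
    hours_since_posted: float
    source_reliability: float  # 0-1; present in the data but ignored by the ranker

def engagement_score(post: Post) -> float:
    """Score a post by engagement velocity alone. Nothing in this formula
    rewards accuracy, so virality wins over veracity by construction."""
    return (post.likes + 3 * post.shares) / (post.hours_since_posted + 1)

def rank_feed(posts):
    """Return posts ordered from highest to lowest engagement score."""
    return sorted(posts, key=engagement_score, reverse=True)

if __name__ == "__main__":
    feed = [
        Post("Peer-reviewed vaccine study", likes=120, shares=15,
             hours_since_posted=6, source_reliability=0.95),
        Post("Miracle cure 'they' don't want you to know", likes=900,
             shares=400, hours_since_posted=2, source_reliability=0.05),
    ]
    for post in rank_feed(feed):
        print(round(engagement_score(post), 1), post.title)
```

Because the score never consults the reliability field, the sensational post outranks the authoritative one; reputation- or reliability-weighted variants of the same function would behave quite differently.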
Anonymity
Platforms afford different levels of anonymity to users. Whether users must use their real names has implications for whether bots, trolls or even foreign state actors can mask their identities in order to harass or threaten political activists and journalists, or to distort authentic conversations about politics (Nyst and Monaco 2018). With anonymity comes a lack of transparency about the source of information and about whether news, comments or debate come from authentic voices or from ones trying to distort the public sphere. Related to anonymity is the question of data disclosure: how personal data disclosed to third parties can be repurposed, for example when unscrupulous firms or foreign state actors use psychographic profiles to suppress voter turnout (Wylie 2020).
Automation
Platforms afford automation — where accounts can automatically post, share or engage with content or users online. Unlike a human user, automated accounts — which are sometimes referred to as “political bots” or “amplifier accounts” — can post much more frequently and consistently than any human could (McKelvey and Dubois 2017). Although there are many ways to classify automated accounts and the activities they perform (Gorwa and Guilbeault 2020), they generally serve two functions when it comes to foreign influence operations. First, by liking, sharing, retweeting or posting content, automated accounts can generate a false sense of popularity, momentum or relevance around a particular person or idea. Networks of bots can be used to distort conversations online by getting disinformation or propaganda to trend (Woolley 2016). Second, automation has been an incredibly powerful tool in the targeting and harassment of journalists and activists, whereby individuals are flooded with threats and hate by accounts that are not even real (Nyst and Monaco 2018).
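The property this paragraph highlights, posting far more frequently and far more regularly than any human plausibly could, is also what simple detection heuristics key on. The sketch below is an illustrative heuristic only; the thresholds are assumptions rather than values used by any platform, and real bot detection combines many more signals.

```python
from statistics import pstdev

def looks_automated(post_timestamps, max_daily_rate=150.0, min_gap_stdev=5.0):
    """Flag an account as likely automated if it posts at an implausibly
    high rate or at implausibly regular intervals.

    post_timestamps: posting times in seconds, ascending.
    Thresholds are illustrative assumptions, not real platform values.
    """
    if len(post_timestamps) < 10:
        return False  # too little activity to judge

    gaps = [b - a for a, b in zip(post_timestamps, post_timestamps[1:])]
    span_days = (post_timestamps[-1] - post_timestamps[0]) / 86400
    daily_rate = len(post_timestamps) / max(span_days, 1e-9)

    too_frequent = daily_rate > max_daily_rate
    too_regular = pstdev(gaps) < min_gap_stdev  # near-constant gaps suggest a scheduler

    return too_frequent or too_regular

if __name__ == "__main__":
    # An account posting exactly every 60 seconds for an hour gets flagged.
    machine_like = [i * 60 for i in range(60)]
    print(looks_automated(machine_like))  # True
```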
The Future of Disinformation and Foreign Influence Operations
In conclusion, the spread of disinformation and propaganda online is a growing concern for the future of international security. The salient features of platforms — aggregation, algorithms, anonymity and automation — are some of the ways contemporary technologies can contribute to the spread of harmful content online, and foreign state actors are increasingly leveraging these features to distort the online public sphere. The use of social media for “hybrid” methods of warfare reflects more broadly how technological innovation changes the nature of conflict. Indeed, technology has long been recognized as a force that enables social and political transformation (Nye 2010). Similarly, the unique features of our contemporary information and communication environment provide new opportunities for state actors to use non-traditional methods of warfare to pursue their goals.
As we see innovations in technology, we will also see innovations in the way in which propaganda and disinformation spread online. The Internet of Things, which is already revolutionizing the way we live, creates even more data about us as individuals and as citizens. What happens in a world where we can measure someone’s physiological response to propaganda through wearable technology? We interact with “chatbots” like Alexa and Siri every day. What happens when the growing sophistication of chatbot technology is applied to political bots on Facebook or Twitter? How will the platforms differentiate between genuine human conversations and automated interactions?
Thus far, combatting disinformation and propaganda has been a constant game of whack-a-mole. Private responses focus on third-party fact-checking or on labelling information that might be untrustworthy, misleading or outright false. Government responses, in the form of laws and regulations, place a greater burden on platforms to remove certain kinds of harmful content, often without defining what constitutes harm. But propaganda and disinformation are also systems problems. Too often, public and private responses focus on content while ignoring the technical agency of platforms to shape, curate and moderate our information ecosystem. Rather than focusing solely on content, we need to look at the deeper systemic issues that make disinformation and propaganda go viral in the first place. That means examining the features of platforms that enhance or exacerbate the spread of harmful content online.