What the UK Riots and Springfield, Ohio, Can Teach Us About Disinformation

Our digital information landscape can spark and stoke conflicts even within diverse and democratic societies.

October 9, 2024
A mural adorns a wall in Springfield, Ohio, September 11, 2024. (Julio-Cesar Chavez/REUTERS)

Recent events, such as the riots in August in the United Kingdom that targeted racialized immigrant communities, and the spread of disinformation targeting the Haitian American community in Springfield, Ohio, offer critical warnings and lessons for Canada. These incidents underscore how our digital information landscape can spark and stoke conflicts even within diverse and democratic societies.

Two key points demand attention. First, our understanding of how mis- and disinformation spreads online — and of its impact on individuals, groups and society — is still incomplete. Second, identity-based mis- and disinformation is not just pervasive but deeply polarizing, threatening social cohesion. Racialized ethnocultural communities often bear the brunt of this spread, experiencing its impacts in distinct and intensified ways that are particularly frightening for minority populations.

Beyond Elections and Foreign Interference

Disinformation campaigns and influence operations from foreign threat actors during democratic elections tend to receive the lion’s share of academic research, policy and media attention. Foreign interference — which includes the use of disinformation — has had a demonstrable impact on recent elections in mature and established liberal democracies, including the United States, France and Germany. As recent findings from the preliminary report of the Public Inquiry into Foreign Interference indicate, Canada is not immune to this threat. In fact, foreign interference in Canada’s democratic and electoral processes, including the use of disinformation, is expected to play an even larger role in future election cycles.

The 2024 presidential election campaign in the United States may be a harbinger of this escalation. Targeted disinformation against the Haitian American community has been amplified by former President Donald Trump and his running mate as part of their campaign, explicitly connecting xenophobic falsehoods about the Haitian community in Springfield, Ohio, to the larger debate around illegal migration and immigration reform. What’s more, these baseless rumours have led to very real security concerns for Haitian Americans and the local community, resulting in school closures and the mayor’s invocation of emergency powers.

These threats in the digital information landscape have rightly captured Canadians’ attention, with one poll estimating 84 percent are concerned about disinformation and its potential impact on democracy. Canadians are also aware of the potential weaponization of emergent technologies and capabilities. Some 80 percent of respondents indicated they are concerned about the abuse of artificial intelligence (AI) and the spread of AI-generated mis- or disinformation in the lead-up to the 2025 federal election.

While growing awareness of and attention to the threat of mis- and disinformation in public discourse and among ordinary citizens are positive, these recent events in the United Kingdom and the United States underscore the need to pay greater attention to mis- and disinformation in everyday contexts in order to build resilience against these threats. The very nature of these hyperconnected digital spaces blurs platform boundaries while blending local, domestic, transnational and global information ecosystems. The rapidity and reach of digital information across platform and geographic boundaries mean that information (and mis- and disinformation) is also transferred between and beyond election cycles.

Misinformation is typically defined as content that is false, inaccurate or misleading, but unintentionally so, while disinformation is designed to mislead and cause harm. This distinction is neat and tidy in theory, but in practice, much more difficult to determine. The virtually instant transfer of content online means that it’s increasingly difficult, if not impossible, to trace its origins, or the originator’s intent. The cross-platform movement of digital content, as well as the capacity to scale mis- and disinformation content using generative AI, will make this problem even more challenging.

It is therefore imperative that we understand how this near-constant flow — the “slow drip of polarizing and illiberal narratives” — erodes social cohesion. In the United Kingdom, for example, anti-minority and Islamophobic sentiments have been routinized and normalized in far-right and even mainstream political discourse. This summer, after three young girls were fatally stabbed, this discourse provided ample “kindling” that was easily “ignited” into days of rioting when mis- and disinformation circulated online regarding the identity, including the ethnic and religious background, of the alleged attacker.

In the digital information space, illiberal narratives from one context have transnational influence, creating ideological solidarity across hard borders. As a result, while everyday forms of mis- and disinformation may not, for instance, directly alter election outcomes, they can prime individuals to certain narratives that are subsequently accepted as part of the political milieu.


Different Communities’ Experiences

The wave of UK violence and the targeting of the Haitian migrant community in the United States highlight another dimension of the mis- and disinformation experience — namely, that it is not a singular or universal experience. For racialized diaspora communities, disinformation and identity interact in complex ways.

Indeed, members of ethnocultural immigrant and migrant communities often belong to multiple and overlapping digital information environments that are transnational, spanning both their home and destination countries. Digital behaviours and preferences, including the choice of platforms for communicating and sharing information, are often motivated by the need to connect with personal networks in the individual’s country of origin.

Members of these communities tend to use encrypted private chat and direct messaging apps at a higher rate than the Canadian average. These digital spaces are critical channels for staying connected, but they also serve as de facto spaces for information sharing on a wide range of topics.

We conducted a research survey that found a strong correlation between reliance on these private spaces for information and the duration that users have been in Canada. However, these closed digital spaces can also become hotbeds for the spread of mis- and disinformation within these communities. Indeed, these spaces can be used by adversarial threat actors to target vulnerable communities as part of coordinated foreign influence and disinformation campaigns.

Community members therefore often face a double-edged sword of mis- and disinformation. On the one hand, they are attempting to push back against foreign-sourced mis- and disinformation campaigns that deliberately target their communities. On the other, they are encountering mis- and disinformation narratives originating from and circulating within their communities.

For members of diaspora communities, mis- and disinformation can manifest as acts of transnational repression that may include threats to physical security, intimidation, surveillance and threats to loved ones in home countries. In our own research, we find that the very threat of transnational repression can alter digital behaviours and practices, especially around information sharing.

Immigrant and migrant communities, especially racialized ones, face an added layer of vulnerability. These communities are also frequent targets of far-right populist political narratives. They are often scapegoated for strains ranging from general economic maladies to overpriced housing markets, labour-market challenges and threats to public safety. These narratives, often underpinned by false, inaccurate and misleading information steeped in racially charged and xenophobic tropes, create an environment characterized by mistrust, discord and potential conflict. For ethnocultural communities, the harms are both individual and collective.

The violence in the United Kingdom and the disinformation targeting the Haitian community in the United States show how the spread of mis- and disinformation leads to very real security concerns and differential impacts for immigrants and migrants. These events underscore the urgent need to bolster resiliency among members of ethnocultural communities.

In a world of shifting geopolitics and an evolving digital landscape — amplified by a surge of generative AI — this is a crucial first step in protecting these vulnerable members of society.

A version of this article first appeared in the Toronto Star.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Authors

CIGI Senior Fellow Bessma Momani has a Ph.D. in political science with a focus on international political economy and is a full professor and associate vice‑president, international at the University of Waterloo.

Shelly Ghai Bajaj is a post-doctoral fellow in the department of political science at the University of Waterloo.