To Overcome AI-Enabled Propaganda, Support Communities Already Fighting It

US election disinformation spread using AI and over social media is only part of the problem.

October 17, 2024
Mail-in ballots are already being sent out for the 2024 presidential election, November 5, 2024. (Fresno Bee/TNS/ABACAPRESS.COM via REUTERS)

Half the world's population goes to the polls in 2024. This series of commentaries, in partnership with the Centre for the Study of Democratic Institutions at the University of British Columbia, explores the intersection of technology with the most pivotal among these elections.

With all of the hype about artificial intelligence (AI) in recent years, it can be difficult to distinguish genuine uses of the technology from wishful marketing. This uncertainty is heightened in the political space, which thrives on its own ecology of spin. While campaigns and candidates might claim they’re making use of state-of-the-art technologies for analysis and outreach to voters, the truth is less clear — and often deliberately obscured.

To understand how generative AI and large language models are actually being deployed during the ongoing elections in the United States, my research team recently interviewed a group of 20 political technology vendors, campaign consultants and other relevant experts.

Among our key findings was that “[generative] AI’s greatest impact is in cultural and linguistic mimicry and the ability to ‘look like a tribe member’ in order to change people’s minds.” Many interviewees said AI is quickly becoming a core tool for back-end voter analytics, allowing campaigns to generate bespoke messaging campaigns aimed at pivotal voting groups. The messaging generated can be delivered in a variety of languages and, via large-scale analysis of voter data, with a great deal of socio-cultural and geographically specific nuance. Those we spoke with highlighted the use of such tools to sway minority communities and diaspora groups during increasingly contentious campaigns.

These communities are referred to as marginalized for very good reason. They are regularly deprived of the freedoms granted majority groups. They not only lack representation in mainstream politics, but also navigate public life at a marked deficit in terms of access to informational resources. In the United States, politicos leverage the lack of content about politics and electoral processes in languages other than English to, for instance, sow disinformation among a range of marginalized demographic groups both online and off.

Such influence campaigns have included “false information about immigration, inflation and abortion rights” as a means of “exploiting the traumas and fears of specific communities.” The recent false claims from the top of the Republican ticket about Haitian Americans in Springfield, Ohio, eating pets are, for instance, used as a means of demeaning one community while simultaneously perpetuating myths of the model minority. Ongoing efforts to claim, falsely, that undocumented immigrants are illegally voting in large numbers have similarly detrimental impacts. And as the political consultants and vendors we interviewed pointed out, AI and automation are increasingly used to scale and hone such efforts.

It might be less than surprising, then, that many of the community leaders my research team has worked with live in constant fear that their friends and family will be the next ones targeted with disinformation and vitriol — all as they try to navigate an information ecosystem rife with untruths about both civic life and the broader American experience.

But the polarization, anger and apathy tied to the deluge of technologically enhanced political advertising and outright disinformation online do not have to be a foregone conclusion. Communities across the country have been honing grassroots efforts to build resilience against predatory influence campaigns for years. They need help to protect the hearts and minds of voters.

Researchers at the Digital Democracy Institute of the Americas and APIAVote have done significant work to reveal the extent to which, respectively, Latino and Asian and Pacific Islander Americans are particular targets of false information about politics. Critically, however, they have also demonstrated the ways in which these communities are fighting back against such manipulation.

My team’s work has highlighted similar experiences — and response efforts — within sub-groups of these and other communities across the United States: Brazilian, Chinese, Cuban, Filipino, Indian, Mexican, Russian and Venezuelan Americans, to name a few. My collaborators and I have worked alongside community leaders to showcase how such groups have built their own fact-checking and information literacy efforts across the country. These community-embedded activities are particularly effective at addressing falsehoods across the various languages and cultures in the States. They are a firm reminder that there is not one experience of the United States — nor of US politics — and that all Americans have the right to high-quality information about elections and voting.

But members of these communities repeatedly tell us they are underfunded and under-supported in their efforts to combat disinformation. And while the problem is exacerbated by AI and social media, marginalized communities in the United States have been contending for centuries with propaganda intended to disenfranchise and harass them. They made it clear to my team that they see efforts to disinform people of colour and those who lack a voice in mainstream US politics as structural, baked into the institutions, laws and technologies meant to serve society and facilitate elections.

Disinformation spread using AI and over social media is, then, only part of the problem. But it’s far from inconsequential, given that many people across the world now get their news from such spaces. Social media companies’ efforts to self-regulate the purposeful spread of falsehoods on their platforms — including propagandists’ clearly illegal campaigns tied to preventing people from voting — have failed. Social media provides its billions of users with neither an open marketplace of ideas nor an indecipherable noisescape. Instead, it is a space where powerful political and commercial entities have repeatedly shown they can shepherd the flow of information in their favour. Perhaps this is why the US Congress continues to fail to regulate the space. Lawmakers may express rare bipartisan concern about the nefarious political and social potential of generative AI, but many House and Senate campaigns rely on social media advertising as a potent means of influencing voters.

The European Union’s efforts to hold social media companies accountable for the content they curate and recommend have been more successful. Canada’s new federal commission, the Public Inquiry into Foreign Interference in Federal Electoral Processes and Democratic Institutions, has revealed how diaspora communities are especially impacted by disinformation about elections and is another step in the right direction. Its findings should be leveraged to create sensible laws aimed at protecting and supporting these groups.

The United States, home to Meta, Alphabet and a slew of the most powerful technology firms in the world, must also overcome its paralysis and act. Congress should respond to the conservative US Supreme Court’s recent gutting of the Voting Rights Act by passing the proposed John R. Lewis Voting Rights Advancement Act. Many of the hundreds of regressive voting laws proposed and enacted since the court’s move further disenfranchise marginalized communities and allow for exactly the kind of disinformation discussed in this article. Critically, any laws passed must be built for the digital age — meaning they must be flexible enough to accommodate rapid changes in the technological and mediated landscape.

Laws passed in support of marginalized communities’ informational rights should also include carve-outs for these groups’ ongoing efforts to combat disinformation internally. Although they should absolutely not be saddled with all the work of fixing this problem, they should be enabled to continue to work within their communities to generate linguistically and culturally appropriate means of educating and responding. Governmental support should include funds and administrative assistance for convening the various groups already conducting such disinformation response efforts in one space. By doing this, stakeholders could discuss what works and what doesn’t — and create a framework for responding that others could pick up and amend for their own purposes. This would allow community-based responses to scale up.

We have been contending with the problems of disinformation online for years now. It’s time to act to support those most harmed. Rather than turning to Silicon Valley and technologists — who either propose limited technological solutions or outright ignore this problem (which they helped to create) — we must invest in societal solutions that prioritize people’s knowledge and experience of the issue, and of their own communities.

If Silicon Valley has a role in solving the ongoing crisis, it should be in providing funds to allow these better-placed groups to succeed.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Samuel Woolley is the Dietrich Endowed Chair in Disinformation Studies at the University of Pittsburgh.