Will there be blood?
This is the question on observers’ minds as the United States faces the delicate period after a presidential election — the first since the January 6, 2021, attack on the US Capitol. This time, security officials are preparing for the risk of violence on the day Congress certifies the election — but there remains a serious risk of violence at some other time, in some other place. And social media companies, having failed to learn the lessons of January 6, may once again facilitate it.
In 2022, I worked as a consultant to the US House Select Committee to Investigate the January 6th Attack on the United States Capitol. Specifically, I was charged with conducting the January 6th Committee’s investigation into the role of social media companies such as Facebook (now Meta), Twitter (now X) and YouTube. My findings, along with those of my colleagues, were compiled in a 122-page memo that the committee never released but that The Washington Post and other media later published.
Scholarship in the years since has largely confirmed that memo’s main assertions: that Donald Trump incited the insurrection on social media; that QAnon and other conspiracy movements that were incubated online became major gateways to the Stop the Steal movement; and that Stop the Steal was organized primarily by a small number of users of tools such as Facebook Groups and Telegram channels.
Differences between the context in 2020 and today suggest that violence leading to subversion of the election results could be more geographically dispersed across the states, not concentrated in Washington. For starters, Donald Trump is not in power. Military leaders will not need to worry about his orders; the Department of Justice can be expected to vigorously oppose his fraudulent election lawsuits. The sitting vice president, who will preside over the certification of the election, is Kamala Harris — Trump’s Democratic opponent — making a “false electors” scheme unlikely this time around.
In this context, the violence most likely to overturn the election will look less like 2020 and more like 2000. In that election, which saw Republican George W. Bush eventually defeat Democrat Al Gore, the outcome was in doubt for weeks after the vote. If this election is close, don’t imagine a new January 6; think of the Brooks Brothers riot, when a violent protest by Republican operatives disrupted the recount in Miami-Dade County, Florida, setting the stage for the Supreme Court to determine the outcome. This time, though, the angry crowd outside the election facility will not be made up of political operatives in ties but could quite possibly include extremist militia movements, which are once again organizing on social media.
Any number of other unpredictable things could also happen — there have, after all, already been multiple assassination attempts against Donald Trump this year — but a disruption of the count is among the most foreseeable.
Here’s a look at how this could unfold: On election night, the race is too close to call. Vice President Kamala Harris holds a razor-thin lead in the “Blue Wall” states of Michigan and Wisconsin, while major media outlets have called Georgia, North Carolina, Arizona and Nevada for Trump. The deciding electoral votes will come from Pennsylvania, which is likely to once again experience delays in processing mail-in ballots. The results are not known for days. And all of this unfolds after Trump-aligned media personalities and politicians have spent years priming the public for claims of fraud, since well before the campaign began.

This brings us to another difference between January 6 four years ago and what we may see this time: the posture of social media companies. For a period, platforms specifically banned false claims about the outcome of the 2020 election, but they later reversed these policies, failing to recognize that claims about 2020 were being used to justify similar claims in 2022 and 2024. Every major platform has since relaxed its rules against false claims of electoral fraud, setting the stage for the similar claims already circulating this time around.

Twitter is now X, a platform owned by the billionaire Elon Musk, who has emerged as a major Trump campaign booster. Musk has repeatedly shown a penchant for irresponsible commentary — calling civil war in the United Kingdom “inevitable” during far-right riots in August — and has already repeated false claims that Democrats are registering huge numbers of non-citizens to vote. It is all too easy to anticipate that Musk might act as a chaos agent, refusing to moderate, or perhaps even amplifying, false claims of fraud or incitements to violence on X while officials in Pennsylvania count ballots.
Will other platforms take steps to prevent their services from being used to organize that violence? The truth is, we don’t know. In 2020, Facebook took a number of “break-glass” measures designed to slow the spread of rumours, prevent extremist organizing and reduce the risk of violence. The full list of these measures was never made public, though a partial list was included in the January 6th Committee’s social media memo. Today we have even less transparency into Facebook’s response than we did in 2020 — and the company has deliberately backed away from questions about politics while cutting jobs from its trust and safety team. Other platforms, though less in the spotlight, have made similar cuts — and tech executives writ large appear keen to avoid the wrath of a potential second Trump administration. The latest example is Amazon founder Jeff Bezos’s decision to kill an editorial endorsement of Harris by The Washington Post, which he owns.
A delayed result in a close election, a right-wing tech mogul pushing claims of fraud and an under-moderated social media environment: these are kindling for election arson. Perhaps they will inspire acts of literal arson — ballot drop boxes were burned in Oregon and Washington in October, and similar destructive acts by lone wolves could be enough to derail the count in Pennsylvania. Or protests outside tabulation centres could turn violent, compromising the security of the facilities and the integrity of the vote count. Either scenario could provide grounds to challenge the results in court, setting the stage for the election to be decided either by the Supreme Court or (less likely) by the Republican-led House of Representatives.
The bloodiest outcome, though, would be for a single spark of violence anywhere to light a blaze that spreads elsewhere. Protests and organizing would probably not be limited to Pennsylvania; the January 6 insurrection was preceded by armed protests at several state capitols, including those in Oregon, Idaho and Michigan. It would take only one act of violence by a lone wolf or an extremist organization to create a political crisis that could sweep the nation. There is more than enough anger and high-calibre ammunition in the United States to imagine this outcome.
It would be nice to think that the danger might pass if the United States makes it through this flashpoint without significant incident. But Republican resistance to Democratic election victories is now a perpetual organizing principle of the party. Elon Musk’s transformation into a far-right activist is unlikely to be reversed as his fortune grows. And the political incentives for other tech executives all seem to point in the wrong direction. If nothing changes, four years from now the United States may find itself in a similar position.
In March 2023, law professor Kate Klonick warned of “The End of the Golden Age of Tech Accountability,” predicting that voluntary efforts to improve social media would wane. She was right. The problem is not that social media platforms do not know what to do, or that the task is too hard. Meta’s experience with the break-glass measures shows that industry can put considerable thought and resources into these challenges. To the extent that their services can be used to facilitate election violence, platforms have a responsibility to prepare for and mitigate that risk. Scholars and researchers have offered innumerable recommendations to help them do so.
The problem is that they do not want to. The rewards for doing nothing are much greater than those for good behaviour. It’s easier and more profitable to lie low, rely on congressional inaction to escape regulation and avoid the enmity of a vengeful strongman. Tech executives’ new posture necessitates a complete rethink of the model response to election disinformation from previous cycles, which engaged the platforms as partners. After the next president is inaugurated, scholars and activists concerned with these issues should ask each other what will take that model’s place.