It’s Time to Think Beyond Disinformation and False News

Researchers, policy makers, journalists and technology experts continue to focus overwhelmingly on the threat of false content on social media. But disinformation is one problem among many.

October 6, 2021
Facebook whistle-blower Frances Haugen testified before the US Congress on October 5, telling lawmakers the social media giant knew its apps were harming the mental health of some young users. (Reuters)

Facebook is having a very bad month. This week, several of its digital media products — including Facebook, Instagram and WhatsApp — went down for an extended period. Its stock has taken a plunge. Meanwhile, The Wall Street Journal released several stories over the last two weeks about ongoing problems and misdeeds at the firm. The much-discussed series — “The Facebook Files” — reveals a slew of issues with the company’s products: rule exemptions for high-profile users engaged in harassment; suppressed internal knowledge about Instagram’s harmfulness for teen girls; a user base becoming increasingly angry and malicious despite the company’s purported intention to foster constructive engagement; uncontrolled waves of mis- and disinformation about the COVID-19 vaccine; and much more.

The continued issues with false information from the anti-vaccine crowd are cause for serious concern — particularly on social media platforms where billions of users around the world get their news. Facebook and other social media companies must work to curb such content. But The Wall Street Journal’s reporting highlights the fact that there are numerous problems at Facebook and across its various apps and websites beyond bogus content. In fact, the larger proportion of stories showcases an interwoven, far-reaching collection of troubles with both platform management and design.

The same is true of the broader set of information-centric issues across social media platforms the world over. Facebook, YouTube, WeChat, TikTok and many others aren’t simply facilitating — or failing to curb — the spread of harmful disinformation. Most social media platforms are crucibles for a wider range of nefarious and coercive communication, including coordinated propaganda, hate speech, state-sponsored trolling, sophisticated phishing campaigns and predatory advertisements. Organized crime groups and known terror organizations use them to communicate and organize.

This isn’t to say that there aren’t benefits to social media. Democratic activists in autocracies can leverage social media in their fights for freedom, and people use these platforms to keep in touch with friends and family. Instead, it’s to say that disinformation is one problem among many. Despite this, many researchers, policy makers, journalists and technology experts continue to focus overwhelmingly on the threat of false content over social media. The Wall Street Journal’s new series adds to a growing trove of research and reporting that points to much broader informational issues online.

Those working to study and address the communicative problems and the attendant socio-political fallout over social media must look beyond the problem of disinformation. It’s time for us to expand our frames for understanding the ongoing crisis with digital media to include broader issues of propaganda, coordinated hate, manipulative content about electoral processes, and incitement of violence. Often, the people and groups using social media to wage such campaigns do not use disinformation or false news at all. In fact, the illegality of the content, and the intentions of its creators, are frequently more clear-cut with these types of informational offensives.

There are problems with retaining a myopic focus on disinformation. First, politicians and others in positions of power now regularly level allegations of spreading disinformation, falsehoods or “fake news” against anyone with whom they disagree. Former president Donald Trump, Narendra Modi, Jair Bolsonaro and Rodrigo Duterte, for example, regularly accuse their opponents — and the news media — of spreading false information about them.

Because of these increasingly regular practices — characterized by NBC News in 2019 as Trump’s “playbook” of “deny, divert, discredit” — the concept of “fake news” has completely lost its meaning. More than this, though, the idea of disinformation — and the public’s perception of it — has become unmoored from what the term actually means: the intentional spread of false information. Instead, many now dismiss allegations of sowing disinformation as a political stunt in one way or another.

Second, a narrow focus on disinformation stymies research on other issues and can lead to unintended consequences. Not only do researchers themselves focus less on other issues, such as hate speech, electoral tampering, or incitement to violence, but their focus on disinformation spurs others to act accordingly. Like a snowball gaining speed and bulk as it rolls downhill, our study of and concern with disinformation has developed its own momentum.

What if, instead of letting our focus on disinformation run away with us, we stopped it in its tracks? This doesn’t mean not discussing or studying disinformation at all — far from it. Instead, it means preventing disinformation’s spread via societal and technical means, as well as not feeding into it via unconsidered reporting or publication.

As Whitney Phillips suggests in her fantastic report The Oxygen of Amplification, describing “better practices for reporting on extremists, antagonists, and manipulators online,” we must not add fuel to these foes’ efforts. Giving their activities the wrong sort of attention, particularly when they involve disinformation and various forms of coordinated inauthentic behaviour online, plays into their hands and results in an acceleration of the problem rather than a curbing of it.

Undoubtedly, the situation we’re in is complex. It will take concerted effort to focus our attention beyond disinformation and toward issues that may be even more readily addressable. We must be more granular when we present evidence and allegations of politicians’ misdeeds online. Are they inciting violence, tampering with the voting process or spreading targeted hate? If so, this must be said, clearly detailed, and dealt with accordingly. We must move beyond looking at extremists’ use of falsehood and toward the societal outcomes they intend — both immediate and long-term. If they are using a platform to plan a potentially violent rally or to sow racism, we should focus on dealing with those ends rather than only addressing the means that get them there.

Disinformation is a very serious problem, but it is one among many. It is also one that can be fed by those intending to prevent it. We must keep this in mind as we work to address it and the range of other informational issues online.

By viewing disinformation as a problem alongside, but not necessarily more pressing than, other issues, we can actually work to address the problem of false information rather than amplify it.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Samuel Woolley is the Dietrich Endowed Chair in Disinformation Studies at the University of Pittsburgh.