As exemplified by the 2016 election, by the outcome of the Brexit campaign and by numerous other instances in liberal democratic governance, we have, increasingly, a problem of information in our society: foreign influence operations, launched by governments, by non-state actors and by terrorist organizations, are undermining the trust that stands as the foundation of our political systems.
So, you have three separate issues: you’ve got the issue of severity, you’ve got the issue of impact and you’ve got the issue of scale. The issue of scale has to do with the number of people that are on, say, social media platforms. If you were to try to disseminate fake news on Facebook, you would have a theoretical reach of, say, 1.2 billion people.
On the issue of severity, you have this problem of the potential negative consequences that might follow from the internalization of influence operations. You end up with much higher odds that somebody will take that information and act on it in harmful ways.
And then you have the issue of impact. Because of the nature of the information environment that we live in, people turn to social media platforms with increasing frequency to get their news, and that news forms the bedrock of their subsequent political decisions. So a polluted online information environment ends up having severe political, economic and social consequences.
So, the current information environment is a real and persistent threat, and probably a growing threat, to liberal democratic governance. And if that is the case, what we need to do is focus on ways in which we can fix the deficits in our information environment. There are at least three ways in which we could potentially do this.
One is by dealing with issues of exposure: if the problem of the current information environment is that we’re exposed to fake news, then reducing exposure will improve the environment. So you have to deal with the sources of that exposure — troll farms in foreign countries that are producing this content, and malicious foreign governments who are trying to capitalize on open information environments in order to cause political disruption. And, in some ways, the platforms themselves have to deal with the algorithmic sorting they have going on, which pushes people down pathways toward increasing exposure to fake news content.
The second approach is to deal with people’s receptivity to fake news. You could imagine the social media platforms building in a function that would allow you to link from a particular story that you’re contemplating sharing or liking or consuming to fact-checking sites, so that people could essentially get an easy shunt to fact-checking information.
And then, finally, you have the potential solution of counter-narratives — every story has a counter-narrative to it. There has been some empirical investigation showing that exposure to misleading content, when immediately paired with factual content, tends to reduce the believability of the misleading content.
So, ultimately, it’s going to take individuals, technology companies, media platforms and governments, and they’re all going to have to work together in a way that preserves the free flow of information, preserves trust and, hopefully, builds back some of the lost trust in this information ecosystem — and that also preserves what we’re trying to protect. We don’t want to start censoring information, because to do so ultimately destroys the very thing we’re trying to protect at the end of the day.