In a simpler time, when the United States sought to fight — rather than fuel — online extremism, President Barack Obama held a three-day summit with community activists, religious leaders and law enforcement officials working against terrorist recruitment.
A central focus of Obama’s 2015 summit, which came in the wake of terrorist attacks in Paris, Sydney, Copenhagen and Ottawa, was the role that technology companies play in amplifying the voices and enabling the coordination of networked terrorist organizations such as al-Qaeda and the Islamic State. A core paradox of the internet is that the same tools and capabilities that enable free speech, political mobilization and positive collective action also enable black markets, hate speech, criminal activity and, indeed, terrorist recruitment.
Unfortunately, the very actions that governments or technology companies might take to limit the internet's nefarious uses also risk limiting its positive ones. Obama’s message from the summit seemed straightforward: “We need to find new ways to amplify the voices of peace and tolerance and inclusion, and we especially need to do it online.” But the reality is that fighting harmful speech online is complicated.
One of the participants at the 2015 summit made this point clearly. “We’re being outdone both in terms of content, quality and quantity, and in terms of amplification strategies,” said Sasha Havlicek, the founding CEO of the Institute for Strategic Dialogue (a London-based think tank working on online extremism, digital information operations and election interference) and our guest on this week’s Big Tech podcast.
At the time, Havlicek co-chaired the European Union’s internet radicalization working group. She has also spearheaded partnerships with the United Nations, the European Commission, the Global Counterterrorism Forum, Google and Microsoft, and she developed the Online Civil Courage Initiative with Facebook’s Sheryl Sandberg to amplify efforts to counter hate speech online. The problem, as Havlicek described it at the 2015 summit, was that “governments are ill placed to lead in the battle of ideas.” Instead, she argued, private companies needed to step up and fight online terrorism.
And to a certain extent, they did. While platform companies enjoy broad immunity in the United States from intermediary liability (and, consequently, also in many other jurisdictions, since most platforms are based in America), they self-censor when they know there is clear political and public consensus that they should do so, as with, for example, terrorism-related content or child pornography. As a result, a combination of human content moderation and artificial intelligence systems is (in theory) able to take down much of this content before people see it.
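To make that division of labour concrete, here is a minimal, hypothetical sketch (in Python) of how such a hybrid pipeline might route content: an automated classifier scores each post, clear-cut violations are removed before publication, and borderline cases are queued for human review. The classifier, thresholds and function names are illustrative assumptions, not a description of any platform's actual system.

```python
# Hypothetical sketch of a hybrid (AI + human) moderation pipeline.
# Thresholds and the classifier itself are illustrative assumptions only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    action: str   # "remove", "human_review" or "publish"
    score: float  # model's estimated probability the post violates policy

def moderate(post: str,
             classifier: Callable[[str], float],
             remove_threshold: float = 0.95,
             review_threshold: float = 0.60) -> Decision:
    """Route a post based on the classifier's confidence."""
    score = classifier(post)
    if score >= remove_threshold:
        return Decision("remove", score)        # blocked before anyone sees it
    if score >= review_threshold:
        return Decision("human_review", score)  # queued for a human moderator
    return Decision("publish", score)

if __name__ == "__main__":
    # Stand-in classifier; a real system would use a trained model.
    fake_classifier = lambda text: 0.97 if "banned phrase" in text else 0.10
    print(moderate("an ordinary post", fake_classifier))
    print(moderate("a post containing a banned phrase", fake_classifier))
```

The point of the sketch is simply that automated systems handle the volume while humans handle the ambiguity; as the next paragraph notes, neither layer is foolproof.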
There are, of course, limitations to this approach. The fact that video footage of the 2019 Christchurch mosque shootings was, and still is, seen by millions on YouTube and Facebook shows that the methods are not foolproof. But, in general, the most radical content is highly censored on major internet platforms.
Content moderation, however, is far more difficult when it comes to the broader categories of hate speech and harmful speech. Here, Havlicek is an expert.
Hate speech is different from terrorist speech or child pornography. It captures a far wider spectrum of content: everything from overt Nazi propaganda to harmful speech targeting minority groups to abusive gendered comments. Perhaps more critically, hate speech is generally defined by national law. And rightly so. What a society deems to be hateful is bound up in its history, culture and national context. Germany, for example, has very strict laws against Nazi speech. Canada’s 1982 Charter of Rights and Freedoms gives citizens rights to be protected from speech, not just rights to speech itself. And the First Amendment of the US Constitution values the right to freedom of speech above all else.
But this appropriate national context bumps up against two core challenges of platform governance. First, global companies that moderate billions of pieces of content daily need global solutions. And second, to force platforms to make structural changes, the market must exert greater pressure than most individual countries can provide. There is a disconnect between the localized character of the problem and the structure of the platform ecosystem itself. In other words, we need nuance and scale simultaneously.
This is the challenge that not only governments but the platforms themselves face. The former have a growing mandate to protect their citizens online, and the latter have a business model and a design that severely limit their capacity to act. The answer, from my perspective, is global coordination on penalties for non-compliance (to universalize the costs) combined with national definitions of harmful speech (to allow for local jurisdiction). Such a coordinated response would incentivize action from the platforms while placing the responsibility for defining the limits of speech on democratic governments.
The last thing we want is private companies making decisions on what speech is acceptable and what is not. But the only viable alternative is democratic governments taking on this function themselves. This role will not be easy, but it is likely, over time, to become a core mandate of governing in the digital age.