In the first of three commentaries in this series, we highlighted how the harmful nature of social media platforms is baked into their automated-advertising-fuelled business models that feed on maximizing user engagement. Our argument is straightforward: if you don’t change these companies’ incentives and reform their business models, attempts to deal with the real problems of illegal and harmful online activities will be ineffectual in the long run.
In this piece, we consider an even more fundamental — and politically explosive — question: the state’s role in internet and platform regulation, which is the subject of our 2021 co-edited book.
Engaging in sound internet and platform regulation is an exercise in not taking things for granted. In the digital world, few ideas have been as enduring — and as misleading — as the idea of the free and open internet, a concept often portrayed as an unadulterated good.
Take the spring 2021 debate in Canada over Bill C-10, focused on Canadian content and culture, and the Liberal government’s (clumsy and poorly explained) attempts to introduce provisions that would have allowed, among other things, the government’s arm’s-length telecommunications regulator to require certain social media platforms to prioritize Canadian content posted to their platforms.
To say that this did not go over well among certain corners of the Canadian digital policy community would be an understatement, although criticism was disproportionately expressed in English Canada, with little pushback in Quebec. What could have been a nuanced argument over whether these specific regulations were appropriate, or how to amend the provision, quickly devolved into free-speech total war.
The digital rights group Open Media called it a “dangerous censorship bill.” The Internet Society, whose corporate members include Google and Facebook, published an open letter signed by some of Canada’s leading internet scholars calling on Prime Minister Justin Trudeau “to stop harming the Internet, the freedoms and aspirations of every individual in this country, and our knowledge economy through overreaching regulatory policies that will have significant, yet unintended consequences for the free and open Internet in Canada.”
Breaking out the big guns, they compared Canada’s proposed regulations to “actions taken by authoritarian governments” to “curtail freedom, and seek to control parts of the Internet’s infrastructure.”
Free-Speech Grenade Impedes Debate
These concerns, some expressed with more nuance than others, were echoed by the federal Conservative Party, which decried the proposal as a form of "censorship" and has promised to repeal the bill if elected (since the bill never made it through the Senate before the election call, the party could kill it simply by not reintroducing it).
In a liberal-democratic society, there’s little room for debate once you’ve pulled the “censorship” pin on the free-speech grenade. It’s a conversation-ender in the same way calling someone a Communist was during the Cold War. The prominent US digital rights group Electronic Frontier Foundation (EFF), for example, asserted that the government’s proposed online harms plan was “one of the most dangerous proposals” in the world, leaving little room for meaningful debate. Similarly, comparisons linking Canada’s proposals to those of authoritarian governments effectively serve to delegitimize democratic efforts to regulate online speech and platforms.
“Free and Open Internet” Is a Choice that Helps Some and Hurts Others
The ideology of a free and open internet, which framed the C-10 debate as a contest between free speech and censorship, is built into the default way of thinking about the internet. This binary view of speech presents an ideological barrier, even for those who decry the current poisonous social media environment, to addressing real and existing problems such as revenge porn, hate speech, and an online environment that is generally inhospitable and speech-inhibiting for many marginalized groups.
As Dutch internet scholar Niels ten Oever, among others, has noted, there is nothing “natural” or technically “neutral” about the “free and open internet.” Human decisions, values and preferences — political, legal, technical and economic — are deliberately embedded in the design and operation of the internet. This is because it was created by people, companies and governments (notably the United States) with particular ideas about how it should be run, such as preferences for private-sector investment and US-style free speech above all other social concerns.
Specifically, writes ten Oever in our edited volume, internet governance as it is currently practised prioritizes interconnection and interoperability as the dominant norms. Internet policies that extend the reach of the internet and the number of individuals and organizations able to connect over it are taken to be an unquestioned good. The internet’s creators incorporated the values of interconnectivity and interoperability into the structure of the internet itself, creating a global network based upon free flows of data.
This idea of a free and open internet in which free speech is the guiding principle is evident in social media companies’ self-portrayal. They sell themselves as mere technical, passive “intermediaries” facilitating interactions among users, thereby downplaying the extent to which they themselves create a heavily structured and content-curated environment, in pursuit of profit. Even though they are only companies that use the network of the internet — they’re not the internet itself — and even though their algorithms, by definition, order and present content in a way that’s just as “unnatural” as anything a government could propose, they’ve co-opted this ideology to the extent that regulation of their activities is seen as an attack on the internet itself.
However, as we’ve seen countless times over the past couple of decades, unfettered connections and interoperability themselves create problems.
Unfettered free speech itself can destabilize society. It may also interfere with the pursuit of other legitimate social goals, such as the promotion of a distinct culture against the backdrop of a commercially dominant society. As we’ve learned to our detriment, power imbalances mean that bad speech can often drive out good speech. We have repeatedly seen this when journalists, artists and activists publicly leave social media because of racism, misogyny and discrimination from users, including rape and death threats. Free speech can provide a platform for groups to incite genocide. Nor is this an internet-only issue: the 1994 Rwanda genocide was facilitated, in large part, by radio stations.
The point here is that interconnection and interoperability, like free speech, are not always good things. They are but one set of values that a healthy society must consider when setting policy.
In the United States, law professor Mary Anne Franks refers to this type of free-speech absolutism (specifically regarding platform governance) as a “cult” of free speech, a form of constitutional fundamentalism that problematically privileges free speech above other constitutional rights.
Governments do not have the privilege of taking a one-dimensional approach to internet governance: they must deal with problems beyond just maximizing interconnection and interoperability.
And here we come to the crux of the problem: If you start with the belief that the best internet is one that maximizes interconnection and interoperability, then, as ten Oever observes, any action a government takes will look like an attack on the internet. It will look like censorship. And you will have no option but to oppose it.
From a free-and-open-internet perspective, state policies that prioritize objectives other than interconnectivity are generally regarded as illegitimate forms of censorship. This explains the outraged cries of censorship that accompanied Germany's Network Enforcement Act, commonly known as NetzDG, which requires social media companies to remove hate speech and other illegal content.
Values of interconnectivity and interoperability underpin ideas of a global internet and thereby delegitimize claims of national control over it. State efforts to assert domestic control are often derided as "splintering" the network along political boundaries. But Germany was not seeking to create its own fire-walled internet in the style of China or Russia (Russia's efforts, as scholar Ilona Stadnik contends in our edited volume, have mostly failed).
While one may critique the substance of NetzDG or its enforcement, Germany's efforts were far more limited than any Chinese or Russian measures: they targeted a specific problem and were implemented through a legitimate, democratically accountable process.
Governments Have a Legitimate Right to Identify Problems and to Regulate
The way out of this conundrum is to recognize that governments and societies have legitimate reasons to want to regulate both the internet as a network and (even more so) the monopolies that have grown to dominate sectors such as search, social media and online advertising. That a majority of Canadians say social media companies should be required to monitor posted content and block or remove hate speech suggests broad recognition that stricter regulation is needed.
These are not outlier perspectives. Countries such as Austria, Germany, New Zealand and the United Kingdom are ahead of Canada in debating and passing rules to address online harms. It is absurd to compare these countries to authoritarian regimes such as China's or Russia's.
More pointedly, those who embrace American-style free-and-open-internet discourse would do well to consider how US free-speech ideology has left that country unable to differentiate between healthy debate and Fox News’s anti-democratic propaganda, threatening the survival of the United States as a democratic republic and imperilling the actual freedom of millions of Americans.
We are not arguing that regulating social media is simple or uncontroversial. And as we indicated in our first article, we have strong reservations about the Liberal government’s approach to social media regulation.
Rather, we are calling for a different starting point. As tech reporter and former EFF activist April Glaser puts it, social media and internet governance conversations need to emphasize “real-world harm to communities” over “constitutional abstractions” like free speech. We also need to recognize that reasonable people will differ as to the nature and degree of government regulation of online content; simply supporting specific internet governance provisions doesn’t automatically make one a jackbooted, censorship-loving authoritarian. The 2021 report by the Women’s Legal Education & Action Fund titled Deplatforming Misogyny: Report on Platform Liability for Technology-Facilitated Gender-Based Violence is a good example of what policy from a different starting point might look like.
Rather than assume that social media regulation is a de facto attack on free speech, we could benefit by thinking through platform and internet regulation as if it were any other policy issue, and consider: What is the harm in question? How serious is it? What is the best way to address it, considering the pros and cons?
These questions sound normal, even banal. But they’re only possible if we jettison the idea that government regulation is a threat in and of itself, and that interconnection and interoperability, rather than the multifaceted needs of a healthy society, should be the measure of sound internet and platform policy.
Above all, we need to move beyond the question of whether the government should regulate to the question of how it should regulate. And while ensuring that government has the capacity to address these issues, we also need to move past a narrow focus on speech to consider these companies in their wider economic and social context. That is the subject of the third and final article in this series.
Next: Beyond Speech: Regulating the Digital Economy: Part Three