When it comes to social media companies, the September 20, 2021, federal election could be called the “unfinished business election.” When Prime Minister Justin Trudeau asked the Governor General to dissolve Parliament on August 15, he effectively pulled the plug on an ambitious and controversial series of bills and proposals to regulate the actions of social media companies and their users. These ranged from Bill C-10, focused on Canadian content and culture, to Bill C-36, which proposed to redefine hate speech. The government capped its social media agenda with the release on July 29 of a set of documents outlining a very detailed framework to regulate “hate speech and other kinds of harmful content online.”
A cynic might see in the timing of Bill C-36 and the online harms documents evidence of blatant electioneering. More optimistically, the election-driven reset of the Liberals’ social media regulation plans offers us a unique opportunity to assess the Liberal government’s basic platform governance philosophy and to consider how the next government should regulate social media platforms.
Drawing on our research into the difficulties of global platform governance, and into how governments problematically outsource regulation to internet companies through often-secretive agreements, we offer in this first of three articles a road map to guide the next bills regulating social media companies and the digital economy more broadly.
Superficial versus Actual Change: It’s the Business Model
It’s safe to say that if the normal use of your services leads to your company being implicated in an actual genocide, as Facebook has been, then you’re probably doing something very wrong. Certainly, the fact that social media companies foster intolerance, promote hate speech and allow for the spread of socially destabilizing misinformation and the easy propagation of revenge porn should be unacceptable for anyone interested in living in a vibrant and healthy society.
However, a serious and effective response to these unacceptable behaviours requires an understanding of what’s driving them. Regulation of social media — indeed, regulation of any online content and services — must begin with an understanding of the business models and, more broadly, the assumptions underlying the digital economy. If we want to effectively regulate social media companies, we must focus on where they make their money, and how they keep their costs so low (and thus their profits so high).
Social media companies make most of their money from advertising. As a result, their business models are designed to maximize user engagement and to promote viral content. Companies design algorithms that recommend content, people and events to follow in order to increase engagement, even when that means steering users toward extremist groups or medical misinformation.
Given their commercial reliance on user engagement as a growth metric, companies are often reluctant to enact measures that deal with bad actors, such as ridding their systems of bots and fake accounts, or to set rules that may constrain growth or limit viral content. Facebook, for example, has discarded algorithmic changes designed to reduce misinformation, including changes to how content is recommended to users, when those changes decreased engagement. While social media companies regularly condemn hate speech and disavow violent extremism, until governments dismantle business models that monetize hate, violence and misinformation, regulatory efforts will remain largely ineffective.
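To make the incentive concrete, here is a deliberately simplified sketch of how a ranking function that optimizes only for predicted engagement will surface divisive, viral content ahead of sober material. The post fields and weights are invented for illustration and do not represent any platform's actual system.

```python
# Illustrative only: a toy ranking function showing how optimizing purely for
# engagement tends to surface viral, outrage-driven content. Field names and
# weights are hypothetical, not drawn from any platform's actual system.
from dataclasses import dataclass


@dataclass
class Post:
    title: str
    predicted_clicks: float  # model's estimated click probability
    predicted_shares: float  # model's estimated reshare probability


def engagement_score(post: Post) -> float:
    # Nothing in this objective asks whether content is accurate or harmful;
    # it only rewards attention.
    return 2.0 * post.predicted_clicks + 3.0 * post.predicted_shares


feed = [
    Post("Calm, accurate public-health explainer", predicted_clicks=0.05, predicted_shares=0.01),
    Post("Outrage-bait conspiracy clip", predicted_clicks=0.30, predicted_shares=0.25),
]

# The conspiracy clip ranks first because it generates more engagement:
# the incentive problem described above.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):.2f}  {post.title}")
```

The point is not the specific numbers but the objective: as long as the score rewards only attention, the most provocative content wins.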
When considered from this angle, we can start to see the shortcomings of the common regulatory response, by governments, activists and academics, which demands better systems to identify and remove problematic content. The Liberal government’s discussion guide on its framework follows this path, proposing that companies “establish robust flagging” systems, including through automated means, to identify and address problematic online content.
This approach effectively asks social media companies to do what they're already doing, just more of it. Most social media companies already have flagging programs in place. These, to be sure, are too often troubled by significant problems with inaccurate or abusive flagging, or are unduly reliant on users to police problematic content.
However, the core problem is that calling for more and better systems simply feeds into these companies' existing reliance on automation and their preference for self-regulation, which are central features of their business model. Social media companies minimize costs by automating many activities and by outsourcing the human component of their content-moderation systems to low-paid, often foreign, workers, a pattern similar to the labour offshoring that countless industries have engaged in for decades. These features reflect the companies' business priorities: do the minimum necessary to quell outrage while protecting the business model. The Liberal approach, in other words, may be effective at the margins, but it will fail to curb the incentives that give rise to the problematic behaviour in the first place. (We should also note that in this article we are discussing only the most egregious and illegal harms facilitated by social media platforms. The Liberals' online harms bill would leave, for example, Twitter's business-model-driven low-level toxicity, which privileges outrage and snark over thoughtful dialogue, completely unaddressed.)
Thinking about Substantive Reform
If companies have an incentive to behave badly, we must remove the incentive. The Liberals’ online harms proposal seems to be counting on the threat of sizeable fines to keep the platforms in check, although these global companies have shrugged off massive fines in the United States and Europe. But nothing in their proposal addresses the incentive to prioritize quantity (and speed) over quality.
What could tackling incentives look like? Governments must consider reforming advertising as a revenue source, with the goal of minimizing social media companies’ reliance on user engagement as a growth metric.
Regulation could also include restrictions on different types of advertising, particularly limits on targeted behavioural advertising, which requires platforms to siphon users' detailed personal information and share it with advertisers so that ads can be personalized. Contextual advertising is less privacy-invasive because it places ads based on the content of the webpage the user is currently viewing, using keywords such as "electric vehicles."
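As a minimal illustration of the difference, the following sketch contrasts the two approaches; the keywords, ads and user history are invented for the example.

```python
# Illustrative only: the simplest possible contrast between contextual and
# behavioural targeting. The ad inventory, page text and user profile are invented.

page_text = "Comparing electric vehicles and home charging costs in Canada"

ads_by_keyword = {
    "electric vehicles": "EV charger installation service",
    "mortgage": "Refinance your home today",
}

# Contextual targeting: match ads to the page the reader is viewing right now,
# without needing to know anything about who that reader is.
contextual_ads = [ad for kw, ad in ads_by_keyword.items() if kw in page_text.lower()]

# Behavioural targeting: match ads to a profile assembled by tracking the user
# across sites over time, which is what makes it privacy-invasive.
user_profile = {"recent_searches": ["mortgage rates", "credit score check"]}
behavioural_ads = [
    ad for kw, ad in ads_by_keyword.items()
    if any(kw in search for search in user_profile["recent_searches"])
]

print("Contextual:", contextual_ads)    # depends only on the current page
print("Behavioural:", behavioural_ads)  # depends on a cross-site history of the user
```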
Important, too, are reforms that limit users’ monetization of misinformation and other harmful content. Spreading harmful content can be a profitable activity for both platforms and users. During the pandemic, people have profited from pushing fake cures and medical conspiracy theories.
Advertising is not the only way a company can make money: a subscription-based model, such as Netflix's, provides a stable revenue source that could let companies worry less about the quantity of engagement and focus more on its quality. Nationalizing social media services as public goods is another option: the CBC was created to respond to similar problems posed by new communications technologies in the past.
Governments could also get more involved in regulating social media companies’ algorithms so that they respond to democratically determined priorities, rather than reflecting the profit-driven motives of foreign companies. As Sara Bannerman at McMaster University argues, governments could require algorithmic systems to prevent recommendations of illegal content such as hate speech.
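What such a requirement might look like in practice is an open design question; the sketch below is one hypothetical reading, in which content already confirmed to be illegal is simply excluded from a recommender's candidate pool before ranking. The names and data are invented and represent neither an actual platform nor any specific legislative text.

```python
# Illustrative only: a hypothetical sketch of the kind of requirement described
# above, in which content confirmed to be illegal (for example, hate speech)
# is excluded from a recommender's candidate pool before ranking.
from dataclasses import dataclass


@dataclass
class Item:
    title: str
    engagement: float        # the platform's usual ranking signal
    confirmed_illegal: bool  # e.g., content already found to be hate speech


def recommend(candidates: list[Item], top_n: int = 2) -> list[Item]:
    # Regulatory rule: illegal content is never eligible for recommendation,
    # no matter how much engagement it would generate.
    eligible = [item for item in candidates if not item.confirmed_illegal]
    return sorted(eligible, key=lambda item: item.engagement, reverse=True)[:top_n]


candidates = [
    Item("Viral hate-speech clip", engagement=0.9, confirmed_illegal=True),
    Item("Local news explainer", engagement=0.4, confirmed_illegal=False),
    Item("Cooking video", engagement=0.6, confirmed_illegal=False),
]

for item in recommend(candidates):
    print(item.title)  # the hate-speech clip never appears, despite its high engagement
```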
As it happens, requiring platforms to adapt their algorithms to promote the “discoverability” of online Canadian content was a key provision of Bill C-10, for example, through “watch next” recommendations or banners alongside content. Unfortunately, this form of algorithmic regulation, poorly explained by the government, was pilloried by the federal Conservatives and many internet activists as an attack on freedom of speech.
This response was troubling in its own way. It suggests a lack of understanding that effective government regulation of social media companies will almost certainly require addressing their rule-setting algorithms in one way or another. The hostility with which this proposal was assailed suggests that we need to rethink our cherished myths about the nature of the internet and its relationship with free speech. We will address this big issue in our next piece.
Finally, online regulation requires broader reforms in legislation, the courts and the criminal justice system, which could include more resources, training or the creation of new institutions. Simply flagging content as illegal does not equal a police response.
For example, following the killing of a Muslim family in London, Ontario, in a vehicle attack by a man facing terrorism charges, and other similar violent hate attacks, there were calls for new provisions to counter hate crimes, as well as stricter penalties. Despite the Liberal government's introduction of Bill C-36 to redefine hate speech, the government said it had no plans to overhaul the way courts deal with hate crime, even though that was a demand emerging from the online summits the federal government hosted on anti-Semitism and Islamophobia. We will return to the question of where social media governance fits in the bigger picture in a later article.
None of this is to say that these structural reforms would be easy. However, structural problems require structural solutions. Otherwise, you’re left just treating the symptoms.
Next, in part two of this series: How "Free Speech" Kills Internet Regulation Debates.