No one said combatting online hate would be easy — or cheap.
When Mark Zuckerberg announced in a Facebook post that his 2018 New Year’s resolution included a pledge to address online hate, few likely anticipated how the laudable goal would affect the bottom line. But on July 26, Facebook experienced a historic stock plunge as investors realized that there’s a significant cost to maintaining a safe and responsive online public square. Investors aren’t the only ones losing faith; legislators are beginning to ask whether Facebook’s self-regulation is enough to counter the proliferation of hateful content and fake news.
It’s safe to say that financial and regulatory challenges for Facebook — and other social networks — aren’t over yet.
Twitter has recently taken major steps to allay growing concerns that users and bots are infecting the online atmosphere with odious views. To be sure, taking action against content without also infringing on freedom of speech is no small task. The balance is contentious and differs from nation to nation.
In Canada, former Heritage Minister Mélanie Joly suggested that the government has expectations of the online behemoths but did not specify how those expectations could be met. Canadian parliamentarians have focused primarily on revelations of significant privacy breaches rather than on questions around the moderation of hateful content. The topic came to a tragic head in April following the Toronto van attack, in which a young man drove onto downtown sidewalks, killing 10 people and injuring many others. Politicians weighed in following Facebook’s disclosure that the accused driver had posted about allegiance to the incel (“involuntarily celibate”) movement, which gained notoriety in a now-banned Reddit forum.
“Criminal behaviour is not, obviously, tolerated in reality and can’t be tolerated online. People’s behaviour must be the same online and offline,” Joly said. “We call upon the web giants to make sure that they counter any form of hate speech and any form of discrimination.”
In one of those rare moments of non-partisanship, federal Conservative Member of Parliament Marilyn Gladu agreed that the government should be engaged on the issue of online hate. “When it comes to the digital space, I do think that the government has a role in taking more action than we’ve seen in the past to eliminate all kinds of hate speech,” she said.
The need for improved governance of online fora is a global issue, but to date, no country has managed to tame the Wild West of online hate.
As in Canada, political parties of all stripes in Germany found common ground on the issue. There, however, lawmakers went further and passed legislation to hold social media platforms accountable for online abuse.
The Network Enforcement Act — sometimes called the “Facebook Law” — came into effect on January 1, 2018. It imposes hefty fines of up to €50 million (CDN$75 million) on social media companies that fail to remove “obviously illegal” content, including defamation, incitement to violence, and hate speech, within 24 hours of the content being reported. Where it isn’t fully clear whether content is illegal, companies have seven days to consult and decide.
“Experience has shown that, without political pressure, the large platform operators will not fulfill their obligations, and this law is therefore imperative,” Federal Minister of Justice and Consumer Protection Heiko Maas said when Germany passed the law. “Freedom of expression ends where criminal law begins.”
But the Network Enforcement Act has come under significant criticism from those who believe that public companies should not be arbiters of speech. “Governments and the public have valid concerns about the proliferation of illegal or abusive content online, but the new German law is fundamentally flawed,” said Wenzel Michalski, Germany director at Human Rights Watch. “It is vague, overbroad, and turns private companies into overzealous censors to avoid steep fines, leaving users with no judicial oversight or right to appeal.”
In an essay in The New Republic, Harvard lecturer and author Yascha Mounk points out that for legislation to be effective, it would “need to accomplish at least three tasks: [it] must slow the spread of intolerant attitudes, weaken extremist political forces, and be safe from abuse by authoritarian populists.” Mounk argues that Germany’s sanctions don’t meet these conditions and have instead opened the door for autocrats to impose a type of censorship under the guise of countering the same forces. With Russia, the Philippines and Singapore all considering similar legislation, Mounk suggests these moves “show just how easily restrictions on free speech can be flouted at home while being twisted to serve the ideological purposes of straight-up autocrats abroad.”
Bernhard Rohleder, the chief executive officer (CEO) of Bitkom, Germany’s federal association for information technology, telecommunication and new media, argued recently that responsibility for online hate should fall on the shoulders of the courts, public prosecutors and police departments. Rohleder wrote that these institutions require more robust tools and resources, including more staff, to enforce existing laws online.
Back in Canada, new Heritage Minister Pablo Rodriguez must grapple with this issue. He’ll want to review the recent report by the House of Commons Standing Committee on Canadian Heritage (CHPC), Taking Action Against Systemic Racism and Religious Discrimination Including Islamophobia, to understand just how much online hate preoccupies minority communities as well as human rights advocates and agencies.
During the weeks of testimony before the CHPC last year, many suggested there isn’t enough being done to address the proliferation of online hate. Renu Mandhane, the chief commissioner at the Ontario Human Rights Commission, told the CHPC that we must “challenge the very real hatred that we are seeing, not only in the media, but just generally, online and otherwise.”
Another witness, Michel Juneau-Katsuya, president and CEO of the Northgate Group, talked about how there is a concerted effort online to “fuel insecurity” about minority communities. He told the committee that “this phenomenon must be broadly denounced by companies, professional monitoring and accreditation associations, as well as members of the public and anyone on the Internet. We must also hold to account those who have more direct access to the public. It is generalized inaction that could have serious consequences right across the country.”
Experts from Canada’s public safety department talked about counter-speech as a way to combat online hate. Ritu Banerjee, senior director of the Canada Centre for Community Engagement and Prevention of Violence at the Department of Public Safety and Emergency Preparedness, suggested “the use of humour and the building of empathy between speakers and recipients of hate speech to shift the conversation away from expressions of hate and de-escalate the risk of violence.”
Pointing to the Canadian initiative called Project Someone, Banerjee suggested that such initiatives would be effective in raising awareness because they provide “tools and training for educators who want to promote discussions on and awareness of hate speech through art and multimedia platforms.”
However, such projects aren’t a cure-all. “Project Someone...perpetuates the all too common idea that links Islam and terrorism, for the purpose of combatting radicalization,” said Anver Emon, Professor of Law and Canada Research Chair in Religion, Pluralism, and the Rule of Law at the University of Toronto, during his testimony to the committee. In short, he argues that attempts at addressing online hate and violent extremism must be broad enough to address all forms of extremism, including white supremacist and right-wing extremism.
Among its final recommendations to government, the CHPC report stated that law enforcement and security agencies needed more funding to “investigate hate speech on the Internet and to enforce existing laws.” Here, Canada and Germany are at opposite ends of the spectrum: the CHPC made no mention of any new legislation. Instead, the report recommended a greater focus on education to promote diversity, inclusion and media literacy.
In its response to the CHPC report, the federal government concluded that law enforcement agencies required the capacity to investigate and prosecute hate crimes both offline and online — although it did not specifically commit to either increasing funding or introducing legislation in this area. The government also stopped short of committing to a campaign to increase media literacy, applauding instead the Canadian Election Integrity Initiative — a partnership between a national non-profit organization called MediaSmarts and Facebook Canada.
It shouldn’t really need to be spelled out that trusting Facebook — the company at the core of so many debates on the perpetuation of fake news, online hate and privacy breaches — to resolve a national media literacy crisis is a massive gamble.
The CHPC did not explore whether the federal government should reinstate section 13 of the Canadian Human Rights Act, which was repealed by the previous government in 2013. While somewhat controversial, the section was deemed an effective recourse for those concerned about online hate. Various communities have lobbied the federal government to explore returning some form of the legislation. Earlier this year, the justice minister’s office reportedly signalled that the department was re-evaluating the clause; there have been no updates since.
An unfortunate reality is almost certainly feeding governments’ wariness about legislating increased policing of online content: in the rush to comply, platforms are likely to censor content that shouldn’t be removed. In fact, it’s already happening.
A website called onlinecensorship.org — which is run by the Electronic Frontier Foundation and Visualizing Impact — chronicles these examples. The site invites users to report on the takedown of their content, in order to “shine a light” on the decisions made by social media companies and their impacts on freedom of expression. For instance, it highlights how, in 2017, 77 social and racial justice organizations wrote to Facebook about “censorship of Facebook users of colour and takedowns of images discussing racism.”
The European Commission has already spent considerable time and resources trying to address this issue. Its efforts may provide some direction for those in Canada looking to strike a balance.
Beginning with a voluntary Code of Conduct, agreed to in 2016 by all the major social media platforms, the European Union has arguably led the charge on regulating online hate. One of its most recent recommendations on illegal content on online platforms, adopted this past March, includes a call for “clearer ‘notice and action’ procedures” that would make it easier to fast-track reports from “trusted flaggers” and would ensure that content providers have time to contest takedown decisions.
The recommendation also encourages “more efficient tools and proactive technologies” and “stronger safeguards to ensure fundamental rights.” It urges industry to share best practices with smaller companies and to cooperate more closely with law enforcement agencies whenever there are indications that illegal content poses a threat to life or safety. On that point, the European Union recommends that national governments implement specific laws.
Julia Angwin, a journalist at ProPublica, argues that legislation is crucial. During a panel discussion in Ottawa hosted by the Public Policy Forum this past spring, Angwin suggested that unless social media platforms are held fully accountable for third-party content, they will do the “least amount possible to win in PR.” Despite the money spent by companies such as Facebook, Angwin said, only real liability will have the desired effect of minimizing online hate. “We’ve given them a free pass that no other media has. We’ve given them an enormous grant of immunity that they’ve taken for a ride.”
Considering the documented rise in online hate and in groups that promote anti-immigrant, racist, Islamophobic and anti-Semitic views, the Canadian government has little time to waste when it comes to improving the regulation of technology giants.
Canada’s existing efforts — anti-racism initiatives, the newly created Canada Centre for Community Engagement and Prevention of Violence and government-funded research projects, to name a few — likely can’t tackle online hate alone. It’s a multi-faceted issue tied to a global network run by powerful technology giants. Political will is necessary to forge a Canadian response to the online conundrums perplexing government officials around the world.