It has been five years since the violent events of August 2017 in Charlottesville, Virginia, where white supremacists rallied by torchlight in their effort to “Unite the Right” as part of the “Summer of Hate” organized by the neo-Nazi Daily Stormer website. Legal proceedings against rally organizers revealed they had used Facebook, Instagram and Discord to plan a violent riot, in which anti-racism activist Heather Heyer was killed. It’s a pertinent moment to reflect on the regulatory responsibilities of technology companies.
Despite regulatory efforts worldwide, major questions remain about when and how these platform companies — and governments — should act in relation to illegal and harmful conduct online. When it comes to online hate, we’re arguably in even greater need of meaningful action now than in 2017, given the increasingly visible presence of white supremacist groups in Canada, the United States and elsewhere, and right-wing political parties courting extremists’ votes.
Following that march in Charlottesville, anti-racism activists called on companies to withdraw their services from the white supremacists. At the time, I wrote about Cloudflare’s termination of its security services to the neo-Nazi site The Daily Stormer, and reflected upon enforcement challenges. In the years since, those challenges have multiplied, as governments have been largely absent in this area (Germany’s online anti-hate legislation being a notable exception), leaving civil society groups to pressure tech companies to remove hate speech and ban offenders.
The Kiwi Farms case in August 2022 demonstrates the persistent, significant challenges to addressing online hate. Founded in 2013, Kiwi Farms is a far-right web forum whose members have a history of doxing, harassing and advocating violence against LGBTQ+ people, particularly trans people, including making death threats. LGBTQ+ activists called on Cloudflare, the company that provided security services protecting Kiwi Farms from cyberattacks, to terminate those services. Cloudflare initially resisted, stating on August 31 that “voluntarily terminating access to services that protect against cyberattack is not the correct approach.”
After concerted pressure from activists, who orchestrated popular social media campaigns against it, Cloudflare caved. On September 3, it blocked Kiwi Farms from its services. Since then, Kiwi Farms has struggled to maintain services, much like other entities that lost services over their promotion of violent hate speech, such as the neo-Nazi Daily Stormer.
What can we learn from the Kiwi Farms case? First, we should not consider the deplatforming of such entities a straightforward success, even as we applaud the outcome. Marginalizing Kiwi Farms to the murky edges of the web is undoubtedly a social good, but the forum operated unscathed for years, causing misery. It was only after considerable pressure that Cloudflare acted, a pattern similar to other hate speech cases I have written about, in which online intermediaries tend to respond to public pressure and the threat of reputational damage.
Second, we should question why these cases require such significant labour from civil society activists, coupled with well-timed, successful social media campaigns, to compel action from often-reluctant companies. Activists’ multi-year efforts to take down Kiwi Farms garnered mainstream attention when Canadian Twitch streamer and trans advocate Clara Sorrenti, known online as “Keffals,” mounted a social media campaign against Cloudflare over its service provision to Kiwi Farms; that campaign forced Cloudflare into action. Sorrenti began the campaign after Kiwi Farms members harassed her, forcing her to leave Canada temporarily for her safety. Relying upon civil society groups, often from vulnerable, marginalized communities, to orchestrate global campaigns to convince big tech companies to take action against known bad actors is unsustainable and unfair to communities struggling to assert their human rights.
Third, and most controversially from the perspective of free-speech advocates, the state needs to play a direct role in regulating hate speech. We should ask why governments, including their law enforcement agencies, can be notably — and inexcusably — absent in addressing cases of online hate. In some cases, it’s a matter of enforcing the laws already on the books. Kiwi Farms was a straightforward case of bad actors employing hate speech, inciting violence, stalking and harassing victims — all criminal offences with real-world manifestations.
In other cases, hate speech legislation may need to be amended. In Canada, for example, the federal government is in the process of strengthening its hate speech legislation. However, any legislative change must be accompanied by sufficient enforcement resources, and the institutional will, to address these crimes.
Finally, we need to strongly counter tech companies’ claims of neutrality. Companies often contend that they are neutral providers of technical services and cannot effectively judge the legality of content on their networks. Cloudflare, for example, compared itself to a telephone company and a fire department in arguing that it should not distinguish among users, even though days later it terminated its services to Kiwi Farms.
Private companies supply the critical infrastructure that makes up the content layer of the internet. Their offerings range from the familiar services of payment, web hosting and marketplaces to the less commonly understood services of the domain name system and security providers that guard against cyberattacks. Many of these services have become so essential that civil society groups regularly call upon their providers to act against problems such as hate groups.
The reality is that tech companies routinely do differentiate among users to serve their commercial interests. Social media companies, for example, amplify certain content and downrank other types. They and other intermediaries also block legal sexual content, in part because companies fear violating broadly worded, controversial US laws targeting sex trafficking. In short, companies already intervene to regulate speech and discipline their users, but they do so on their terms and according to their commercial preferences.
Five years from now we cannot be in the same position: taking a reactive, ad hoc approach to violent online hate speech and sitting back while activists push big tech firms into action. We need to resist companies’ self-serving claims of neutrality. We must also recognize companies’ commercial moderation practices for what they are — regulation, but designed to serve corporate interests rather than the public good.