In a resonant phrase, Kate Klonick conferred upon the major American social media companies the title of “the New Governors.” She deployed that moniker to illustrate what “private platforms are actually doing to moderate user-generated content and why they are doing so” (Klonick 2018). Given, in particular, their scale, their massive user bases and their potential for causing, facilitating, mitigating or preventing all manner of harm, governing aptly characterizes the work platforms do to determine the speech that is tolerated and the speech — and speakers — that are disciplined, even banished. The term has a decidedly internal character: “governing” describes how the companies develop and enforce their own private terms of service, articulated through variously named guidelines, rules or standards, with respect to content that users upload to their privately owned platforms. It suggests a relationship between the companies and their users, the governors and the governed. To be sure, their rules may take into account the off-platform consequences of user-generated content. But whether drawn from company business models, a refraction of the US Constitution’s First Amendment or a human rights framework, these rules essentially look inward in an effort to create an internally cabined kind of platform law.1 And as long as they are inward-looking, contained within a supposed bubble of company governance, one might think that their influence beyond the platform may extend to a conception of developing industry standards — a cross-corporate converging of rules and values — but not much beyond that.
And yet something has changed in recent years, accelerated over the course of 2021. With pressure from the public, governments, politicians, and human rights defenders and monitors, the biggest of the platforms have begun to explain their decisions around high-profile cases. This is not to say that they are adequately explaining the full range of content decisions they make; so far, the trend is haphazard, a tiptoe into the world of “governing” transparency. The research program Ranking Digital Rights, in evaluating 26 of the leading actors in technology and communications, noted in its 2020 index that “the global internet is facing a systemic crisis of transparency and accountability.”2 Nonetheless, in the context of some platform decisions and rulemaking, the companies are more regularly deploying a public-facing rhetoric drawn from public law. Twitter and YouTube explained their decisions to remove or suspend the account of then-US President Donald Trump in more detail than they typically do for such actions. Facebook created a tribunal-like mechanism, the Facebook Oversight Board, to reach decisions on some of the hardest content questions that appear before the company, articulating the centrality of human rights standards in its earliest decisions.
Meanwhile, democratic governments are considering regulation that may increase such public reasoning. The European Union has preliminarily adopted a Digital Services Act (DSA)3 (European Commission 2022) that could incentivize company articulation of the reasons for account actions, which (given trends) would likely involve human rights and other public law standards. Consider, for instance, article 15 of the draft DSA, which requires internet companies that take account or content actions to notify account holders “of the decision and provide a clear and specific statement of reasons for that decision.” A German federal court4 went so far as to find that the failure of Facebook to provide adequate notice about its rules and actions interfered with the rights of users in Germany. Similar kinds of transparency mandates are being considered in the United States and the United Kingdom. The mechanisms of international human rights law — at the United Nations and in regional fora — are moving in similar normative directions in a push for clarity and disclosure.
What are we to make of company articulation of public law standards? At one level, this move to public law may be no more than private company implementation of such instruments as the UN Guiding Principles on Business and Human Rights. Put another way, these new steps may help develop a broader understanding of how companies in the technology space conceive of their human rights responsibilities and implement them in the context of specific cases. When fed back into the UN business and human rights mechanisms, company decision making may provide substantial learning about the intersection of state human rights obligations and company responsibilities. Likewise, it may be that the language of content-moderation decision making and rulemaking necessarily borrows from public law terminology and frameworks, given the kinds of issues at stake. Approaching content-moderation explanations from this perspective may reflect no more than an effort to create or borrow from a shared language so that company decisions, standards and processes are understandable to a public that increasingly sees the companies as powerful state-like forces within society.
But there is another element here that deserves consideration: the possibility of private rulemaking’s spillover into public norms. That is, will company articulation of principles of public law — whether framed as constitutional law, human rights law or regional law governing fundamental rights — have an impact not only on the platforms’ rulemaking and enforcement but also on public law itself? Might private rulemaking lead to legal development in public law? Or put another way: Will private rulemaking and enforcement influence the shape and content of global norms for freedom of expression, privacy and other human rights? If so, what are the pathways to such impact? Can we expect that public institutions will refer back to company decision making that articulates human rights or other public law standards, almost as a kind of development of a public-private common law of user-generated content? Would this be a good thing? What are the risks, if any, to principles of democratic governance? Should public law incentivize or constrain these developments as an aspect of regulation of the super-dominant companies of the tech industry? Should public regulation aim to channel decisions regarding public norms into democratically legitimate fora?
The global debate over the power of social media — and the power of super-dominant companies in the internet sector, at all levels of “the stack” — has reached an inflection point. Regulation is coming — indeed, in authoritarian environments, it is already here — and the shape and resilience of global norms are at stake. Policy makers, legislators, jurists and the public need to think through the questions just posed in order to determine how, as Benjamin Barber asked nearly a quarter-century ago, the internet can “belong” to us in a democratic sense.