On Forging a Path to Digital Rights

How can we make progress toward establishing reliable, accessible rights when there’s so little institutional machinery to work with?

April 13, 2022
Countries are increasingly embracing drones and satellites to map land and minimize conflict rising from ownership disputes, but unequal access to these technologies can further endanger the rights of vulnerable people, analysts say. (Joshua Stevens/NASA/via REUTERS)

While there’s an enormous amount of debate about what they should be, or how they should work, nearly everyone agrees that the world needs more, and more clearly articulated, digital rules. The United States’ White House is drafting a “Bill of Digital Rights,” the Chinese Government has implemented personal data and national security legislation, India is reviewing proposed changes to its 2019 Personal Data Protection Bill and Europe is on the cusp of passing the Digital Services Act. These efforts, while well-intentioned, are generally counterproductive — if only because they all exist in largely self-defined terms, without much consideration (or likelihood) of international interoperability. They undermine their stated goals by focusing on consolidating domestic power instead of building toward universal, equitable and interoperable digital rights.

Depending on the audience, it may be genuinely educational or insultingly obvious, but it’s worth restating that no rights — whether human, commercial or criminal — are self-executing. Every single right that civilized people enjoy relies, at some level, on its protection by enforcing institutions — and often, these institutions have substantial incentives to disagree with each other. For example, while the European Union and Ireland ostensibly share an interest in enforcing the General Data Protection Regulation (GDPR), civil liberties groups are currently suing Ireland for, essentially, exploiting its position to curry favour with technology companies — a suit which comes after a long-running dispute over whether Ireland’s technology company tax breaks did the same. These types of political incentives occur at all levels, across an extraordinary range of rights-enforcing institutions — from international governance institutions, such as the World Trade Organization and the UN Security Council, to domestic governance institutions, such as court systems and commercial regulators, to fully private systems, like social media platforms. When it comes to digital rights, the gaps and political economies that separate these varied institutions have slowed, if not altogether stopped, meaningful progress in building globally accessible digital rights.

The Political Path to Digital Rights

As the World Economic Forum highlighted in its recent white paper “Pathways to Digital Justice,” there is an enormous gap between the harms technologies cause and our understanding of how to resolve those harms. This isn’t for lack of attention or resources. Data governance and digital rights have prominent placement in international negotiations, billions of dollars have been spent by the technology industry to motivate and shape global standards, and advocates and policy makers alike have undertaken significant, sustained campaigns to prompt action. What these projects haven’t accomplished, though, is building mechanisms that enable those who hold digital rights or interests, or who have experienced harms, to seek redress. Even before we address the wisdom or feasibility of novel digital rights, it’s already hard to count on the few meaningful digital rights that do exist, because they’re enforced only by authorities and mechanisms that are inaccessible and unaccountable to the public. Rights you can’t enforce are hardly rights at all.

So, how can we make progress toward establishing reliable, accessible rights when there’s so little institutional machinery to work with? While there are no easy answers, there are quite a few good answers.

While there are lots of ways to compel institutions to take action on digital rights, history demonstrates that the strategy and tactics must be as rooted in non-violent conflict and political organizing as they are in legal precedents or profit motives.

The same gaps and disincentivizing disconnects between rights-enforcing institutions that are causing governance chaos also offer opportunities for emergent actors to present solutions and for existing institutions to extend their work into the enforcement of digital norms. The fragmentation of these systems creates a political economy of its own, forcing difficult choices for rights advocates — such as whether to focus on local actions, to maximize the depth of influence on a system, or on larger enforcement ecosystems, to maximize the number of people affected by a change to the rules. These types of strategic considerations are important not only for those focusing on policy fixes but also for those seeking redress.

Compelling an organization or institution to enforce a right, digital or otherwise, is an exertion of political influence on behalf of both the person whose interests the right defends and the institution exerting authority, whose reach is extended in the process. And while there are lots of ways to compel institutions to take action on digital rights, history demonstrates that the strategy and tactics must be as rooted in non-violent conflict and political organizing as they are in legal precedents or profit motives. It often takes all four, in concert, to create a reliable digital rights enforcement infrastructure.

Scoping “Digital Rights”

Before jumping into how to build a digital right, though, it’s important to specify what we do — and don’t — mean: here, a digital right is an enforceable, legally recognized interest emanating from an action executed through, or impacted by, the use of a digital system. This definition differs from the more pithily offered “digital rights are human rights”: human rights are human rights, and they’re not somehow obviated by the existence or use of digital tools. Separately, however, the use of digital systems fundamentally impacts the power balance of existing, and often regulated, relationships, by altering how those affected are able to understand, participate in and seek redress for the outcomes. Digital rights are often intended as a corrective for the shifts in equity that occur as a result of digital transformation, and so, in addition to normative rights, often include “procedural” rights — rights aimed at preserving and protecting agency in rights-affecting decisions. For example, the GDPR’s protections, while fundamentally rooted in data markets, are also rooted in the idea of faithfully executed consent — granting rightsholders the ability to challenge the substance of, and to an extent the use of, data about them.

One of the secondary, but determinative, characteristics of digital systems is that they almost always rely on private companies and actors for implementation, even where they materially shape publicly protected rights. So, for example, a number of cities and states around the world use digital platforms owned and administered by private companies to manage critical, rights-determining services — such as payments infrastructure, COVID-19 vaccine appointments, and even decisions as to when prisoners should be released from jail. That reliance intermediates the governance of those technologies and the companies that own them, so that these private interests come to occupy publicly protected relationships, all executed through the design and administration of their own systems.

For something to be a right in more than name, it needs to connect to a publicly accessible enforcement system, not just to appear to adhere to legal standards.

That reliance often leads technology advocates to claim, as (in)famously described by Professor Lawrence Lessig more than 20 years ago, that “code is law.” While an enormous amount of scholarly debate has contested the accuracy of the observation, it’s been most commonly rebuked with the truism that law is, in fact, law. And Lessig’s broader point, explained through his “pathetic dot theory,” is that our lives are governed by a combination of social regulation systems, all of which have a bearing on our rights in practice — and most of which are obfuscated or absent in the design of digital systems. For something to be a right in more than name, it needs to connect to a publicly accessible enforcement system, not just to appear to adhere to legal standards.

Achieving that connection is no small task — both because of the technology industry’s cultural bias toward scale at all costs and because access to justice has been declining, globally, for years. The technology industry’s historical approach to scale is to create globally accessible digital businesses and then to reverse-engineer their relationships with public and legal authorities. This approach not only deprioritizes the rights of the people impacted by digital systems, but also dismisses the legitimate authority of the few publicly accessible rights-enforcement institutions that exist for most of the world. Even digital platform providers are realizing that high-integrity digital systems are impossible until we have accessible mechanisms that enable us to reliably enforce our rights.

Digital Rights Starting Points

Many organizations are engaged in defining their digital politics. Most are understandably struggling with how to not only establish the rules but also build the systems to realize them. It’s one thing to say, for example, “We value your privacy,” in the abstract. It’s quite another to enable users to audit your use of data, so that they can see it for themselves. It’s still one step further to have mechanisms that let users investigate, challenge or hold violators directly accountable for the ways in which their actions impact user rights.
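To make that distinction concrete, the sketch below illustrates the second and third steps under stated assumptions: a processing log that records each use of a person’s data and lets that person see and contest it. The class, field and method names are hypothetical, not drawn from any real platform or regulation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ProcessingEvent:
    """One use of a person's data: what was used, why, and by whom."""
    user_id: str
    data_category: str   # e.g., "location", "contacts" (illustrative categories)
    purpose: str         # e.g., "ad_targeting", "fraud_detection"
    processor: str       # internal team or third party
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    challenged: bool = False


class AuditLog:
    """A user-queryable record of data use: visibility first, contestation second."""

    def __init__(self) -> None:
        self._events: list[ProcessingEvent] = []

    def record(self, event: ProcessingEvent) -> None:
        self._events.append(event)

    def events_for(self, user_id: str) -> list[ProcessingEvent]:
        # Visibility: a user can see every recorded use of their data.
        return [e for e in self._events if e.user_id == user_id]

    def challenge(self, user_id: str, purpose: str) -> list[ProcessingEvent]:
        # Contestation: flag every use of the user's data for a disputed purpose.
        disputed = [e for e in self._events
                    if e.user_id == user_id and e.purpose == purpose]
        for e in disputed:
            e.challenged = True
        return disputed
```

The data structure is the easy part; the hard parts are ensuring the log is complete, that users can actually reach it, and that a challenge connects to an enforcement mechanism rather than a dead end.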

There’s nothing simple about the design of that architecture. It makes sense, therefore, that rather than reinvent the wheel, most technology platforms have begun modelling aspects of their customer and content management after existing judicial systems. The goal is a system that operates at the scale of global standards, for ease of implementation, while remaining responsive to the locally contextual considerations that characterize and define justice. Yet, it’s critically important to recognize that although these systems are often modelled on a justice system, they are privately and opaquely administered, with no independently guaranteed rights in process or substance.

Even the prospect of independent oversight — while critical for a range of rights and market-integrity reasons — creates significant political contests and, more often, the need for digital systems to structurally cope with conflicting authorities. One prominent example is how platforms implement government policies around political speech.

Thailand’s restrictions on the coverage of its royalty may be acceptable domestically but aren’t appropriate to extend internationally. In such situations, platforms have to build and maintain systems capable of federating on the basis of local policies, an expensive proposition, or choose to only operate in places that agree on the same political speech policies, which would limit their ability to profit and expand. These structural considerations are foundational for anyone interested in realizing digital rights, not only because they are how digital systems operate in practice, but also because they frame the very incentives for and possible approaches to creating digital rights at all.
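As a rough illustration of what that federation can look like in practice, the sketch below resolves content rules per jurisdiction on top of a global baseline. The jurisdictions, rule names and post fields are assumptions made for the example, not any platform’s actual policy engine.

```python
from typing import Callable

# A rule inspects a post and returns True if it must be restricted.
Rule = Callable[[dict], bool]

# Baseline rules applied everywhere (illustrative).
GLOBAL_RULES: list[Rule] = [
    lambda post: post.get("incites_violence", False),
]

# Jurisdiction-specific rules layered on top (illustrative), e.g. a country
# that restricts critical coverage of its royalty.
LOCAL_RULES: dict[str, list[Rule]] = {
    "TH": [lambda post: post.get("topic") == "royalty_criticism"],
}


def must_restrict(post: dict, jurisdiction: str) -> bool:
    """Apply the global baseline plus whatever rules are local to the viewer's jurisdiction."""
    rules = GLOBAL_RULES + LOCAL_RULES.get(jurisdiction, [])
    return any(rule(post) for rule in rules)


post = {"topic": "royalty_criticism", "incites_violence": False}
print(must_restrict(post, "TH"))  # True: restricted where the local rule applies
print(must_restrict(post, "CA"))  # False: visible elsewhere
```

Every additional jurisdiction adds rules, review queues and appeals paths to maintain, which is part of why federation is expensive and why some platforms choose to operate only in markets whose speech policies they can encode.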

Here are a few stages of building a strategy for creating a digital right in practice, with some of the important considerations and tactical trade-offs at each step.

Defining the Problem, Politically

This may feel obvious, but digital rights require more articulation than most because they are typically implemented without traditional limitations such as geographic jurisdiction. At this stage, the first and most important questions are: “Are your interests best served by establishing a right?” And, if so, “What would that look like?” For example, are you trying to create an affirmative condition, such as a “safe” or “secure” environment, or are you trying to prevent a behaviour, such as companies selling your data or ignoring software defects with dire consequences?

Digital rights require more articulation than most because they are typically implemented without traditional limitations such as geographic jurisdiction.

Prioritizing the Rightsholders

For many digital advocates, the hardest question is “Who is this right for?” It’s a sympathetic position — those who devote themselves to creating rights are typically doing it so that everyone has the same rights — and yet, because no enforcement infrastructure reaches everyone equally, even a successfully created right will be implemented unequally. What varies is not whether the rights as drafted are equal, but whether their prioritization in the system is transparent, and how the system compensates, through investment, for those left unserved by that design. By commission or omission, rights advocates pick and choose which institutions they influence and, as a result, which groups of people stand to benefit from their advocacy. For people seeking redress, the answer to “Who is this right for?” is more obviously “me” — but knowing that doesn’t necessarily help them identify the authority best fit to solve their problem. One of the reasons most people don’t know how to access their rights is that platforms overtly avoid specifying who they’re for and what they protect, forcing victims to argue why those involved are responsible for the harms caused, before they’re even able to start arguing the merits of their specific case.

Asking Who “Owns” the Problem

While many parties may play a role in the integrity or harms of a digital system, the responsibility for those harms is, typically, established by government — either by setting the regulatory requirements for legitimate private agreements or by directly asserting its authority through enforcement. For public authorities, the problem-of-ownership question is often answered through jurisdiction, which justifies an institution’s influence on the basis of the characteristics of a dispute, such as where it happens, who it happens to or the type of legal issue involved. For private actors, such as the companies that own digital platforms, “owning a problem” is typically related to the legal articulation of a duty or liability, which is generally set by governments but then specifically agreed through contracts, such as terms of service or data licences. The broader expectation is that people who build digital systems have a responsibility for ensuring that what happens is legal, in both process and outcome.

The challenge for digital rights, more widely, is that digital systems often span theories of public jurisdiction and private responsibility, as well as political systems, creating significant conflicts of interest and law. As a result, the rules that define digital systems are most commonly influenced and enforced by institutions on the basis of their control over two things: one, a company’s assets and infrastructure, and two, a company’s access to a lucrative market. In other words, digital rights are determined as much by an authority’s leverage over the actors involved as by their claim to jurisdiction.

Creating Leverage and Incentives

One of the increasingly common features of modern digital rights legislation is that it not only creates enforcement powers but also establishes a base of leverage over the private companies that control digital systems. The terminology for “creating leverage” varies — it’s often referred to as digital sovereignty or data localization, which requires that digital systems host their data within the territorial jurisdiction, enabling governments and courts to physically access digital assets. For some countries — such as China and, more recently, India — submission to local authorities has been a precondition of market access, usually implemented through banking and financial regulations. In other digital rights legislation, for example, the European Union’s proposed Digital Services Act, registering a local representative is a precondition for access to publicly owned digital resources, such as data. These requirements are an attempt to both consolidate domestic influence and create a point of leverage over privately owned digital interests. In other words, they are also a recognition of the disparity between legal jurisdiction and the leverage necessary to compel private, digital actors to change their behaviour.
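A simplified sketch of what a data-localization requirement can mean for system design, under assumed rules: records are routed at write time to a storage region determined by the user’s jurisdiction, so that local authorities have physical reach over the assets they regulate. The region names, residency rules and storage call are hypothetical.

```python
# Hypothetical residency rules: jurisdictions that require in-country storage.
RESIDENCY_RULES: dict[str, str] = {
    "IN": "in-mumbai-1",
    "CN": "cn-shanghai-1",
    "EU": "eu-frankfurt-1",
}
DEFAULT_REGION = "us-east-1"  # fallback where no residency rule applies


def storage_region(user_jurisdiction: str) -> str:
    """Pick the region a record must be written to under the residency rules."""
    return RESIDENCY_RULES.get(user_jurisdiction, DEFAULT_REGION)


def write_record(record: dict, user_jurisdiction: str) -> str:
    region = storage_region(user_jurisdiction)
    # A real system would persist to jurisdiction-bound infrastructure here,
    # e.g. store(record, region); this sketch just reports the routing decision.
    return f"stored in {region}"


print(write_record({"txn": 42}, "IN"))  # stored in in-mumbai-1
print(write_record({"txn": 43}, "BR"))  # stored in us-east-1 (no rule assumed)
```

The routing logic itself is trivial; the leverage comes from the fact that the infrastructure behind a region like “in-mumbai-1” is physically and legally within the regulator’s reach.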

For those without the direct authority to set market conditions, the work of establishing digital rights often comes down to appealing to a government or private company’s (benevolent?) self-interest by aligning incentives. With government, that work often means motivating individual politicians’ constituencies to pressure action or clearly illustrating the public harms caused by a system. For private systems, there are even more options — from changing the costs of production by intervening in the digital supply chain, to using insurance products to increase the cost of doing business, to working directly with shareholder advocates. There are significant opportunities to manage the incentives of the authorities that define digital rights in practice — especially by using long-term interests to encourage upfront investments in rights and integrity, even when they incur a short-term cost. Those investments, and the advocacy that compels them, vary substantially in impact, which is affected particularly by how directly a government or company is responsible for the actual administration and implementation of the underlying system or right.

Implementing, Participating In and Embedding Independent Oversight

Most digital rights aren’t implemented by the institution or authority that establishes them. As with financial or environmental regulation, the standards are set by an authority, but the organizations that operate the systems that realize them are more likely to be private companies. That segregation leads to a huge range of practice, from well-intentioned actors with user-accessible mechanisms for overseeing and enforcing their rights, all the way to organizations that elaborately “perform” digital rights while basing their business model on exploiting them. Rather than focus on one end of the spectrum, it’s worth noting three things about digital rights implementation.

First, digital systems often impact a broad range of rights, meaning that it’s insufficient to build systems that centre on enforcing one right — even when it’s a procedural right. The most effective digital rights implementations create open mechanisms for those impacted to raise concerns, and to triage both the legal issues and the appropriate systemic response.

Second, digital rights enforcement is, for the most part, not directly accessible to the people who hold the claims. That’s not to say there aren’t important efforts to establish and enforce digital rights, but the actual means of participation are indirect. Nearly every credible enforcement mechanism relies on some third party agreeing to investigate and pursue a rights claim — whether it’s a domestic regulator like the US Federal Trade Commission or a social media platform’s content moderation or trust-and-safety team.

The term for a right that individuals can enforce themselves is a direct right of action. With digital rights, however, the vast majority of enforcement is done through public authorities, such as domestic regulators, or, when rightsholders are fortunate, through the systems a platform has developed to ensure trust and safety. Unfortunately, these are fairly limited mechanisms, considering the diversity of rights-affecting impacts of digital systems — and considering the relative inaccessibility of courts to most people (and most courts’ limited leverage). Unless a digital system proactively designs mechanisms for direct rights of digital action, there are very few credible means by which users can participate in realizing their rights.

Finally, even in digital systems where users are able to raise concerns about their rights, those mechanisms rarely connect to any kind of independent oversight. Most digital systems don’t provide the transparency or participation mechanisms necessary for people to understand how those systems affect their rights — and where they do, they almost never enable a user to raise the issue with an independent authority like a regulator or court. People pursuing rights claims often have to work against the digital system that harmed them, even to assemble the information that enables them to bring the claim, let alone to reach an independent enforcing institution.

Unless a digital system proactively designs mechanisms for direct rights of digital action, there are very few credible means by which users can participate in realizing their rights.

Of course, all these issues are compounded by the fact that digital rights are, mostly, implemented by the organizations that stand to benefit from their exploitation. For example, digital privacy rights are, in practice, implemented by organizations that have data and an incentive to share it. Implementing privacy means giving users visibility into what’s happening in those systems, a means of participating — often as an individual and as part of a collective — in the decisions that influence data’s use, and a mechanism for demanding accountability when the system is abused. The reality is that the complexity of monitoring and exerting leverage over actors with adverse interests, just to preserve a baseline set of rights and integrity standards, is so overwhelming that most don’t even try.
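For a sense of what the “participation” piece can look like structurally, here is a minimal sketch, assuming a purpose-based consent registry: data use is gated on consent the user can grant and revoke, rather than merely logged after the fact. The names and the default-deny behaviour are illustrative assumptions, not a description of any real system or of the GDPR’s requirements.

```python
class ConsentRegistry:
    """Tracks, per user and purpose, whether a use of data is currently permitted."""

    def __init__(self) -> None:
        self._grants: dict[tuple[str, str], bool] = {}  # (user_id, purpose) -> granted

    def grant(self, user_id: str, purpose: str) -> None:
        self._grants[(user_id, purpose)] = True

    def revoke(self, user_id: str, purpose: str) -> None:
        # Participation: consent can be withdrawn and takes effect on the next check.
        self._grants[(user_id, purpose)] = False

    def allowed(self, user_id: str, purpose: str) -> bool:
        # Default deny: no recorded grant means no permitted use.
        return self._grants.get((user_id, purpose), False)


def use_data(registry: ConsentRegistry, user_id: str, purpose: str) -> str:
    if not registry.allowed(user_id, purpose):
        # Accountability hook: in a fuller system this refusal would also be
        # logged and surfaced to the user and, ideally, to an independent overseer.
        raise PermissionError(f"no consent from {user_id} for {purpose}")
    return f"processing {user_id}'s data for {purpose}"


registry = ConsentRegistry()
registry.grant("alice", "service_delivery")
print(use_data(registry, "alice", "service_delivery"))  # permitted
registry.revoke("alice", "service_delivery")  # withdrawn; the next use_data call would be refused
```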

And that’s just one type of right. For a sense of the kind of oversight necessary to ensure compliance across a big technology company, it’s worth remembering the US government’s enforcement of an antitrust ruling against Microsoft — where, essentially, the government embedded staff inside the company for years to ensure compliance. That approach was heavy-handed and came early in the history of big tech; modern efforts instead require companies of a certain size to have in-house compliance professionals, along the lines of the GDPR’s concept of the Data Protection Officer. While ensuring compliance is also an important part of any digital rights enforcement system, such a system very rarely has the mandate, infrastructure or influence to fulfill the functions of participatory governance or justice.

Building in Adjudication, Accountability and Adaptation

While most coverage of digital rights systems tends to focus on holding bad actors accountable or redressing the harms they cause, that focus elides the systemic importance of legal systems. Effective adjudication systems do more than resolve disputes: they interpret and evolve the underlying law. As noted by Nobel laureate Elinor Ostrom, participant-driven adaptation is a key tenet of any effective governance system, particularly those intended to operate at any scale or over any meaningful duration. For a digital right to be effective at scale, over time, the implementing architecture will need to provide not only for the short-term needs of dispute resolution and meaningful accountability, but also for the ways those processes shape the underlying rules and, ideally, their implementing architecture. Without that, even where they exist, digital rights are likely to remain subjectively applied and decreasingly relevant to changing circumstances, undermining both their legitimizing and governance benefits.

Conclusion

There isn’t any one path to the creation or effective implementation of digital rights, but there are lots of obvious ways in which digital rights, once created, are rendered useless by their implementation. Too often, the pathway to creating digital rights is so challenging that it precludes consideration of their implementation, governance and eventual adaptation. Ultimately, digital rights advocates who fail to consider the mechanics of democratically governed implementation often see their work undermined at best and, more commonly, actively exploited by those who benefit from its absence.

As we look to a raft of promising, emergent digital rights frameworks, it’s worth remembering that nearly all of the systems they’re meant to govern are international in practice. That’s not a hypothetical observation; the Chinese government has already begun attempting to apply its national security law extraterritorially to businesses in the United Kingdom. Similarly, the United States and the European Union have been operating without a functioning data transfer framework since Schrems II, the European Court of Justice case that invalidated a transatlantic data-sharing agreement for the second time, leaving both governments to establish digital rights without any consideration of how they’ll interoperate with inter- and extra-national interests. Fundamentally, all of these digital rights frameworks are intended to shape the design of infrastructure that operates across legal jurisdictions and political systems and with different scales of impact — and almost all of them lack the means or mechanisms to deal with the inevitable conflicts that will arise.

As the old saying goes, “If you want to go fast, go alone — if you want to go far, go together.” If we want digital rights that will endure, let alone create a better world, we’ll need to focus a lot less on individual rights and a lot more on the architectures of our participation in their implementation and governance.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Sean Martin McDonald is the co-founder of Digital Public, which builds legal trusts to protect and govern digital assets.