COVID-19 Lessons: Building Disaster-Ready Technologies

October 12, 2020
A nightclub security guard scans a QR code for a pre-filled registration form during the outbreak of COVID-19 in Lausanne, Switzerland. (Reuters/Dom Smaz)

Ten months into the COVID-19 pandemic, the virus is surging and our response efforts, from political messaging to contact-tracing apps to selective lockdowns, are internationally fragmented, conflicting and, most concerning, mistrusted. The COVID-19 pandemic, while unique in some ways, follows the very typical disaster hype cycle — where emergency gives way to techno-solutionism, and unchecked scientific experiments get deployed at scale and justified through exigency. The problem with rushing unproven technologies into emergencies is that they don’t exist in a vacuum, and so emergency response efforts are deeply coloured by the political, economic and cultural contexts in which they’re deployed. And in many contexts, that means subjecting already vulnerable populations to violence, isolation and poverty. In a pandemic, the failure to earn the public’s trust in the exercise of emergency powers, in particular emergency surveillance and law enforcement powers, isn’t an abstract political failure, it’s a public health problem.

So, what has COVID-19 taught us about how we develop public interest technologies and institutions amid disaster? Perhaps the most important realization has been the importance of public adoption and trust. Too often, trust is treated as an individual’s problem to solve, as opposed to something that has to be earned through transparency, good-faith implementation and accountability. For better or worse, contact-tracing (and exposure notification) technologies have become the face of the technology industry’s efforts in the pandemic. And, as a result, contact-tracing applications have become a mirror, in many places, for the complicated relationship between public health authorities and their people, further refracted through their relationship with technology and law enforcement. Said differently, governments and technology companies are realizing that they have to earn trust not through an app or a slogan, but by designing an effective, holistic response.

In a pandemic, the failure to earn the public’s trust in the exercise of emergency powers — in particular, emergency surveillance and law enforcement powers — isn’t an abstract political failure, it’s a public health problem.

Disasters can bring out the best in us, but only if we act with careful planning, preparation and coordination. Much of the world is learning that the same is true for how we build and use public technology and data in emergencies. The challenge of negotiating the details of how and why we prepare and coordinate — especially in ways that protect our basic humanity, and transcend politics and conflict — is the problem that created the humanitarian movement. And, just as well-meaning governments, technology companies and philanthropies often rush into disaster response efforts without consulting the professionals, the rush to roll out new technologies during disaster often ignores the communities and organizations that have deployed tech in previous emergencies. There are a number of humanitarian and international development professionals with years of experience deploying technology interventions into uncertain environments, yet they are rarely consulted by domestic policy or industry leaders in ways that build on their hard-won lessons. Failing to learn the lessons of previous disasters is the easiest way to get stuck in them, and the same is true when building and deploying technologies that require high degrees of public trust to work, especially in vulnerable and last-mile communities.

Here are five of the most important challenges and lessons for deploying public technologies in disaster.

The first is validation. People don’t experience life’s problems or solutions through technology protocols or feature descriptions, and so it’s important to test for — and communicate the value of — technologies based on their impact, not their features. Fundamentally, governments and technology companies around the world are still deploying contact-tracing apps without any meaningful scientific indication of their value or impact. Even in places such as Iceland, where adoption has been high, the number of notifications remains in the hundreds. And, as many have noted, there’s a more fundamental issue: mobile phones aren’t able to easily or equitably capture information that’s critical to understanding the risk of COVID-19 transmission, such as whether a person is wearing a mask or is outdoors. The fact that these apps have been deployed in dozens of countries around the world while the science underpinning them is unproven, at best, is a sign to the public that governments are willing to experiment on them recklessly. The disregard for clear experimentation goals or for checks and balances in implementation, as well as the lack of rigour in explaining the practical value of these technologies to the public, undermines not only individual interventions but also the legitimacy of and trust in the underlying institutions.

The second major structural challenge in developing public interest technologies is deployment. The adoption of technology platforms is broadly fragmented, across access to infrastructure, the cost of connectivity, device hardware, and operating and application software, among many other variables. Simply put, focusing public interest responses to any crisis on technology interventions means that those technology differences become differences in access to emergency public health services. It’s well understood at this point that technology is not a standalone solution to any complex public problem. What public institutions often fail to recognize is that when public initiatives are delivered through technology, they inherently embed those inequalities in the service itself, becoming a source of inequality themselves. In other words, introducing technology interventions into a public emergency response does not merely threaten inequality of protection; it actively creates it if the risk is not proactively addressed before deployment.

The third challenge is the practical importance of preserving the difference between experimentation settings, like labs, and emergency treatment and response settings, like mobile phones or hospitals. Once technologies are deployed by governments, they assume a life of their own. In the case of COVID-19 technologies, there are already numerous examples of law enforcement using violence to enforce COVID-19 distancing and technology use, often in ways that result in death. There was a period of the response in Kenya, for example, where police enforcing curfew and lockdown killed more people than the disease did. In Israel, public health facilities report that the overreach of public health surveillance has actively hurt the success of the response effort. There are also significant commercial and criminal abuses of technologies that are designed to mislead users into sharing data — and even some governments accused of deploying these technologies to target political opponents. These factors, at the very least, can cause significant damage to the public’s trust in legitimate response efforts. And the people who were already afraid of or resistant to public institutions are likely to remain so. It’s easy to forget that it took the Nuremberg Trials and war crimes before the international community got serious about building infrastructure for bioethics, and it’s worth asking how many more war crimes it will take before we build the institutional infrastructure for ethically testing public technologies. There are currently no public labs or market regulators that make the distinction between experimentation and practice in technology, and if the headlines — even pre-COVID-19 — are any indication, the political costs of experimenting in public will remain extraordinarily high.

The fourth clear challenge is that disaster technologies require affirmative communication of the underlying data governance as a precondition of success. Most of the media coverage of contact-tracing and exposure notification apps wasn’t about whether they could plausibly work but about whether they could be exploited to harm users. That messaging and that standard simply aren’t enough to encourage public understanding, let alone adoption, in many parts of the world. The 2020 Edelman Trust Barometer shows a universal decline in trust across sectors, and there’s recent research suggesting that COVID-19 has actively undermined trust in science. In the United States, we’re seeing a record number of patients ignoring and refusing to communicate with contact tracers, in part because lawmakers politicize contact tracing in their remarks. Another piece of recent research found that the largest single source of COVID-19 misinformation is US President Donald Trump. Studies have also demonstrated the relationship between the consumption of specific media in the United States and the likelihood of contracting COVID-19. The fragmentation in public messaging is significant and is leading to very real public health consequences. The broader lesson, however, is that “the app probably won’t help us, or hurt you” isn’t public health messaging, and that any use of technology as a public health measure should require clear, affirmative statements about the specific, expected impacts of using that technology. There are already a number of cases, as in the United Kingdom, where users understandably struggle to make sense of their notifications in context. And messaging, or its failure, is often as determinative of both public adoption and success as are the features of the underlying technology.

The final challenge: public technologies have large and dangerous second-order impacts. Any time public agencies and institutions release a technology, they also, de facto, legitimize it for use by a range of often unanticipated actors, from insurers to educators to employers. Mobile applications that start as opt-in and consent-focused may become mandatory, not because the government says so, but because employers’ insurance providers do. This scenario has already played out, with contact-tracing apps and data collection programs becoming required elements of resuming life in India, Singapore and the United Kingdom. People, industry and institutions are, ultimately, risk-averse. In many cases, it’s easier for universities or other powerful organizations to compel their constituents to download an app than it is for them to resist the pressure to resume business as usual. The European Commission, for example, is already discussing expanding contact-tracing app approaches internationally to other disease categories. Said differently, it is not enough for a government to rely on existing rights and protections when deploying a novel technology, nor is it enough for it to limit its consideration of potential harms to those from direct government action; governments must also consider the impact of private use of the same tools, and the protections people need to ensure their well-being against commercial use. Those effects, too, are part of how the public sees public interest technology when weighing whether or not to adopt it.

Designing Trusts to Balance Politics and Science

A secondary — although arguably as important — role for research ethics is to help public institutions navigate the increasingly difficult relationship between politics and science. One of the polarizing trends emerging amid the digital response to COVID-19 has been the politicization of science. That politicization isn’t new, or surprising, but it has a tremendous public cost.

Ethical research and experimentation infrastructure isn’t just window dressing; it progressively aligns the design of risk and accountability for experimenters. In research settings, we afford subjects elevated protections under the law — including the need for ongoing informed consent, transparency about the nature of the experiment and professional oversight to ensure that the methods are proportional. The way those rules are enforced varies considerably by context, but they don’t rely on governments alone — as in bioethics, it’s possible to build a significant amount of trust-building industry architecture ourselves.

Any time public agencies and institutions release a technology, they also, de facto, legitimize it for use by a range of often unanticipated actors, from insurers to educators to employers.

One of the classic ways that law creates protections, especially in contexts where the power asymmetries and the potential for misuse are high, is through legally accountable experts, often called fiduciaries. Fiduciaries are people who manage our fundamental rights and interests when we’re unable to do so ourselves. Fiduciary duties often apply to professionals such as doctors, lawyers and asset managers, but also have applicability in land conservation, public interest journalism and, increasingly, data rights. One type of fiduciary entity, specifically one charged with overseeing the rights and assets of others, is a trust. A trust is a legal instrument that creates fiduciary duties based on the assignment of rights over property.

Data trusts are legal instruments that manage data, or the rights to data. Trusts are common law instruments — and although they vary significantly in nuance, every legal jurisdiction has some form of legally recognized rights assignment. Some jurisdictions use the term trust to refer to multiple things, including organizations, and most jurisdictions have at least slightly different requirements. Despite that, legal trusts are the most common and harmonized private law instrument for fiduciary management of assets in the public interest — and data trusts are best understood as that legal instrument applied to data rights management.

Building Trusts Now for Disaster’s New Normal

Data trusts are unique in that they enable a wide range of stakeholders to explicitly architect the terms, rights and accountability surrounding the exchange of data for a specific purpose. And, notably, they align with a range of important structural requirements for ethical experimentation, fiduciary representation and data governance.
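To make that architecture concrete, here is a minimal, hypothetical sketch (in Python) of the kinds of terms a data trust makes explicit: a purpose, a fiduciary trustee, beneficiaries, and the uses it does and does not permit. The field names and the permission check are illustrative assumptions for this article, not a real standard or any existing trust’s legal language.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class DataTrustTerms:
    """Illustrative, hypothetical terms a data trust might make explicit."""
    purpose: str                   # the specific purpose the data may serve
    trustee: str                   # the fiduciary accountable for decisions
    beneficiaries: List[str]       # whose interests the trustee must serve
    permitted_uses: List[str] = field(default_factory=list)
    prohibited_uses: List[str] = field(default_factory=list)
    sunset: str = ""               # when the rights assignment expires

    def is_permitted(self, proposed_use: str) -> bool:
        # A trustee-style check: a use must be explicitly permitted
        # and not explicitly prohibited.
        return (proposed_use in self.permitted_uses
                and proposed_use not in self.prohibited_uses)


# Example terms (entirely hypothetical values)
terms = DataTrustTerms(
    purpose="exposure notification during the COVID-19 emergency",
    trustee="independent public health data trustee",
    beneficiaries=["app users", "public health authorities"],
    permitted_uses=["exposure notification"],
    prohibited_uses=["law enforcement access", "commercial resale"],
    sunset="end of the declared emergency",
)

print(terms.is_permitted("exposure notification"))   # True
print(terms.is_permitted("law enforcement access"))  # False
```

The point is not the code itself; it is that a trust forces these terms to be stated, scoped and assigned to an accountable party before data changes hands.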

There are a number of instances in which data trusts are already being deployed to manage the integrity of critical public interest technology initiatives. For example, the Johns Hopkins Medicine Data Trust Council applies institutional oversight to the internal sharing of data collected through clinical care with researchers. OpenCorporates, a UK-based organization focused on building the world’s largest beneficial ownership registry, has put ownership of that database into a trust, ensuring that if the organization folds, the assets will be managed in the public’s interest. The Black Lives Matter movement in the United States is similarly experimenting with data trusts to create a legal basis for representation and rights protection in digital environments. And, of course, Facebook’s recently announced Oversight Board is legally organized as a trust, to help ensure its independence. In each of these cases, the trust is designed to perform a specific function that, if successful, will contribute to the integrity of, and public influence in, digital asset management and rights ecosystems.

Preparing Publics and Technology for Disaster

If COVID-19 has demonstrated one thing, it’s that we will very likely need the ability to deploy technologies in response to public crises. Accordingly, we’ll need to make investments in the institutional infrastructure necessary to regain the trust of the public — both in technology and in the science and health information it’s designed around. Those institutional infrastructures exist in a wide range of fields and, where successful, create representation and accountability relationships. COVID-19 has also demonstrated that without that infrastructure, there can be significant political, commercial and even public pressure to launch technologies that aren’t ready for deployment, often in ways that come at great political, financial and health costs.

The more we can acknowledge the experimental nature of the technologies we deploy to solve public crises, the easier it will be to identify and scale successful interventions. That isn’t just a question of “is the tech good enough?”; it also requires that the tech be deployed in ways that create clear statements of impact, transparent reporting and direct accountability. Data trusts aren’t a silver bullet for the problems that emergency technology interventions can create, but they are an important tool for creating meaningful accountability and due process in digital governance, even during an emergency. And if governments expect the public to use whatever technology intervention is deployed to coordinate the next disaster response, trusts may be a key part of transparently and accountably building the foundation.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Sean Martin McDonald is the co-founder of Digital Public, which builds legal trusts to protect and govern digital assets.