Opinion

The Post-pandemic Future of Trust in Digital Governance

July 29, 2020

This article is a part of Global Cooperation after COVID-19, an essay series offering analysis of the post-pandemic world.

Even prior to the COVID-19 pandemic, “trust” was a key concept for governments as they asked citizens to make a leap of faith into an increasingly digital and data-driven society. Canada’s Digital Charter was billed as a tool for “building a foundation of trust.” Australia’s Data & Digital Council issued Trust Principles. Trust was a key theme in “Strengthening Digital Government,” a statement from the Organisation for Economic Co-operation and Development. Yet, in spite of this focus on trust, a 2017 study found disturbingly low levels of trust in governments’ handling of citizens’ data in the United Kingdom, the United States and Australia.

The COVID-19 pandemic has further laid bare this lack of trust in government. In the debates around contact-tracing apps, it became clear that Western governments did not enjoy public trust when it came to data and technology. When they sought to use technology to support public health contact tracing during the pandemic, governments found that this lack of trust seriously constrained their options. Privacy advocates resisted contact-tracing technologies, raising concerns about surveillance and function creep. They had only to point to the post-9/11 surveillance legacy to remind the public that “emergency” measures can easily become the new normal.

Working with privacy advocates, Google and Apple developed a fully decentralized model for contact tracing that largely left public health authorities out of the loop. Not trusting governments to set their own parameters for apps, Google and Apple dictated the rules. The Google-Apple Exposure Notification system is limited to only one app per country (creating challenges for Canada’s complicated federalism). It relies on Bluetooth only and does not collect location data. It requires full decentralization of data storage, demands that any app built on the protocol be used voluntarily and ensures post-pandemic decommissioning. Governments that saw value in collecting some centralized data — and possibly some GPS data — to support their data analyses and modelling found themselves with apps that operated less than optimally on Android or iOS platforms or that faced interoperability challenges with other apps in the “return to normal” phase.

The stunning irony, of course, is that our cellphones are already gateways to massive data collection and tracking by private sector companies. Devices and apps siphon personal data continuously and ubiquitously. Early in the pandemic, Google produced de-identified visualizations of people’s movements and activities based on the data it gathers by tracking individuals. Other companies such as Fitbit and Facebook jumped in with their own visualizations based on the data they harvest. Clearview AI volunteered its massive facial recognition database, built from images scraped from social media platforms, for tracking quarantine violators. And, even as these companies continued their routine harvesting of personal data, many individuals balked at much more modest government contact-tracing apps.

The lack of trust in government was also evident in the global demonstrations against systemic racism in law enforcement and the abuse of power by police against racialized individuals. When Minnesota’s Public Safety Commissioner used the term “contact tracing” to refer to investigations of protesters, a link was forged between public anxiety over contact tracing and a broader mistrust of state authorities. Early in 2020, as COVID-19 was taking hold in some countries, news about the quiet adoption of Clearview AI’s facial recognition technology by police services across North America also highlighted a lack of transparency and accountability in technology adoption — and the potential for abuse through technology — that can only undermine trust.

Much of this lack of trust is well-earned. Indeed, private sector companies’ ability to harvest massive quantities of personal data with such ease is in large part the responsibility of governments that have been unwilling to enact adequate privacy protection. Governments have benefited from this rampant data collection; detailed location and communication data in the hands of private sector companies is always just a production order away for state officials. During the pandemic, several governments turned to the stockpiles of private sector location and activity data to supplement their contact-tracing efforts.

As governments lose public trust, the private sector is building its trust capital. Google and Apple’s collaboration on their exposure notification system positioned them as privacy guardians. Companies that provided COVID-19 data visualizations presented themselves as alternative data sources at a time when many governments were struggling to provide pandemic data. Massive demonstrations raising issues of systemic bias in policing have led to major corporations putting the brakes on facial recognition technology before many governments have taken steps in this direction. And, as governments dither about how to rein in devastating online hate speech and disinformation on social media platforms, private sector companies are initiating an advertising boycott to force Facebook to take action.

Governments must learn the lessons of the pandemic and act to regain essential trust. The rebuilding of trust in government in the digital context goes well beyond privacy, of course. Still, this is one place where change is within reach. After all, many governments ultimately made privacy a priority in adopting contact-tracing apps, choosing to build trust rather than push compliance. They may not have had the trust they needed for the app they wanted, but they may, at least, have a road map to get there.

In the post-pandemic future, governments that value trust will take steps to rebuild it. They will enact or reform data protection laws to ensure meaningful rights and real enforcement. They will bolster ethical and human rights-based approaches to technology adoption and use. They will become more transparent, especially with respect to technology — posting source code, sharing privacy impact assessments, and communicating more frankly about expectations and design for technology tools, including artificial intelligence. Governments will pay more attention to ethical and rights-protective development of new technologies — including automated decision making and facial recognition technologies — building trust by creating real limits and enforceable rights. Privacy will stop being the sole measure of government technology initiatives, and a wider lens that looks at impact — in particular, the differential impacts of technological solutions — will emerge.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

Essay Series

We are in the midst of fighting the first phase of the COVID-19 crisis, but it is already clear that the impacts will be manifold and enduring. It is not too early to reflect on the lasting impact that the outbreak is sure to have on global cooperation, globalization, faith in public action and in science, social cohesion, and the trade-off between civil liberties and personal privacy.