Eighteen months into the COVID-19 pandemic, a palpable restlessness has set in as people everywhere are desperate to re-enter the world and resume their lives. But in this new reality, even old and familiar activities have a new digital dimension. Fancy a European holiday? Be prepared to present a pass or quick response (QR) code from a mobile app when requested while travelling. Hoping to catch a Broadway show in New York City or dinner in L.A.? Planning to return to campus this fall? Odds are you will have to present proof of vaccination through a mobile app or by other digital means to travel, enjoy theatre, dine out or even attend lectures — in fact, New York City will require proof of vaccination for most indoor activities. While few question the legitimacy of the public health measures, there are rightful concerns about the technologies employed in the process, including digital identity-related tools.
The uptake of digital “identity and access management” (IAM) tools and solutions was already dramatically accelerated by the shift to remote life during the pandemic. But as we increasingly present mobile passes, QR codes and other digital credentials to access all manner of venues, activities and experiences in the physical world, the once back-office function of digital identity is beginning to take centre stage in everyday life. Digital identity is no longer confined to usernames, passwords and other login credentials for online services: we now move through the world as “phygital” beings with both physical and digital dimensions, including digital identities. Yet, despite our increasing exposure and vulnerability through the growing ubiquity of digital identity systems and tools, we have few laws or even norms that adequately govern identity systems in this brave new phygital world.
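To ground what “presenting a digital credential” actually involves, the sketch below shows the basic shape of what a verifier’s app does with a scanned QR payload: check the issuer’s signature, then check the claims. The format here is a deliberate simplification (a JSON payload with a detached Ed25519 signature, using the third-party cryptography package, with illustrative names throughout); real schemes such as the EU Digital COVID Certificate wrap a CBOR payload in COSE signing and base45 encoding, but the underlying trust logic is similar.

```python
# Minimal sketch of verifying a signed credential, as a venue's app might after
# decoding a QR code. Simplified format (JSON + detached Ed25519 signature);
# real schemes such as the EU Digital COVID Certificate use CBOR/COSE/base45.
import json
from datetime import datetime, timezone
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Issuance (done once by the health authority): sign the credential payload.
issuer_key = Ed25519PrivateKey.generate()
payload = json.dumps({
    "subject": "A. Person",
    "claim": "vaccinated",
    "expires": "2027-01-01T00:00:00+00:00",
}).encode()
signature = issuer_key.sign(payload)

# Verification (done by the verifier's app):
def verify_credential(payload: bytes, signature: bytes, issuer_public_key) -> bool:
    try:
        issuer_public_key.verify(signature, payload)   # 1. authentic issuer?
    except InvalidSignature:
        return False
    claims = json.loads(payload)
    expires = datetime.fromisoformat(claims["expires"])
    return expires > datetime.now(timezone.utc)        # 2. still valid?

print(verify_credential(payload, signature, issuer_key.public_key()))  # True
```

Note that in this shape the verifier needs only the issuer’s public key, not a call back to a central database, although real deployments differ on exactly that design choice.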
Although digital identity-related technologies are theoretically subject to a complex and intricate set of laws, regulations and other requirements, in practice the rules are anything but clear. As a first principle, individual digital identity tools and solutions are governed by consumer protection, tort, contract, data protection and privacy laws, and other generally applicable bodies of law. But generally applicable laws and regulations that were not designed with digital identity in mind are often ill-equipped to address the specific kinds of challenges that arise out of digital identity systems, especially in so far as those systems incorporate new and advanced technologies such as artificial intelligence (AI) and advanced physical and behavioural biometrics that identify an individual based on voice, keystrokes, or other physical and behavioural traits and patterns (even as modalities such as voice ID are known to exhibit both racial and gender bias). And in some places, such as the United States, where there is no single federal law regulating the collection or use of an individual’s biometric data and no comprehensive federal privacy law, huge gaps remain.
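To make “behavioural biometrics” less abstract, here is a toy illustration of the core mechanic behind something like keystroke-dynamics identification: comparing a live sample of typing-interval timings against an enrolled template and accepting the identity claim if the distance falls under a tuned threshold. The features, distance measure and threshold below are simplifications for illustration only; production systems use far richer statistical or machine-learning models.

```python
# Toy keystroke-dynamics matcher: compare inter-key timing intervals (ms)
# against an enrolled template. Purely illustrative; real systems use far
# richer features and models.
import math

def distance(sample, template):
    """Euclidean distance between two timing-interval vectors."""
    return math.sqrt(sum((s - t) ** 2 for s, t in zip(sample, template)))

ENROLLED_TEMPLATE = [105.0, 98.0, 140.0, 87.0, 122.0]  # averaged enrollment data
THRESHOLD = 25.0  # tuning this trades false accepts against false rejects

def is_same_user(live_sample):
    return distance(live_sample, ENROLLED_TEMPLATE) <= THRESHOLD

print(is_same_user([102.0, 101.0, 138.0, 90.0, 119.0]))  # True: close match
print(is_same_user([150.0, 60.0, 200.0, 40.0, 180.0]))   # False: far from template
```

Unlike a password check, the result is inherently probabilistic: no one types (or sounds) exactly the same twice, so any threshold misclassifies someone, and it is in those error rates that the racial and gender biases noted above surface.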
Moreover, digital identity systems are often deployed through complex and diffuse supply chains combining both business-to-business and business-to-consumer corporate actors. Often, the entities designing and building AI-based identity solutions are not the ones using or deploying them, so if a system malfunctions or delivers an erroneous result, the path of recourse isn’t clear. The chain of responsibility and accountability for privacy and security often breaks down, leaving individuals with limited control over how their information is used or any decisions that are made, challenging core data protection and privacy principles of fairness, transparency and accountability, among others. Matters are further complicated by the relationships between public sector entities and private sector vendors procured to provide ID services.
Further, certain identity systems or solutions may have their own governance frameworks to administer their operation. For example, public sector ID schemes may be governed by government-enacted laws or regulations, such as India’s Aadhaar Act, which regulates the operation of India’s biometrics-enabled national identity system and the use of each citizen’s unique Aadhaar identification number, or Estonia’s Identity Documents Act, which mandates national electronic identity cards and regulates their use by the public and private sectors. Aside from Estonia’s, laws and regulations enacted to govern national identity and public sector-sponsored ID schemes mostly leave private sector solutions unregulated or fail to adequately account for the nature of tools acquired from the private sector, even as those tools grow in popularity and importance.
Finally, there are broader identity-related governance frameworks that apply more generally to identity solutions within a given ecosystem or region. For example, Australia’s Trusted Digital Identity Framework (TDIF) outlines rules and standards that apply to all accredited identity service providers within Australia’s digital identity system. Similarly, Europe’s eIDAS Regulation is generally applicable to “notified” trust services such as e-signatures, digital certificates, electronic seals and other identity services within the European Union. Frameworks like the TDIF and eIDAS are designed to address mutual recognition and interoperability of ID services across participating member states and entities. And, in the United States, the REAL ID Act sets minimum security and issuance standards that state-issued driver’s licences must meet in order to be accepted for federal purposes.
These generally applicable ID governance frameworks tend to focus on the technical and technocratic requirements of a discrete identity-related product or service at the micro-level — for example, a vaccination certificate or driver’s licence — rather than on the nature and role of digital identity technologies as comprising complex socio-technical systems with political and economic dimensions.
And yet, when digital identity systems and solutions fail, they can have a range of tangible consequences for people. While existing law and governance frameworks tend to focus on the privacy and security of the data implicated by these systems, they largely ignore the risks to the people behind that data. For example, errors or malfunctions in proof-of-vaccination or certification schemes for COVID-19 could deny people access or entry to all manner of places and activities, including at borders; inflict distress or reputational damage; result in lost income or other financial loss; or threaten the personal health or safety of individuals.
Notwithstanding the serious implications of these systems for personal privacy, security and autonomy, existing digital ID-related governance frameworks fail to adequately address these kinds of tangible harms. More importantly, they almost entirely neglect questions about the role or legitimacy of digital ID systems and solutions in the first place, including when and why identification should be required, or the impact of certain business models and commercial incentives on underlying ID systems. Technologies such as contact-tracing apps deployed in response to the COVID-19 pandemic (so-called “pandemic tech”) provide a clear example of the limitations of existing frameworks.
Despite the inadequacy of governance schemes for digital identity, making us machine-readable humans and managing our identity and access through digital identity infrastructure is big business and growing bigger each day. The global market for IAM tools and solutions is expected to reach US$29.79 billion by 2027, while the global identity verification market is expected to reach US$17.8 billion by 2026. The market for cloud-based identity as a service (IDaaS) worldwide is anticipated to grow at a compound annual growth rate of 22 percent from 2020 to 2027.
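To put that last figure in perspective, the arithmetic of compound growth is simple: a market compounding at 22 percent per year over the seven years from 2020 to 2027 grows by a factor of roughly four. A quick sketch (the base-year size below is a placeholder, since the source reports only the rate):

```python
# Implied effect of a 22 percent CAGR over 2020-2027 (seven compounding years).
# The base-year market size is a placeholder; the cited source gives only the rate.
cagr = 0.22
years = 2027 - 2020
multiple = (1 + cagr) ** years
print(f"growth multiple over {years} years: {multiple:.2f}x")  # ~4.02x

base_2020_usd_billions = 1.0  # placeholder base-year size
print(f"implied 2027 size: US${base_2020_usd_billions * multiple:.2f} billion")
```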
While demand for enterprise-grade IAM services was already growing before the pandemic, consumer-facing identity services are also surging in its wake. Even already dominant players like Apple, Google, Amazon and Microsoft are expanding their digital identity services. For example, Apple and Google, which together control more than 99 percent of the global market share for smartphone and mobile operating systems, recently introduced mobile wallets to store digital versions of driver’s licences and other identity credentials. Amazon and other companies have also capitalized on pandemic-induced anxiety and germaphobia to accelerate adoption of contactless and touchless payment and digital identity solutions, even as scientists have determined that the probability of catching or transmitting the virus from surfaces is very low.
Private sector tools frequently incorporate new and advanced technologies — such as AI, machine learning, and blockchain or distributed ledger technology, as well as advanced biometrics — that are poorly understood, untested at scale and often not subject to sufficiently clear legal or governance frameworks. To address the governance gaps, the private sector increasingly relies on technical standards and the use of privacy-enhancing technologies. Moreover, democratic processes and decision making about the use of emerging technologies for digital identity, including in the case of pandemic tech, are often outsourced to technical standards bodies, the private sector and industry consortia. But technical solutions for privacy and security, even if baked into technical standards, fail to address the way that ID systems and solutions operate in practice, and the potential consequences for people.
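To illustrate both the promise and the limits of such technical fixes, the sketch below shows one widely discussed privacy-enhancing technique, selective disclosure, in a deliberately simplified and hypothetical form (loosely in the spirit of approaches such as SD-JWT): an issuer commits to each attribute with a salted hash, and the holder later reveals only the attribute a verifier actually needs.

```python
# Simplified selective-disclosure sketch: the issuer commits to each attribute
# with a salted hash; the holder later reveals one attribute and its salt, and
# a verifier checks it against the commitments. Hypothetical and illustrative
# only; not a production design.
import hashlib
import os

def commit(name: str, value: str, salt: str) -> str:
    return hashlib.sha256(f"{salt}:{name}:{value}".encode()).hexdigest()

# Issuance: commit to every attribute; in a real scheme the issuer would then
# sign the set of commitments with its private key.
attributes = {"name": "A. Person", "date_of_birth": "1990-01-01", "vaccinated": "yes"}
salts = {k: os.urandom(16).hex() for k in attributes}
commitments = {k: commit(k, v, salts[k]) for k, v in attributes.items()}

# Presentation: the holder discloses only the one attribute the verifier needs.
disclosure = {"vaccinated": (attributes["vaccinated"], salts["vaccinated"])}

# Verification: recompute each disclosed commitment and compare.
for name, (value, salt) in disclosure.items():
    assert commit(name, value, salt) == commitments[name]
    print(f"verified {name} = {value}; all other attributes remain hidden")
```

Note what the cryptography does not address: nothing in it limits when, why or how often a verifier may demand a disclosure in the first place, which is precisely the governance gap at issue here.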
From a socio-technical and political economic standpoint, it is important to consider that privately owned and operated tools and solutions typically feature profit-maximizing business models and are driven by commercial incentives that may threaten the privacy, security and other fundamental rights of individuals and communities. For example, many digital identity schemes feature “pay per verification” fee schedules that may incentivize the overuse of identity credentials, encouraging demands for them where they were not previously required or necessary. Just as we have learned with respect to social media companies, business models can have serious consequences for civil and human rights. And yet, at present, there is not only an absence of legal or governance frameworks to address the commercial incentives behind ID schemes, but also virtually no public conversation on the topic.
We are at an inflection point as we enter a phygital world marked by ubiquitous digital identification tools. Through biometrics, we no longer have to prove that we have or know something but rather that we are something, namely, that we are who we say we are. As we automate the tools and systems that either confirm or deny claims we make about ourselves, it is becoming critical to articulate the norms and the rules for the operation of these systems in our everyday lives. Privacy and data protection laws are foundational but insufficient. We need to identify specific governance frameworks that go beyond the technical configurations of ID technologies to address their socio-technical dimensions, underlying commercial incentives, and the limits of when and where ID can be required. Without accounting for these dynamics, we risk the erosion of anonymity and autonomy in a fully phygital world.