On March 2, Google made waves in the online advertising world by introducing the beta version of its browser Chrome 89, featuring new “Web crowd and ad measurement” tools. The tools are components of Google’s “Privacy Sandbox,” a project first announced in August 2019 as an experiment to explore replacements for third-party cookie-based advertising in “a secure environment for personalization that also protects user privacy.” Translated, that means Google will build its own set of tools that let advertisers target ads behaviourally through browser-based APIs, without giving them direct access to individual users’ personal details. The Privacy Sandbox has included proposals for an array of open standards identified by bird-themed acronyms — including PIGIN, SPARROW, PELICAN and TURTLEDOVE — which are technically complex and continually evolving, making the details difficult to pin down. But this complexity also obscures the real issues at stake — namely, the further enclosure of digital spaces and the limits of our ability to effectively govern companies of Google’s size and scale.
The wider world has been focused on the specifics of Google’s sandbox, including what the changes mean for online advertising. The phase-out of third-party browser cookies is getting the most attention, specifically through Google’s proposed use of Federated Learning of Cohorts (FLoC), an unsupervised machine-learning algorithm for clustering people into groups for ad targeting based on their recent browsing history. One prominent advocacy group characterized FLoC as Google’s attempt to “reinvent behavioral targeting” using something akin to a behavioural credit score, with high risks of discrimination. To be clear, Google is not ending tracking; it’s pivoting to more opaque methods. Nor is it imagining an alternative to the behavioural-advertising-based internet that has been core to its business and growth so far. So, what’s really going on?
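Part of the answer is mechanical. In Chrome’s origin trial, FLoC reportedly derived each cohort ID with a SimHash-style locality-sensitive hash over the domains a user visits, so that people with similar browsing histories land in the same cohort. The sketch below illustrates only that idea; the hash function, bit width and inputs are simplified assumptions for illustration, not Google’s implementation.

```typescript
// Illustrative sketch of SimHash-style cohort assignment. Everything here
// (FNV-1a hash, 16-bit fingerprint, raw domain strings as input) is an
// assumption chosen for illustration, not Chrome's actual FLoC code.

// A small 32-bit string hash (FNV-1a).
function fnv1a32(input: string): number {
  let hash = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193);
  }
  return hash >>> 0;
}

// SimHash: every visited domain "votes" on each bit position, so users
// whose histories overlap heavily tend to end up with the same
// fingerprint, i.e., the same cohort ID.
function simHashCohort(visitedDomains: string[], bits = 16): number {
  const votes = new Array<number>(bits).fill(0);
  for (const domain of visitedDomains) {
    const h = fnv1a32(domain);
    for (let b = 0; b < bits; b++) {
      votes[b] += (h >>> b) & 1 ? 1 : -1;
    }
  }
  // Majority vote per bit produces the cohort fingerprint.
  return votes.reduce((cohort, v, b) => (v > 0 ? cohort | (1 << b) : cohort), 0);
}

// Two users with largely overlapping histories tend to share a cohort ID.
console.log(simHashCohort(["news.example", "shoes.example", "travel.example"]));
console.log(simHashCohort(["news.example", "shoes.example", "banking.example"]));
```

The design intent is that no single-user identifier leaves the browser, only the shared fingerprint; but, as discussed below, that guarantee depends entirely on how many people share each fingerprint.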
Experts believe that Google’s initiative is motivated by the growing spectre of regulation, led primarily by Europe with its General Data Protection Regulation in 2018, its forthcoming ePrivacy Regulation, and a suite of other legislative reforms, including the draft Digital Services Act and Digital Markets Act. The French data protection regulator recently fined Google US$120 million for the unlawful use of tracking cookies, and Google remains under investigation by data protection authorities in Ireland and the United Kingdom for its cookie practices. Consumer privacy efforts are also picking up in the United States, led by the California Consumer Privacy Act and its successor, the California Privacy Rights Act, set to take effect in January 2023, as well as a number of forthcoming privacy laws in Virginia and other states.
And the Privacy Sandbox is just one of several changes that Google has announced in recent years in the name of user privacy. Others include updated settings and controls for its Google Analytics products, adjusted Application Programming Interface (API) permissions to block certain third-party browser extensions, the discontinuation of its Trusted Contacts emergency location-sharing app, and the automatic deletion of trash in Google Drive after 30 days. According to Google, the changes are motivated by a desire to keep pace with users’ expectations for privacy online. Indeed, rival browsers such as Firefox and Safari blocked third-party cookies several years ago. But with Chrome still controlling nearly 64 percent of the browser market, this is unlikely to be the full story.
In fact, on closer inspection, many of the changes announced by Google are not as privacy-enhancing as they seem. For example, the upgrades to Google Analytics give advertisers expanded artificial intelligence (AI) and machine-learning-powered insights, deeper integration with Google Ads, and better cross-device measurement capabilities, reminding us that Google’s customers are advertisers, not people. Similarly, the renewed efforts to block third-party browser extensions leave it to Google’s discretion whether pro-privacy, ad-blocking extensions are “malicious” or not. It’s also not clear how redirecting users from a small list of trusted contacts to broadcasting their location data via Google Maps is a better deal for privacy. The same is true of proposals such as FLoC, where privacy depends on the strength of the clustering algorithm and the size of each cohort: a cohort that is too small or too distinctive makes its members easy to single out, ultimately putting minorities and marginalized individuals at heightened risk.
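That dependency is essentially k-anonymity: a cohort ID is only safe to expose if enough people share it. A minimal sketch of such a size gate follows, with the threshold and data shapes assumed purely for illustration; the FLoC proposal described server-side checks along these lines, but the real mechanism is Google’s to define and change.

```typescript
// Illustrative k-anonymity gate. The threshold below is an assumption for
// illustration; the proposal's actual thresholds and counting pipeline are
// internal to Google.
const K_ANONYMITY_THRESHOLD = 2000;

// cohortSizes stands in for aggregate, server-side counts of users per cohort.
function exposableCohort(
  cohortId: number,
  cohortSizes: Map<number, number>
): number | null {
  const size = cohortSizes.get(cohortId) ?? 0;
  // Small cohorts are suppressed: the fewer people who share an ID,
  // the easier it is to single any one of them out.
  return size >= K_ANONYMITY_THRESHOLD ? cohortId : null;
}

// Cohort 411 is large enough to expose; cohort 7 is not.
const sizes = new Map<number, number>([[411, 150_000], [7, 42]]);
console.log(exposableCohort(411, sizes)); // 411
console.log(exposableCohort(7, sizes));   // null
```

Note what such a gate cannot do: a cohort can pass a size check and still correlate strongly with a sensitive trait, which is exactly where the discrimination risk comes in.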
But these measures, along with the Privacy Sandbox, do share something else in common — they channel more and more activity into Google’s own first-party ecosystem, routing everything through Google’s APIs and automated tools. Google claims that the changes are designed to ensure that “the incredible benefits of the open, accessible web continue into the next generation of the internet,” but the title of Google’s original announcement — “Building a More Private Web” — is, in some ways, more apt than it seems: it signals a shift from the “open Web” to a series of private “walled gardens” or digital fiefdoms, further entrenching the dominance of the companies that own and control these ecosystems. Who needs third parties when Google is the first and only party?
In this way, Google’s “privacy” overhaul is significant because it represents a broader shift toward the enclosure of digital spaces through the use of privately owned and opaque automated tools. And Google is not alone. In 2019, at Facebook’s F8 developer conference, Mark Zuckerberg famously said that “the future is private” and added: “Over time … a private social platform will be even more important to our lives than our digital town squares.” Likewise, in the name of user privacy, Apple recently rolled out additional requirements for its App Store, triggering accusations of anti-competitive conduct from competitors and scrutiny from regulators around the world.
Google itself is under increasing scrutiny from antitrust and competition authorities around the world. The United Kingdom’s Competition and Markets Authority recently opened an investigation into Google’s Privacy Sandbox after publishing a study last July about the impact of digital platforms on online advertising. Google is also being sued by the United States Department of Justice for allegedly abusing its online search and advertising dominance; it’s being separately investigated by the attorneys general of Texas and several other states. The company also faces scrutiny from the Australian Competition and Consumer Commission. While important, such post hoc antitrust and competition measures — in particular, narrow ones focused on search or cookies — are unlikely on their own to solve the problem of enclosure.
In the end, Google’s Privacy Sandbox is important not so much for its phasing out of third-party cookies or changing its browser as for what it foreshadows. After all, we are quickly moving into a digital world that is less browser-dependent, with mobile ad spending representing 68 percent of all digital ad spending in 2020, and growing. Smart and Internet of Things (IoT) devices continue to proliferate, with non-graphical user interfaces that include voice activation and speech recognition, gesture and motion detection, and other forms of human-computer interaction. Google, too, is expanding its non-search business lines, including through IoT devices, AI-driven life sciences and infrastructure such as Google Fiber and Google Cloud.
This shift also exposes the limits of our ability to effectively govern companies of Google’s size and scale with existing tools. Replacing third-party cookies with FLoC is a prime example. Unlike cookies, which are easier for regulators to audit, clustering algorithms and other AI and machine-learning-driven techniques are more opaque and much harder to scrutinize. They also make it easier for Google to evade laws that, despite being drafted to be technology-neutral, are showing their limits. When and how can Google modify the clustering algorithm? Based on what factors? And how will we even know? This opacity widens asymmetries of information and knowledge, deepening the epistemic crisis we face. In turn, it becomes even harder to hold companies accountable for the risks, uses and harms resulting from their activities.
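The auditability gap is visible even at the interface. A tracking cookie is a legible artifact: an auditor can read its name, owner and value straight out of the browser. Under FLoC, a page script sees only an opaque cohort number, while the mapping from browsing history to that number lives inside the browser vendor’s code. Here is a sketch of the contrast, assuming the `document.interestCohort()` API shape described in the FLoC explainer for the Chrome 89 origin trial:

```typescript
// What an auditor can see under each model, sketched for illustration.
//
// Third-party cookie: the identifier and its owner are visible in the
// browser or on the wire, e.g. in a response header:
//   Set-Cookie: _tracker_id=abc123; Domain=.adtech.example
//
// FLoC: the page only ever receives an opaque cohort number. The API
// below follows the shape in the FLoC explainer; it is not part of the
// standard DOM type definitions, hence the cast.
async function readCohort(): Promise<string | undefined> {
  const doc = document as Document & {
    interestCohort?: () => Promise<{ id: string; version: string }>;
  };
  if (!doc.interestCohort) return undefined; // browser without FLoC
  const { id } = await doc.interestCohort();
  return id; // e.g. "21454": meaningless without Google's internal mapping
}
```

Whether cohort “21454” means running-shoe shoppers or something far more sensitive is not something the number itself, or any regulator reading it, can reveal.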
It is also worth reflecting on Google’s choice in naming its project the Privacy Sandbox. We typically associate the term “sandbox” with a safe space designed and run by a public authority or regulator, one that lets industry experiment with products or services that don’t meet existing legal or regulatory requirements without facing the normal consequences. Such sandboxes are controversial tools, widely criticized by consumer groups for circumventing formal rulemaking procedures and allowing companies to evade the law. It’s pretty clear that big tech firms want to make their own laws. They already behave like quasi-sovereign entities, taking on a host of quasi-sovereign functions: “minting digital currencies, verifying digital identities, and even building cyberweapons,” as one expert recently put it.
In this way, the Privacy Sandbox is only a test. But it’s an important test of our ability and willingness to hold large technology companies accountable to democratic institutions and oversight.