Data is essential to the success of many technology firms and a key ingredient of prosperity in the digital economy. At the same time, how data is collected and used — and what protections and benefits users receive in return — prompts intense scrutiny of big tech firms and debates about data governance. Recognizing that firms need data to succeed and that citizens need better protections and fairer exchanges to continue to participate in the digital economy, some organizations and firms are proposing a novel approach: paying people for their data.
Pay-for-data arrangements promise to reduce exploitation and encourage firms to be more selective about the data they gather. They also provide a mechanism that allows firms to amass more and higher-quality data should they need it. But it’s not clear that these models can address people’s most pressing anxieties about how firms handle their data, including concerns about privacy, security and misuse. And there is a very real possibility that pay-for-data arrangements might generate new digital divides in the data economy. The approach is worth exploring, but the extent to which it could improve or harm citizens’ interests and trust is far from clear.
From Exploitation to “Data Dignity”
In a 2018 Harvard Business Review article, Jaron Lanier and E. Glen Weyl make a case for something they call “data dignity.” They argue that people have been exploited by tech firms — essentially, deceived into giving valuable data away with little or no compensation from the firms that collect and benefit from it. Instead, they say, people should be paid for the data they share. In turn, people should pay for services that require data from others. The idea is likely to receive even more attention and scrutiny now that American Democratic presidential candidate Andrew Yang has included it in his platform.
The underlying normative argument is that data is a form of labour, and that its extraction without fair compensation is a form of labour exploitation. Technology firms need more and better data about users to improve their prediction machines, which allow them to offer more valuable services to their business clients, many of whom are interested in targeting and persuading more finely segmented markets. But firms need users to transform their thoughts and behaviour into formats that prediction machines can use, whether by uploading videos, creating memes, tweeting or simply liking content on social media platforms. According to some thinkers, those activities are a form of labour or work. And, with few exceptions, that labour is unpaid. Lanier and Weyl want people to be paid for it.
To facilitate these exchanges and protect people from the monopsony power of big tech firms, Lanier and Weyl suggest we create “mediators of individual data,” or MIDs — organizations similar to what others have called data trusts. MIDs would collect and manage user data and negotiate agreements with firms specifying what data the firms can access, its permissible and impermissible uses, security expectations, and data royalties for the people whose data is shared.
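To make the shape of such an agreement concrete, here is a minimal sketch, in Python, of how a MID might represent the terms it negotiates. It is purely illustrative: the class, field names and data categories are hypothetical assumptions, not part of Lanier and Weyl’s proposal, which does not specify any data model.

```python
from dataclasses import dataclass
from enum import Enum


class DataCategory(Enum):
    """Hypothetical categories of user data a MID might manage."""
    BROWSING = "browsing"
    LOCATION = "location"
    SOCIAL_ACTIVITY = "social_activity"


@dataclass
class DataSharingAgreement:
    """Illustrative model of a MID-negotiated agreement.

    Mirrors the elements named above: which data a firm can access,
    permitted and prohibited uses, security expectations, and the
    royalty paid to people whose data is shared.
    """
    firm: str
    accessible_data: set[DataCategory]
    permitted_uses: set[str]            # e.g., {"ad_targeting"}
    prohibited_uses: set[str]           # e.g., {"resale"}
    security_requirements: list[str]    # e.g., ["encryption_at_rest"]
    royalty_per_user_per_year: float    # annual payment, in dollars

    def permits(self, use: str, category: DataCategory) -> bool:
        """Check whether a proposed use of a data category is allowed."""
        return (
            category in self.accessible_data
            and use in self.permitted_uses
            and use not in self.prohibited_uses
        )


# Usage: a hypothetical agreement with a made-up firm.
agreement = DataSharingAgreement(
    firm="ExampleCo",
    accessible_data={DataCategory.BROWSING},
    permitted_uses={"ad_targeting"},
    prohibited_uses={"resale"},
    security_requirements=["encryption_at_rest"],
    royalty_per_user_per_year=300.0,
)
print(agreement.permits("ad_targeting", DataCategory.BROWSING))  # True
print(agreement.permits("resale", DataCategory.BROWSING))        # False
```

Whatever form a real MID took, the point of the sketch holds: the agreement, not the user, would carry the terms of access, use and payment.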
Perverse Incentives and Unintended Consequences
The idea that we should find ways to reduce exploitation and share value more fairly in the digital economy is reasonable. In practice, however, pay-for-data models will do more than compensate people for data they already share; they will create incentives for people and firms to behave differently, sometimes with negative consequences.
There is a risk that pay-for-data models will encourage people to give up more data than they should or otherwise would. Being paid for data one already shares (or has had extracted) seems fairer relative to an exploitative status quo, but should people be incentivized to give up more? In fact, it’s possible that pay-for-data models could exacerbate inequities in privacy protection and open new fronts on the digital divide. Although some analysts predict far lower or far higher returns, most estimates of the dividends people could receive from pay-for-data models fall in the range of $250 to $500 annually per user, depending on digital activity. These amounts are probably too small to encourage people from higher-income households to share more data than they already do, but they might be large enough to sway people from lower-income households to do so. In the absence of better data security and protections against misuse, incentivizing poorer households to expose themselves to more risk is questionable at best.
The pay-for-data approach also risks shaping tech firms’ behaviour in troubling ways. Many data surveillance and extraction activities happen in the dark, leaving users with little to no awareness of what firms are collecting and how. If we restructure the data economy in ways that require big tech firms to pay for data, we also create perverse incentives for them to keep as much of their surveillance and extraction hidden as they can in order to avoid those new costs. This is not to say that all or even most firms will behave this way, but enough have a track record of clandestine data extraction to prompt us to examine whether pay-for-data arrangements might make the situation worse.
Data Markets and Human Dignity
Would paying people for data solve the right problems in the right way? Pay-for-data arrangements are usually framed as a way to ensure people receive fair value for what is already being taken from them. But more than data is being taken. We face privacy violations, surveillance without consent, and manipulation of our attention, behaviour and mental well-being. Paying people for data will not solve these problems. They require stronger regulation and data governance solutions.
As the communications and social theorist Nick Couldry suggests, we need to “change our imagination — to equip people to see that good-faith proposals like data dignity miss the key point of where the assault on dignity comes from.” Indeed, we should consider the possibility that being paid for data undermines, rather than enhances, human dignity by further commodifying our lives — treating us as mere labourers or passive resources to be mined, and turning information about who we are, what we think and feel, and how we behave, into tradable property.