Part of my job involves doing a large number of academic lectures, public talks, policy briefings and media interviews about the growing harms presented by digital technologies and what we might be able to do about them. The volume of these engagements has increased dramatically due to the COVID-19 pandemic, as we are no longer restricted by the laws of physics. It’s just Zoom after Zoom.
There are three questions I get at these engagements more than any others: Do I use Facebook (not really); what's the policy solution (there are at least a dozen; there is no silver bullet); and, most commonly of late, what do I think of the Netflix documentary The Social Dilemma?
It is safe to say that no other book, piece of journalism or government declaration about the harms of big tech has had greater reach than this documentary. Released in 2020, it has reportedly been viewed by 100 million people, streamed in 190 countries and 30 languages. It is to big tech what the 2006 documentary An Inconvenient Truth was to climate change. The Social Dilemma has been remarkably successful in raising awareness about the core problems with big tech. For this it must be applauded.
But among the journalists, academics and policy makers working on the challenges posed by digital technology, The Social Dilemma was received with some skepticism. Some worried that by focusing on the power of big tech to determine our behaviour, the film not only attributed too much power to these systems but also, in so doing, strengthened rather than resisted the market logic of the companies themselves. Others were concerned that the documentary offered tech solutions to problems created by tech. Still others pointed out the irony of the movie's use of the very clickbait strategies it deplores.
The most powerful critique of The Social Dilemma, however, relates to what wasn’t said and who wasn’t included in the film. The protagonists of The Social Dilemma are a group of former employees of dominant tech companies, earnestly grappling with the implications of what they built. Although there is power in these sorts of moral reckonings, by featuring the voices of the mostly white men who became enormously wealthy building the products they now denounce, The Social Dilemma ignores the voices of experts who have long studied, understood and articulated these very harms. In short, it leaves out the people who saw the problem when those being interviewed were creating it.
While recent instances of algorithmic bias have renewed interest in the ethics of tech, a canon of work that looks at the relationship between technology and society through the lens of race often gets left out of the conversation. Safiya Noble, Ruha Benjamin, Meredith Broussard, Virginia Eubanks, Joy Buolamwini, Charlton D. McIlwain, Arlan Hamilton, Lisa Nakamura, Wendy Hui Kyong Chun and Simone Browne — to name only a few — have been far ahead of the curve in seeing and exposing the downside risks of tech, because they understand the communities that are most affected by the risks of these technologies. And it is no surprise that many of these academics, researchers and writers are women of colour.
Back when tech journalism was largely focused on gadget reviews, when many scholars (including myself) were studying the benefits of technology, and when many policy makers were celebrating the digital economy rather than regulating it, this research community was documenting the acute risks of the technological systems being built. There is no doubt that if journalism, academia and governments were more diverse, then and now, they would have come to these concerns far sooner.
I recently spoke to Mutale Nkonde for my podcast, Big Tech. Nkonde is the founder of AI for the People, a non-profit that seeks to challenge the narratives around the assumed social neutrality of machine-learning technologies. Nkonde has been one of the leading policy voices in the United States fighting for accountable and fair governance of big tech. She has been remarkably successful in developing legislation in a country that generally treats big tech with kid gloves.
Nkonde has been involved in three major policy campaigns: the Algorithmic Accountability Act, the DEEPFAKES Accountability Act, and the No Biometric Barriers to Housing Act. Introduced in 2019, these three bills, which address, respectively, the biases of algorithms, the ways deepfake videos can be abused and the dangers of facial recognition software, deal with harms that are most clearly seen through a lens of racial justice. The artificial intelligence and machine-learning systems that power algorithms and underlie deepfake and facial recognition applications are products of and subject to the same systemic and personal biases and inequities that skew the wider world. And at times they can even amplify them.
While these technologies’ harms may be most acutely felt by people of colour, their impact on society extends far more broadly. When facial recognition leads to a false arrest, when an AI sentencing algorithm skews prison terms, or when a deepfake shows a politician lying, it not only harms those directly implicated but also undermines trust in our public safety, judicial and political institutions. We therefore need to govern big tech and the technologies these companies are developing not only because they jeopardize the rights of racialized communities and vulnerable populations, but because, in doing so, they undermine the integrity of democratic societies. Amplifying the diverse voices of those with the most knowledge of technology's injustices points a way forward.
By all means, watch The Social Dilemma and get angry. But when looking for solutions, listen to those who live through and best understand the harms caused by big tech.
This article first appeared in the National Post.