In the summer of 2016, just a few months before the US presidential election, Facebook fired the humans who curated its trending news module. By August, the service was fully automated, and it produced strange results, highlighting false stories and other bizarre articles that didn’t fit the category of “news.” Effects aside, altering a major news-focused feature a few months before an election seems rather reckless. To date, Facebook has not given a clear reason for that decision, nor has it revealed whether it considered the potential consequences.
Other examples of “moving fast and breaking things” – a favourite Silicon Valley motto – abound. Around 2013, Facebook entered Myanmar, a country where most people had little or no experience with the internet. Facebook neither established offices there nor hired content moderators who could read Burmese. As hundreds of thousands of Rohingya were violently expelled from Rakhine State in 2016 and 2017, generals and others used Facebook widely to spread hateful and genocidal messages against the Muslim Rohingya in the predominantly Buddhist country. An independent human rights assessment commissioned by Facebook found in 2018 that the platform had been used to “incite offline violence” against the Rohingya. A question remains: why wasn’t such an assessment conducted before Facebook entered the country in the first place?
Silicon Valley companies often roll out new features without considering the downstream effects. Only recently, and mainly since the 2016 US election, have platforms started to consider how to balance growth with social responsibility. Yet features and remedies are still often launched suddenly and without consultation. YouTube abruptly announced earlier this year that it would feature Wikipedia boxes on videos with questionable content; it seems that Wikipedia itself did not even know about the initiative. Again, it is unclear whether YouTube conducted any research into the possible effects of those boxes, or whether it has examined their efficacy since. Even more recently, Twitter announced that it would ban all political ads, but it has not yet released any documents defining what will be considered “political.”
Alongside combatting companies’ strategic production of ignorance (a practice studied under the name agnotology), we might consider how to change companies’ approach more generally. In other areas, policy makers have adopted the precautionary approach, which, simply put, means “first, do no harm”: the negative consequences for the broader public must be considered before something new is introduced. The approach is most commonly applied in health care, food systems (as in genetic modification) and environmental policy, and it places the burden of proof on those introducing something new. They have to demonstrate that the new initiative will not harm the broader public.
Applied to platforms, the precautionary approach could take several forms.
Risk assessments: First, companies could conduct risk-based assessments, as is common for large-scale infrastructure projects. No engineer builds a bridge without calculating its stability. If platform companies want to be our online infrastructure, we might demand the same level of care that we expect of physical infrastructure.
New feature trials: Second, companies could conduct small-scale trials of new features. These would be akin to medical trials, which test drugs for safety and efficacy on a small number of informed and consenting volunteers before a drug is brought to market. Medical trials are heavily regulated and hugely expensive, and they take many years. But they have also helped to prevent severe and unintended side effects, such as the birth defects caused by thalidomide, which was prescribed to pregnant women for morning sickness in the late 1950s and early 1960s.
Privacy: Third, the precautionary approach can help with privacy. Several municipalities have started to adopt this approach toward facial recognition technologies. San Francisco, for example, banned the use of facial recognition software by police and other agencies in May 2019. Scholars such as Joy Buolamwini and Timnit Gebru have warned that new facial recognition systems could exacerbate biases against marginalized groups and visible minorities in the real world. The same holds in other realms of the internet: Safiya Noble has explored how search engines amplify racism when suggesting search terms and serving advertisements, to give two examples. The precautionary principle would ask whether such systems reinforce and exacerbate offline biases, and it would push companies not to introduce a product until its potential side effects had been fully considered.
We would be starry-eyed to believe that one principle could solve all the problems of platform governance. As with any principle, this one has some potential downsides.
First, even if governments used the precautionary principle to require risk assessments, those assessments would not be foolproof and could be gamed. The opioid crisis has shown that medical trials do not always prevent devastating outcomes. At least, though, companies can be prosecuted for wrongdoing – in this case, for falsely claiming that opioids were not addictive.
Second, the principle can, in theory, do more harm than good. A recent paper found that Japan’s decision to stop using nuclear power after the Fukushima Daiichi reactor disaster raised energy prices dramatically. Between 2011 and 2015, more people died from the cold because they could not afford to heat their homes. The paper’s three authors argue that these deaths outnumber those caused by the Fukushima accident itself.
Third, the precautionary principle can entrench big players and stifle innovation. If risk assessments are expensive, only the larger companies will be able to afford them, and operating under the precautionary principle could make entry into a market more difficult because the start-up costs would be higher.
Despite these concerns, a new approach is warranted. A few more months to trial a product before launch seems reasonable when we think about the suffering of the Rohingya or the potential for new ad policies to affect elections. “Move fast and break things” has long been the motto of Silicon Valley. Let’s make sure that democracy is not on the list of things that get broken.