In early 2018, a YouTube employee tried to figure out how much alt-right videos contributed to the company’s success. The employee ran an experiment, creating a “vertical” (the grouping for a certain type of content) for alt-right videos. The experiment showed that potentially radicalizing alt-right content generated as much engagement as music, sports and gaming. A member of YouTube’s executive team told Bloomberg that they did “not recall seeing the experiment.”
This story presents a paradox. If a company employee found evidence that alt-right content was vital to YouTube’s profits, why would the company not investigate further? More broadly, why have social media companies been reluctant to research the potentially negative effects of their products, or to give researchers access to the data needed to study those effects?
An explanation lies in the concept of agnotology — the study of ignorance. Agnotology helps to explain why YouTube did not follow up on its employee’s experiment and why other platforms are similarly leery of investigating the ramifications of their profit-seeking strategies. It captures the counterintuitive idea that there are incentives for platforms to remain ignorant about the impact of radicalizing algorithms or about the number of real people clicking on ads.
Decades of research have explored how knowledge is constructed and wielded for power. Ignorance, too, can be strategized and weaponized. Michael Smithson argued in 1985 that ignorance reduces accountability and enables people to act because they do not know the consequences of their actions. Similarly, companies can act much more freely when they have not investigated the possible results of their actions.
The Upside of Ignorance
There is a long history of deliberately manufactured ignorance. The tobacco industry is an obvious example. Major tobacco companies spread doubt about the detrimental effects of smoking and obstructed studies into them. The firms also sought out and funded scientists who would dispute the harms of smoking. As historians Naomi Oreskes and Erik Conway show in their book Merchants of Doubt, the dangers of other phenomena, such as global warming, dichlorodiphenyltrichloroethane (the insecticide more commonly known as DDT) and the ozone hole, have been similarly denied.
In an edited volume on agnotology published in 2008, Robert Proctor argued that the term deserved more widespread attention. He noted that ignorance “overlaps in myriad ways with — as it is generated by — secrecy, stupidity, apathy, censorship, disinformation, faith, and forgetfulness.” Presciently, he included examples of how new communications technologies could foster ignorance or disinformation. After reading a website spreading misinformation that disputed that HIV causes AIDS, former South African president Thabo Mbeki reduced efforts to combat the disease.
We already have examples of social media companies’ agnotology. For many years, YouTube and other social media platforms focused on the goal of engagement — the amount of time, interactions and video views generated by online users. Prioritizing engagement is a content-agnostic approach: what users watch matters less than how long they watch it. In 2012, YouTube set itself a goal of reaching one billion hours of video viewership per day and redesigned its recommendation algorithm to keep users watching more videos. The more time users spent on YouTube, the more ads they would see and the more profit YouTube would generate. YouTube CEO Susan Wojcicki said in 2018 that “the billion hours of daily watch time gave our tech people a North Star.” YouTube reached its goal in October 2016.
This laser-like focus on engagement precluded investigating potential downsides, such as the types of videos that drove the increased engagement. Although the company has since claimed that it is focused on responsible growth, employees speaking anonymously and former employees such as Guillaume Chaslot have criticized YouTube for failing to consider how optimizing solely for engagement might also optimize for hate.
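To make the content-agnostic point concrete, here is a minimal, hypothetical sketch of engagement-only ranking. It illustrates the general technique rather than YouTube’s actual system; the class, function and numbers are all invented for this example.

```python
from dataclasses import dataclass


@dataclass
class Video:
    title: str
    topic: str                      # e.g., "music", "sports", "conspiracy"
    predicted_watch_minutes: float  # output of some engagement model


def rank_by_engagement(candidates: list[Video]) -> list[Video]:
    # The objective looks only at predicted watch time. Nothing here
    # inspects what a video actually says, so a radicalizing video that
    # holds attention ranks just as highly as music or sports.
    return sorted(candidates, key=lambda v: v.predicted_watch_minutes, reverse=True)


feed = rank_by_engagement([
    Video("Top 10 goals this season", "sports", 4.2),
    Video("What THEY don't want you to know", "conspiracy", 9.7),
    Video("Lo-fi beats to study to", "music", 6.1),
])

for video in feed:
    print(f"{video.predicted_watch_minutes:>4} min  {video.title}")
```

Because the objective never inspects the content itself, learning which kinds of videos dominate such a ranking requires a separate, deliberate audit, which is precisely the knowledge that the incentives described above discourage a platform from producing.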
Agnotology can also provide legal protection. YouTube lawyers apparently told employees not to search proactively for problematic videos. Although the policy was not put in writing, it was meant to protect YouTube from potential legal liability. If YouTube knew about videos spreading conspiracy theories or lies about public figures such as Supreme Court Justice Ruth Bader Ginsburg, it might have to act on them. Although companies currently do not bear liability for content on their platforms, YouTube seemed concerned about the potential regulatory effects of greater intervention. Perhaps not knowing seemed safer.
Platforms’ agnotology also has political dimensions. Platforms are struggling with accusations of “conservative bias” from President Donald Trump and other Republicans in the United States. Facebook commissioned a study on the subject from former Republican Senator Jon Kyl. Kyl’s eight-page report was based on interviews with around 130 conservative groups and individuals, not on data from Facebook; it did not find evidence of conservative bias but still concluded that Facebook’s policies could restrict free expression. Some research, including Jen Schradie’s recent book The Revolution That Wasn’t, suggests that Facebook actually favours conservative activism. Of course, were Facebook to conduct a thorough investigation into the subject and find similar results, it could not release the research — that would be political anathema to the current White House. The best solution for Facebook is not to produce a detailed quantitative study at all. Agnotology has political utility.
An Erosion of Trust
Social Science One — an initiative meant to provide researchers with access to Facebook data — illustrates how slowly such company-foundation collaborations can move, if they work at all. Social Science One’s funders (ranging from the Koch Foundation to the Omidyar Network) have been so frustrated by the lack of access that they plan to close the project unless sufficient data (specifically, URL shares data) is made available by September 30.
Even if more comprehensive data is released by social media companies, governments and civil society will — perhaps rightly — have a difficult time trusting that information. Nicholas A. John tried to investigate Facebook’s claims about the number of friendships made between people from opposing sides of conflicts, such as Israel and Palestine or Ukraine and Russia. At one point, Facebook claimed that nearly 200,000 friendships were made daily on Facebook between Israelis and Palestinians. The number seemed unlikely: only 1.7 million Facebook users lived in the West Bank and the Gaza Strip, so the claim would imply that, on average, each of those users struck up a new friendship with an Israeli roughly every eight and a half days. John spent more than a year going to impressive lengths to verify Facebook’s numbers and understand how they were generated. Facebook never responded to his inquiries and in February 2019 took down the page displaying the friendship numbers John was investigating. Among other things, John’s experience reminds us to be skeptical of unverifiable information created by social media companies.
More than ever, companies have an incredible amount of data about our lives and activities. However, there is much that they do not examine, perhaps for fear of the results or perhaps because they have not prioritized that type of knowledge. This is where governments can step in. Policy makers could push companies to collect and share reliable data with third-party researchers. Governments can help by creating the framework for civil society and researchers to understand not only how platform companies operate but also the investment those companies are making in agnotology.