It has been well over a decade since the British General Medical Council found that Andrew Wakefield falsified elements of his study linking autism to the measles, mumps and rubella vaccine. Yet the unsubstantiated argument underlying Wakefield’s work, which the discredited physician has since falsely presented as a “cover-up,” continues to spread online in connection with broader anti-vaccine campaigns related to COVID-19.
Meanwhile, the Delta variant of the virus is spreading across the globe at an alarming rate, particularly among unvaccinated communities. As a result, conversations about the problem of vaccine dis- and misinformation are at the forefront of world news. Many of these reports point to the viral spread of misleading, and often completely false, social media content related to the various COVID-19 jabs.
The spread of anti-vaccine content over social media is nothing new. When I first began studying computational propaganda in 2013, my team at the University of Washington was well aware of highly prolific individuals and groups on Twitter and YouTube who spent inordinate effort amplifying inaccurate and even deliberately misleading anti-vaccine content. Much of that material linked back to Wakefield’s bogus work, faulty pseudoscience or outright conspiracy websites.
Now, as the head of the Propaganda Research Lab at the University of Texas at Austin, I see much of the same information — including links to the same studies and sites bearing the same distorted arguments — flowing in relation to the COVID-19 vaccines. This content includes a constant stream of highly sensational, patently untrue stories and memes about how the vaccines contain microchip tracking devices or other secret ingredients designed to harm people. A lot of the misinformation is less over the top, designed instead to sound reasoned: claims that the vaccines weren’t properly tested, or that they somehow “shed” to inoculate or otherwise affect the unvaccinated.
All of this raises the question: Why does this false content about vaccines continue to spread at alarming rates online? Relatedly, why do we continue to see the broad-scale spread of other medical, social and political information campaigns purposefully designed to manipulate and mislead? Of course, part of the reason is related to freedom of expression. This is especially true in countries such as the United States that have robust free speech protections. However, as US President Joe Biden has bluntly asserted, deaths caused by misinformation about the COVID-19 vaccines are intimately tied to the actions of social media companies themselves.
Our current digital communication system — defined by platforms such as Facebook and YouTube — was built to prioritize ad sales at all costs. This has meant that social media companies have optimized their design for consistent, constant engagement at the expense of concern for what kind of information users might be consuming. The logic goes that the longer you spend on a site, and the more engaged you are, the more ads you see and interact with. Social media giants also purposefully built their systems to continuously collect our personal data while we spend time on their platforms, selling this behavioural data to the highest bidders: political campaigns, corporations, governments and even organizations with less-clear intentions and origins. This myopic focus on maintaining attention and harvesting user data has led to serious oversights in design and planning. These oversights have, in turn, led to the rampant spread of disinformation, conspiracy theories and organized propaganda online that adds to growing issues of political polarization, extremism and hate.
Researchers, following a line of rigorous work in science and technology studies and other fields, have pointed to a persistent problem: the encoding of free-market concerns into our information and communications technology systems rather than the prioritization of human rights, democracy and equity. A number of publications, including my own recent book The Reality Game: How the Next Wave of Technology Will Break the Truth, explore the perils of continuing to maintain and proliferate digital media technologies that value rapid innovation over careful design, growth over sustainability, and quantity over quality of content.
Our social media systems, and the algorithms and artificial intelligence behind them, need to be redesigned to prioritize values beneficial to all members of society — such as human rights and equity. There are pragmatic and concrete changes we can make to platform design, financial models and the code itself. Ultimately, it is time for a new type of digital communication tool built on a foundation of social change and informed policy.
A number of authors have written about what it will take to generate new forms of social media and broader digital technology systems — and the solutions are more often social than technical. Jenny Odell offers up the possibility of spaces that build community by re-establishing shared context and truly supporting connection rather than fracturing these things by demanding our constant attention and illicitly harvesting our personal data. Safiya Umoja Noble suggests breaking up informational monopolies, generating sensible public policy and addressing deeper socio-political problems as means of more systematically undoing the oppression we see born out of social media algorithms. Virginia Eubanks calls for a new narrative and politics of poverty that humanizes and supports rather than quantifies and stifles in order to dismantle high-tech-enabled predation of the poor.
A few years ago, during my time at the Institute for the Future in Palo Alto, California, I worked on a collaborative project led by writer, researcher and game designer Jane McGonigal. The project, called the Ethical OS, aimed to produce a tool kit that could help technologists “anticipate the future impact” of the tools they construct — or, as McGonigal aptly put it, “not regret the things [they] build.” During our work, it became clear that the issues associated with today’s technological and media systems were not simply bound to one problem, such as misinformation.
With this in mind, we came up with eight areas we felt mapped out risk zones for designers and others interested in creating better tools for society:
- disinformation and propaganda;
- addiction and the dopamine economy;
- economic and asset inequalities;
- machine ethics and algorithmic biases;
- the surveillance state;
- data control and monetization;
- implicit trust and user understanding; and
- hateful and criminal actors.
In order to understand how we might build new digital media systems that prioritize social concerns before technological or financial ones, we can invert these eight risk zones to discuss their more beneficial opposites. In other words, the next set of social media tools should prioritize the following:
- facts and high-quality information;
- healthy engagement and connection;
- economic and asset equality and equity;
- human rights-oriented algorithms and machine learning;
- mass privacy and user peace of mind;
- data protection;
- explicit, non-predatory terms of service and internal procedures; and
- compassionate and principled actors.
Governmental policy working to curb the harms of social media should also prioritize these ideals. It’s possible, in fact, that democratic governments could use these guidelines to generate policy that would promote a new generation of media technology companies that serve the tenets of democracy rather than those of control. Many of the largest social media firms are so massive and so widespread that they face a serious uphill battle in retrofitting their technology to benefit society rather than fragment it. It’s difficult, to say the least, to determine how to control the flow of information in previously brakeless systems that already host billions of users, who live in numerous countries and speak a wide variety of languages.
Many experts have proposed algorithmic audits as a means of determining what lies beneath the hood, so to speak, of the social media code that prioritizes and delivers content to users. This is a promising option for both extant firms and new companies seeking to build more societally beneficial software. It would also be advisable, though, to institute and support longer-term educational programs at universities, companies, start-up incubators and accelerators, and other entities that teach socially conscious coding and algorithmic accountability. It’s not enough, however, for such organizations simply to promote “ethical” practices in software development and technology design. Such endeavours often use “ethics” as a hollow, subjective catch-all that allows them to evade deeper, clearer commitments to particular freedoms and rights. In the end, we are left with an ethical technology version of “greenwashing” — a phenomenon some experts have deemed “machinewashing.”
Ultimately, those who study the ongoing problems associated with social media often begin to realize the need to focus serious attention on the “social” and less on the “media.” As Noble points out, “an app will not save us.” These are large-scale, long-term problems, and they will require commensurate solutions. So, any new form of social media or digital communication tool that emerges must be grounded in social concerns and informed by both hindsight and foresight. A new media tool isn’t going to single-handedly solve the problem of misinformation or online hate, but it can certainly be encoded with values that help combat and control these issues.