In the 1990s and early 2000s, discussions raged in the United Kingdom about how media might be harming young women. Fashion magazines were condemned for digitally altering models’ photographs to convey unrealistic body images that could encourage eating disorders. By 2010, there was pressure on the government to introduce labelling on airbrushed photographs. The Girlguiding UK organization joined in, noting that such doctored celebrity shots fostered “damaging and unrealistic pressures” on young women to change their looks or diets in destructive ways.
Mass-market tabloids such as the Daily Mail often featured stories about young women who had become anorexic after seeing overly thin models in magazines. One headline in 2012 read, “My anorexia was fuelled by celebrity magazines: Victim demands ban on airbrushed photographs.”
These assertions were backed by scientific evidence. In 2001, a study of teenage girls published in the American Journal of Health Education found a moderate positive correlation between reading women’s health and fitness magazines and “eating-disordered diet methods.” Other studies showed how unrealistically thin models contributed to depression in many women by presenting an unattainable ideal of female beauty.
The British government conducted studies, but no regulation emerged. Other countries acted. In 2012, Israel became the first country to regulate the use of image editing in fashion photography and advertising. European countries followed suit. In 2017, France passed a law that mandated labelling on any commercial photos that had been airbrushed or digitally altered.
This was progress, but it came at a glacial pace. The image-editing software Photoshop was invented in 1987 and released commercially in 1990. Beyond airbrushing, there are countless examples of other types of photo manipulation. In 1994, for instance, Time magazine darkened O. J. Simpson’s mugshot on its cover, prompting allegations of racism.
The airbrushed images, and the slow movement toward regulation, remind us that we have long confronted the question of whether and how to regulate media manipulation. Answering it requires strong evidence about the effects of manipulation and about the remedies. This is more complicated than many assume.
In some cases, such as airbrushing, cause and effect were relatively clear: altered images contributed to unrealistic standards and physical harm. Current debates about harm caused by media take place on shakier ground. Victoria Baines recently wrote a lengthy assessment of the United Kingdom’s white paper on online harms. Baines argued that governments, civil society and social media companies have cooperated and made substantial progress in finding and removing child sexual abuse material. But progress occurred because those groups reached a “consensus in the definition of harm and the appropriate response.” Baines cautioned that we do not have similar levels of evidence (or even the available data sets) to understand all the other harms listed in the white paper, including spam, data protection and online bullying.
This is why transparency from social media companies matters: we cannot regulate responsibly if we do not have the relevant data. In an attempt to address this very problem, France recently proposed an approach that mandated “transparency by design” from the major social media companies. That approach envisions an ex ante regulator that would enable greater transparency from the companies and more involvement from civil society. The proposal followed a unique experiment in which French civil servants were embedded at Facebook for several months.
Transparency will require more than the current company transparency reports, which list the number of posts taken down globally per quarter in categories such as hate speech, along with the percentage of those posts detected by AI. Even the absolute numbers leave key questions unanswered. Would an increase in takedowns indicate better enforcement, more hate speech on the platforms, or simply a larger user base? Might takedowns be less effective than reducing the virality of certain posts? Do takedowns encourage posters to change their behaviour, or do they breed resentment? What are the societal effects?
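To make that first ambiguity concrete, here is a minimal numerical sketch in Python; the quarters, takedown counts and user figures are entirely invented for illustration. It shows how a platform’s takedown count can rise from one quarter to the next even while takedowns per user fall, so the headline number by itself says little about enforcement quality or the prevalence of hate speech.

```python
# Hypothetical quarterly figures (all numbers invented for illustration)
# showing why an absolute takedown count is ambiguous on its own.

quarters = [
    # (quarter, posts_taken_down, monthly_active_users)
    ("Q1", 4_000_000, 1_500_000_000),
    ("Q2", 5_000_000, 2_000_000_000),
]

for name, takedowns, users in quarters:
    per_thousand = takedowns / users * 1_000  # takedowns per 1,000 users
    print(f"{name}: {takedowns:,} takedowns, "
          f"{per_thousand:.2f} per 1,000 users")

# Q1: 4,000,000 takedowns, 2.67 per 1,000 users
# Q2: 5,000,000 takedowns, 2.50 per 1,000 users
#
# The raw count rose 25 percent, yet the per-user rate fell: the
# headline number alone cannot distinguish better enforcement, more
# hate speech or simply a larger user base.
```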
As a first step, governments could mandate more transparency from social media companies, which would help researchers, governments and civil society examine the effects of social media. Careful media regulation is necessary, and it must consider the real effects of content, whether a Facebook post or an airbrushed magazine image, rather than the assumed effects.
Unfortunately, decades of research on media effects show that there are no simple mechanisms for identifying cause and effect. Reading one article on terrorism does not make someone a terrorist, for example, but frequent exposure to particular types of media can change people’s world views. Media effects depend on a person’s character, age, gender, upbringing, surroundings and even their mood at the time of consumption. It is also hard to identify which actions are inspired by media alone; people are influenced by factors beyond media, including friends, family and the experiences of everyday life.
Of course, social media is, at least in part, a site of new and amplified manipulation, bias and hatred. Social media regulation thus faces a challenge: addressing what is new versus what is human. Humans have always been subject to a slew of cognitive and social biases, but social media’s clever use of algorithms and other techniques to profit from those biases is new. Relative to traditional media, social media more consistently stokes anger, an especially problematic emotion: humans tend to be less analytical and more susceptible to disinformation when angry.
Today, the example of airbrushed magazine images remains instructive; three lessons in particular apply to current discussions about online influence and harm. First, social media has caused new problems, but problematic practices in media are not new. Second, governments need to find ways to support research and enable access to credible evidence; this may offer a good opportunity for cooperation between like-minded governments. Third, where harms are clear and demonstrable, policy makers cannot wait as long as they did on airbrushing to implement appropriate solutions.
Evidence-based policy needs good evidence, and such evidence was available when the unrealistic standards perpetuated by fashion magazines came under scrutiny. Today, even in an era of mass data collection, evidence about the impact of online information is lacking. Policy will surely suffer as a result, but citizens, left without a safe and well-regulated digital environment, could take the biggest hit.