Gender-Based Violence Is Built into Our Laws, Minds, Cultures and Media

Combatting it requires a multi-dimensional approach that transcends traditional boundaries in research, academic disciplines and policy.

April 14, 2022
Demonstrators rally for women’s rights and equality ahead of International Women’s Day in Jakarta, Indonesia, March 4, 2017. (REUTERS/Darren Whiteside)

Tech-facilitated gender-based violence (TFGBV), like many problems that bedevil human existence, can be characterized as a “wicked problem.” Wicked problems have certain characteristics that pose difficult challenges to researchers, policy makers and tech companies. Not only is there no definitive formulation of the problem, but there is also no definitive solution, no opportunity to learn by trial and error, and no ultimate test of any particular solution.

Conceiving of TFGBV as a wicked problem suggests that dealing with this phenomenon can be complex and messy. No universal definition of TFGBV exists and there are disagreements about how to define it and which phenomena to include or exclude. The first step, therefore, is to examine definitional challenges that researchers must confront.

Definitional Issues

Suzie Dunn argues that “scholars, legislators, advocates, and the general public are still grappling with what new behaviors such as online stalking, image-based sexual abuse, and harmful digital misrepresentations are.” She speaks of a continuum of violence ranging from the physical, sexual and emotional to the psychological. Mona Lena Krook proposes a slightly different typology: physical, psychological, economic, sexual and semiotic violence. Semiotic violence deploys words, gestures and images to silence women or render them incompetent. Many consider it a new form of violence made possible by digital technology, producing “digitally native forms of violence against women” such as revenge porn, “upskirting” and “deepfake” pornography.

Vividly violent and vicious threats aimed personally at women through direct messages in their Twitter feeds, and manipulated images of violence that incorporate individual women’s faces and bodies, are very different types of violence from offensive labels. Such epithets can certainly cause harm, as reflected in the terms used to describe them, such as “assaultive speech” or “words that wound.” Yet while hurtful words and phrases may violate platform terms of service, targeted harassment, problematically, usually does not, because we lack a coherent definition of it.

The kinds of behaviour that Dunn and others refer to are innovative forms of violence that digital technology allows through anonymity, connectivity, ease of spread, and the permanence of digital content itself. Gender-based violence has been facilitated through the ages via rumour and gossip, tabloid journalism, moral panics and theatrical forms of punishment such as public stoning or burning at the stake. With the advent of digital technology, the targeting of women has become ubiquitous in all walks of life, cascading across space and time. This is a difference that makes a difference in how we conceive of the harms that TFGBV inflicts on its victims.

Policy Challenges

Wicked problems are usually a symptom of another, larger problem. Any effort to deal with TFGBV therefore risks ignoring the context in which it is embedded. For example, maximizing free speech in platform governance clashes with a woman’s right to be free from online harassment and abuse, pitting freedom of expression against freedom from expression. This clash of values may itself be a symptom of a larger problem, namely, why tech platforms attract so much content directing hate and abuse at women.

When Russia invaded Ukraine, CNN reported “an outpouring of pro-Russian sentiment, as well as misogyny against Ukrainian women, on China’s highly restricted and censored social media.” This online activity reflects the Chinese government’s tactic of using misogyny or chauvinism to influence social media users emotionally, triggering them to engage in trolling or doxing campaigns on its behalf. The invasion itself has been cast by Vladimir Putin as “standing up for ‘traditional values’ in the face of a morally corrupt West weakened by sexual liberalism.” A few days after the invasion began, attendees of an “America First” conference in Orlando, Florida, chanted “Putin! Putin!” Conservative evangelicals in the United States have developed close ties with the Russian Orthodox Church, whose leader claimed that the invasion of Ukraine was meant to prevent the West from spreading LGBTQ+ rights, in particular, gay parades.

Valerie M. Hudson and Patricia Leidl argue that “nationalism is both strongly gendered and powerfully predisposed to the use of violence. Nationalism and misogyny are joined at the hip, and…it is males who almost exclusively define group membership, based on male bonds of kinship or other affinity.” Furthermore, this social order is based on the subjugation not only of women, but also of men deemed less powerful. This subjugation is what binds misogyny with racism, homophobia, transphobia and xenophobia.

This leads us to the larger problem of which TFGBV is but a symptom. Like war itself, patriarchal nationalism, misogyny, sexism, racism and homophobia represent the continuation of politics by other means. These ideologies are increasingly politicized and weaponized as tools in nationalist identity politics, and violence, whether private or public, domestic or international, plays an integral part. In an essay on Putin’s “anti-gay war on Ukraine,” Emil Edenborg highlights this trend when he refers to “homophobia as geopolitics.” Digital technology greatly facilitates and amplifies this trend, transforming it into unique forms of gender-based violence.

Hudson and her colleagues describe a system they call the Patrilineal/Fraternal Syndrome, where the primary mechanism of security provision in a society is the patrilineal kin network. They identify 11 practices that characterize this syndrome:

  1. physical violence against women;
  2. patrilocal marriage in which brides move to their husbands’ family compounds;
  3. early marriage for girls;
  4. personal status laws that benefit men and grant women few rights in the family;
  5. laws and traditions restricting women from owning property;
  6. practices of dowry and brideprice;
  7. son preference and sex-ratio alteration;
  8. cousin marriage;
  9. polygyny;
  10. sanction/impunity for the killing of women; and
  11. the treatment of rape as a property crime against men (a rapist can often escape punishment by offering to marry his victim).

Many of the themes encountered in TFGBV reflect these misogynistic traditions, such as the desire of white supremacists to restore medieval laws making women the property of men. The violent rage against women and girls exhibited by “incels” often stems from a feeling of humiliation. Yet many courts consider humiliation to be a mitigating factor when men beat or murder women.

The various elements of the Patrilineal/Fraternal Syndrome represent structural and cultural factors that underlie the assumption that gender-based violence is a normal aspect of everyday life. For Hudson and her colleagues, “this system of control begins at the most intimate level — the relationship of husband and wife — and from that point radiates through the extended family, through societies, and through nation-states, and ultimately has the ability to cross borders and destabilize regions.” Technology clearly facilitates this process, speeding and amplifying its spread throughout society and across borders. This is the greatest challenge to policy makers intent on curbing TFGBV.

Challenges for Tech

Those who design tech platforms often incur “ethical debt”: they fail to consider the potential social and ethical harms of what they are creating, which leads to unintended consequences, namely, bad online behaviours that then have to be addressed. In Amy S. Bruckman’s words: “To understand the management of online behavior, you need to appreciate the connections between the social, technical, and financial aspects.”

Much online abuse evades detection. Abusers use techniques such as coded language, esoteric memes and satire — what Nina Jankowicz and her fellow researchers at the Wilson Center term “malign creativity” — to avoid triggering automated detection, requiring “moderate-to-deep situational knowledge to understand.” Because women are under-represented in the technology sector, such knowledge is hard to come by. Jankowicz and her colleagues point out that women experiencing online abuse are caught in a double bind: “By speaking their mind and standing their ground, they may subject themselves to further abuse. The decision to delete the content engendering abuse or [to] lock down accounts can also lead to harassment.” This dilemma stems from a form of governance known as “responsibilization” whereby, as described by Daniel Konikoff in his study of Twitter’s abuse and hate speech policies, “institutions disavow responsibilities or functions they would otherwise provide, and instead displace that responsibility onto its constituents, customers, or users.” In the words of Jankowicz and her colleagues, targets of online abuse “bear the onus of detection and reporting.”
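To see how easily such malign creativity defeats automated detection, consider a minimal sketch of a keyword-based filter. This is purely illustrative: the blocklisted epithets and coded variants are hypothetical, and no platform’s actual moderation pipeline is this simple.

```python
# A minimal, hypothetical sketch of keyword-based abuse detection.
# The blocklist and example posts are invented for illustration only.

BLOCKLIST = {"witch", "harpy"}  # stand-ins for abusive epithets

def naive_filter(post: str) -> bool:
    """Flag a post if it contains a blocklisted word verbatim."""
    words = post.lower().split()
    return any(word.strip(".,!?\"'") in BLOCKLIST for word in words)

# Direct abuse is caught...
print(naive_filter("What a witch."))      # True: flagged

# ...but trivially coded variants slip through, because the filter
# has no situational knowledge of leetspeak, spacing tricks or
# in-group memes.
print(naive_filter("What a w1tch."))      # False: missed
print(naive_filter("What a w-i-t-c-h."))  # False: missed
```

Each evasion is trivial for a human with the right situational knowledge to decode, yet invisible to literal matching, which is part of why the onus of detection so often falls back on the targets themselves.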

Moderation of hate speech can produce two kinds of error. False positives are content that is not hate speech but is removed anyway; such content consists predominantly of posts in which people talk about hate speech or about their experiences of victimization. False positives thus represent an error in logical type: confusing hate speech itself with speech about hate speech. False negatives are content that is hate speech but is not removed. Research shows that those who post content about hate speech and the harms it can inflict are often the same people who are targeted by it, namely, racial and ethnic minorities, people of colour, women and girls, and members of the LGBTQ+ community. False positives therefore result in a kind of double victimization: those who have been victimized by hate speech, or are aware of its dangers, are silenced when they try to warn about those dangers or to talk about their own victimization. The larger context for this problem is the lack of diversity in tech companies and among designers of tech platforms and web communities.
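A toy example makes the two error types concrete. The posts, labels and “moderator” below are entirely hypothetical; real systems are statistical classifiers, but they fail in the same two directions.

```python
# Hypothetical posts, labelled by whether they are actually hate speech.
posts = [
    ("Group X are vermin.", True),                    # direct hate speech
    ("I keep being called 'vermin' online.", False),  # speech ABOUT hate speech
    ("Group X belong in the sewers.", True),          # coded hate speech
]

def toy_moderator(text: str) -> bool:
    """Remove any post containing the slur, with no sense of speaker or intent."""
    return "vermin" in text.lower()

for text, is_hate in posts:
    removed = toy_moderator(text)
    if removed and not is_hate:
        print(f"False positive (victim silenced): {text!r}")
    elif not removed and is_hate:
        print(f"False negative (abuse left up): {text!r}")
```

The toy moderator removes the victim’s report along with the original slur while letting the coded attack stand: the double victimization described above, reproduced in a dozen lines.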

The need to moderate content and to weed out violent, misogynistic or hateful texts and images presents further challenges. The burnout rate among content moderators is high: relentless exposure to horrific images and vicious content takes a heavy toll on those charged with sifting through posts and removing the most egregious ones. Content moderators have sued TikTok and Facebook for psychological trauma and post-traumatic stress disorder. Moderators are paid to do this work, but at minimum wage, and training is often inadequate: “Content moderators, frequently working under difficult conditions and without proper training, lack the required expertise in gender, race, and other marginalized communities — and yet they are tasked with making split-second decisions that may directly affect the physical and/or psychological safety of users.”

Jonathan Ong highlights the “need to deepen our understanding of the complex arrangements of our global digital economy and the porous boundaries between ‘respectable’ digital work and ‘shadowy’ disinformation production.” In the Philippines, for example, content moderation farms and troll farms are both geographically and socially linked. Young people from the precarious middle class — whom Guy Standing calls the “precariat” — are hired both to create disinformation for political campaigns and to moderate content and remove this very same disinformation. In Ong’s words: “The misogynistic and homophobic speech they are incentivized to produce then becomes the controversial content that the platforms’ content moderators are expected to scrub from social media.”

A Comprehensive Approach

These definitional, policy and technical challenges are all interrelated. According to former US Secretary of State Hillary Clinton, “It takes years and even generations of patient, persistent work, not only to change a country’s laws, but to change its people’s minds, to weave throughout culture and tradition in public discourse and private views the unassailable facts of women’s worth and women’s rights.”

Gender-based violence is built into our laws, our minds, our cultures and traditions, our private and public discourse, and, in the Information Age, our digital technology and social media platforms. Dual-use technology such as the internet, with both military and civilian applications and implications for both domestic and international security, poses special challenges, since it can be used for better or for worse. To meet the complex challenges posed by TFGBV, we clearly need a multi-dimensional approach that transcends the traditional boundaries separating different research areas, academic disciplines, policy domains and the private and public sectors.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Ronald Crelinsten has been studying the problem of combatting terrorism in liberal democracies for almost 50 years. His main research focus is on terrorism, violent extremism and radicalization and how to counter them effectively without endangering democratic principles.