On June 2, nearly 100 million Mexican citizens are eligible to vote for a new president, 628 seats in Congress, nine governorships and thousands of municipal positions. The campaign has been heated and occasionally violent. Leading candidates have faced online threats, and a think tank found that 32 candidates had been killed in the run-up to the election.
Claudia Sheinbaum, the former head of government for Mexico City and a staunch ally of the current president, Andrés Manuel López Obrador, is leading in the polls. (López Obrador is not running; Mexico’s constitution limits presidents to a single six-year term.) Sheinbaum’s main opponent is Xóchitl Gálvez, a businesswoman, senator and Indigenous affairs official who leads the centre-right opposition coalition.
“This is the first time in our democratic history that the two major candidates are women, which is very important,” says Pamela San Martín, a senior electoral official during the last presidential election. She now sits on the Oversight Board, an independent body set up by Meta as a “supreme court” to improve the transparency and human rights consistency of its content moderation decisions.
Chris Tenove spoke to San Martín about the Mexican election and what social media platforms can do to improve — or undermine — free and fair elections globally. Their conversation has been edited for clarity and concision.
You helped oversee Mexico’s elections for years. What strikes you as significant about this one?
Mexico has made enormous efforts to guarantee women’s political participation since 2014, when parity was established in the constitution. It’s very important for Mexico and the wider region that the next president is likely to be a woman.
That’s a positive development.
However, there is enormous polarization in Mexico today. In 2018, for the first time in our history, a left-wing candidate won the presidential election, López Obrador. His party also won the majority of Congress and a lot of local positions.
For this election, Mexico is divided into those who are “pro” our current president and his political project, and those against him. It’s almost a warlike scenario in which the elections are not marked by the contrasts of opposing positions, but by the efforts to deny or disqualify or annul the other side.
Related to that, the campaign is taking place in an extremely violent and insecure environment. A significant number of candidates from across the political spectrum have been assaulted, threatened and even murdered.
Beyond the polarization and violence, are there other challenges to the integrity of this election?
There are some concerning issues around how this election is proceeding. Some of the rules and procedures for fair elections have been ignored. For instance, the two main candidates were selected months before the formal electoral process began, in disregard of the rules.
Also, in this election we have had very active participation by the president. I know that is perfectly normal in a lot of countries, but the involvement of public officials in campaigns is expressly prohibited by the Mexican constitution. This was one of the rules put in place to level the playing field, after the 2006 election that López Obrador himself narrowly lost.
Currently, every single day of the week, our president holds a press conference for over two hours. Anyone sitting in the press gallery can ask anything. He says his chest is not a warehouse, he will answer anything. The problem is when he starts talking about the election and electoral process.
That’s one of the rules not being properly upheld in this election.
What role is social media playing in the campaign?
Social media has changed campaigns throughout the world. It allows way more political participation and public debate. But at the same time, and this is very much so in Mexico, social media enables more attacks, sometimes coordinated attacks against political actors.
The polarization I was talking about encourages and naturalizes attacks, regardless of the veracity of the information, and we see this on social media.
In some cases, this activity is against the law. In Mexico, when it comes to elections, what is prohibited offline is prohibited online.
Social media platforms have generally established a good relationship with the electoral authority, however. They help promote authoritative information on the election, and Meta provides the electoral authority with information on expenditures for political ads.
We have all these measures in place, but they have to be adequately enforced.
Looking beyond the election in Mexico, I’d like to ask about your work with the Oversight Board. Could you briefly describe what the board is and does?
The Oversight Board is a self-regulatory mechanism created by Meta, but with guarantees of independence. It currently has 22 board members, who are experts from all around the world with diverse professional, cultural, political, religious backgrounds and points of view.
The board makes binding decisions on emblematic cases, on whether an enforcement action on Facebook, Instagram or Threads is consistent with Meta’s policies and values and also with international human rights standards.
We investigate specific enforcement decisions but also elements that led to those decisions. For example, the design and functioning of Meta’s systems. We also get public and stakeholder input on cases.
The board seeks to give the public fuller explanations of the platforms' historically opaque operations. And it makes policy recommendations that recognize the complexities of moderating content at scale.
You co-authored the Oversight Board’s recently released white paper, “Content Moderation in a Historic Election Year,” on what social media platforms should do to moderate content during elections. What are some of its key observations?
We identify nine key lessons, based on our past cases and experience.
One is that social media companies’ enforcement of policies is critical, not just the policies themselves. When we’re talking about global platforms, sufficient resources are needed to enforce their standards everywhere, including in local-language contexts. You cannot neglect some elections because they’re in countries or regions that are less lucrative.
Another lesson is that it is very important to help protect journalists, human rights defenders, civil society groups and political opposition from both online abuse and from over-enforcement of policies that can restrict their expression. This includes overly broad policies on “disinformation” or “fake news.”
For these and other issues, transparency is key. We need to know what is going on, what the platforms are doing and, if they made mistakes, what they're doing to correct them.
I would also highlight the importance of addressing coordinated campaigns aimed at inciting violence and undermining democratic processes, especially when these come from heads of state or senior government officials. These should be addressed quickly and with tough sanctions by social media platforms.
On incitement of violence by elected officials, readers might be familiar with the case regarding Donald Trump’s suspension from Facebook after the January 6, 2021, violence. Another important case involved Cambodia’s former prime minister Hun Sen, and a video he posted on Facebook in which he threatened to have his supporters “beat up” his political opponents. The board recommended that his account should be suspended, but the company didn’t do so. What should we make of that case?
Meta's decision not to suspend former prime minister Hun Sen's account is unfortunate, in my view. The board found that he used the platform to silence and threaten the political opposition. It was the weaponization of the platform by a leader who also threatened those who opposed or criticized him with jail or the courts or even physical aggression.
Any political leader who does this should be treated the same way. That is why it is so concerning that Meta decided not to suspend his account, because of what it may signal to other political leaders around the world.
Following our recommendation to Meta, Cambodia declared the 22 members of the board to be “persona non grata,” which bars us from entering the country. That suggests some political leaders feel threatened by the board’s efforts to promote accountability.
Let’s shift to the issue of manipulated images, and the video of President Joe Biden that was altered to make it look like he was inappropriately touching his adult granddaughter’s chest on voting day in the 2022 midterms. What were the board’s recommendations arising from that case, and do they apply to deepfakes created using artificial intelligence (AI)?
The video was edited to loop around and make it seem as though President Biden is groping his granddaughter's chest. However, the full video shows him placing an "I voted" sticker on that spot after asking her where he should put it and her telling him, "Put it here." The way the video was edited completely takes it out of context.
That was not a deepfake but a simple “cheapfake.”
When we analyzed Meta’s manipulated media policy, we found that it was very difficult to understand and that it was not very consistent.
First, it was not clear what harms it was trying to prevent or mitigate. So, we asked Meta to specify what those harms were for them to be better defined.
There were also distinctions in the policy that were not easy to explain. For example, it treated content portraying people saying something they did not say differently from content portraying people doing something they did not do.
Also, it only applied to AI-generated content. And it made distinctions between audio and visual content, which don’t make sense in this day and age.
We asked Meta to eliminate these distinctions and inconsistencies in its manipulated media policy. We also recommended that they rely on labelling as an alternative to removal, except when the manipulated content violated other policies.
A very short time later, Meta announced that it would start labelling AI-generated content, in line with some of our other recommendations.
The board recently announced two other cases for consideration, focusing on two explicit, AI-generated videos of women in public life. Why?
In these two cases, the policies that Meta enforced were bullying and harassment.
We want to assess Meta’s policies and enforcement practices for community standards, like bullying and harassment, hate speech or others, when these are applied to AI-generated content.
The Oversight Board’s decisions on specific pieces of content are binding. But its broader recommendations — which would have much more impact on Meta’s behaviour — are non-binding and frequently not adopted. How do you think about the board’s impact?
First of all, regardless of whether Meta accepts our recommendations, it has to publicly respond to them.
Our recommendations are not necessarily things nobody has thought of before. We build on work done by digital rights organizations and experts in the field. We hear from them that when they tell Meta to do something, the company often doesn’t answer. However, when the board tells them, “You should do this,” even if they say “no,” they still have to answer and explain.
From our own research we have found that Meta has agreed to implement or explore the feasibility of implementing close to 80 percent of our recommendations. And we track whether they do so. [For a peer-reviewed article on how they do so, see “Burden of Proof: Lessons Learned for Regulators from the Oversight Board’s Implementation Work.”]
There have been areas where Meta has consistently refused our recommendations (for example, regarding how algorithms or platform design decisions may have amplified harmful content).
But there have been many important recommendations that have been accepted by Meta and have been really impactful.
Thinking about the trajectory of the role that platforms are playing in the election context, what is your sense about where we are at?
There have been a lot of lessons learned by electoral authorities and by industry. We have to work together to ensure measures mitigate some of the harms we’ve been talking about, but also that they protect or enhance speech.
What we should expect from them, at the very least, is to not make the same mistakes over and over again.