The political order rests on weakening pillars today. From the United States to Brazil, India, Indonesia and South Africa, vast numbers of citizens believe the “system” is broken and that politicians and their parties no longer care about the average person. Perceptions of a rigged system reinforce this malaise. For many, a strong leader who is willing to break the rules seems the only solution.
This “caudillo” syndrome, as I call it, is the belief system that makes populist, anti-establishment politics possible. Think of Bolsonaro, or Orbán, or Trump; all play to this deep-seated discontent. My own research shows that such anti-system sentiment is a stable, global phenomenon and is correlated with a variety of illiberal outcomes, such as heightened corruption, disregard for norms of political conduct and the flouting of constitutional orders.
AI Populism
It is against this backdrop of increasingly fragile democracies that we must understand AI and other technological innovations.
Much ink has been spilled since ChatGPT and other generative artificial intelligence (AI) technologies hit the scene just over two years ago. Many have pointed to the transformative effect of these technologies on business and politics. And public opinion agrees with the experts. A recent poll by Ipsos, the company I work for, shows that 54 percent of respondents see the positive benefit of AI to society and to themselves personally. This optimism, though, is coupled with trepidation and fear. Concerns about AI range from job displacement to existential threats to humanity. Here, too, we find agreement around the world.
The experts largely agree with public opinion. Many warn of the potentially disruptive effects of AI on elections this year and on democracy in the longer term. Often, technology’s influence is exerted through misinformation campaigns. These deployments run the gamut, from deepfakes in advertising to election-count meddling to disinformation narratives by state and non-state actors. Industry has even taken steps toward self-regulation when it comes to AI and elections.
But AI and its impacts need to be understood in the broader context of our fragile world. Ultimately, it will be public opinion that mediates many of technology’s effects on our societies. Critically, I see distinct impacts occurring this year and in the longer term. Let me explain.
This Year
In the past few months, we have already seen the extraordinary power of AI to create deepfakes. Take the example of the malicious robocall from January that impersonated US President Joe Biden through voice cloning and asked people in New Hampshire to stay home and not vote in the primary. Or take the late-2023 election campaign of Javier Milei, now the president of Argentina, which deployed AI to create fake images of his political adversary Sergio Massa as a communist and brigand. Such examples send shivers down the spines of democracy advocates.
To date, though, there is very little evidence that technology employed in such nefarious ways has had an effect on electoral outcomes. For instance, one Science study shows that voter behaviours and attitudes were independent of how information was algorithmically fed to them. Admittedly, most of this research focuses on social media, not on AI. But it is strongly suggestive. Ultimately, the fear of AI’s impact on election outcomes this year is probably misplaced.
Instead, our worry should be about AI’s strong potential to complicate consensus building, for example, on who won the election — as I like to call it, the “day after” effect. At its simplest, this might mean heightened uncertainty about all things concerning governance. At worst, this could involve actors who purposely sow the seeds of discontent. The January 6 US Capitol attack in 2021 is a perfect example: today, 31 percent of Republicans believe President Joe Biden was elected illegitimately.
Longer Term
In the next decade and beyond, our caudillo syndrome, together with AI’s disruptive nature, will only make governance that much more difficult. First, my research leads me to believe we are entering a “populist super-cycle” that could last years, as the belief that the system is broken continues to trend upward. We should expect increasingly contested elections around the world, further sowing the seeds of distrust.
Second, AI may have other effects that become grist for the populist mill. Consider, for example, potential job displacement. Already, a strong majority of Americans believe AI serves the powerful.
Presently, much of the populism around the world is grounded in cultural backlash against all forms of the other — immigrants, ethnic minorities, elites, experts and so on. The “job displacement” narrative only reinforces the notion that someone is to blame for perceived and real economic deprivation. A recent McKinsey report estimates that generative AI could displace 12 million workers by 2030.
In the not-so-distant future, politicians just might be running on anti-AI planks, with strong calls for regulation and limits on the technology. Today, there is global majority support for AI regulation. We should expect our first AI populist in the next 10 years. History is replete with such technological populism. Take the Luddite movement in England in the early 1800s: a large British military force was needed to put down workers who had been displaced by the mechanization of textile manufacturing.
Finally, fear of an existential threat to humanity pervades public opinion as well. Here, think Arnold Schwarzenegger’s Terminator, not Sharlto Copley’s Chappie. Indeed, in poll after poll, people see such threats as real.
Existential demise is a powerful motivator of populism and demagoguery. History is full of millenarian figures with missions to save their people: Moses, Boudicca, Abraham Lincoln, Sitting Bull, Spartacus and so on. The belief that the end is nigh, whether real or perceived, will only strengthen anti-system sentiment, as well as the willingness of political actors to take advantage of such beliefs.
Bringing It All Together
Technology is not all bad; it makes our world go round. Both public opinion and experts recognize this. But our near-term future is one of social entropy, not cohesion. The primary culprit is not AI but the erosion of trust in our system and the structural factors that sustain such attitudes.
I am an optimist. Ultimately, solutions will come from human beings deciding how we should organize ourselves. Technology is a catalyst, not a cause. But, barring a quick fix, we must all be prepared for a very bumpy ride. In the years ahead, stable governance and consensus will be hard to come by.