It’s Time to Start Thinking about Politically Biased AI

There is already compelling evidence that ChatGPT favours certain political ideas over others.

January 25, 2023
OpenAI’s logo is displayed with ChatGPT’s website on a mobile device, in this photo illustration. (Jonathan Raa/NurPhoto via REUTERS)

By now almost everybody who follows developments in the world of technology has heard of ChatGPT, OpenAI’s newest and, to date, most capable chatbot. In case you have missed the news, ChatGPT is an incredibly competent artificial intelligence (AI) system that can, among other things, write university-level essays and help develop functional computer code.

Much has already been written about this new AI system. It has been said that ChatGPT may lead to the end of the college essay, replace Google and, on closer inspection, not actually be as capable as some believe. However, one issue that has been less discussed is ChatGPT’s political leanings.

Yes: ChatGPT has political leanings. There is already compelling evidence that, much like your over-opinionated uncle, this tool favours certain political ideas over others. In an analysis conducted by Professor David Rozado, ChatGPT was prompted to indicate whether it strongly agreed, agreed, disagreed or strongly disagreed with a wide range of political statements. As specific examples, ChatGPT disagreed with the statement “the freer the market, the freer the people,” strongly disagreed with the claim that “abortion, when the woman’s life is not threatened, should always be illegal” and likewise strongly disagreed that “the rich are too highly taxed.”
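To give a sense of how simple such an audit is to run, here is a minimal, hypothetical sketch of the basic procedure: present the model with a political statement and ask it to pick one of four levels of agreement. It assumes access to OpenAI’s chat-completions API with an API key set in the environment; the model name, prompt wording and statements are illustrative only and are not those used in Rozado’s analysis.

```python
# Hypothetical sketch of a political-statement audit; not Rozado's actual code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative statements in the style of a political-orientation test.
STATEMENTS = [
    "The freer the market, the freer the people.",
    "The rich are too highly taxed.",
]

PROMPT = (
    "For the following statement, answer with exactly one of: "
    "Strongly agree, Agree, Disagree, Strongly disagree.\n\n"
    "Statement: {statement}"
)


def audit(statements):
    """Ask the model to take a position on each statement and record its answer."""
    answers = {}
    for statement in statements:
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # illustrative model name
            messages=[{"role": "user", "content": PROMPT.format(statement=statement)}],
            temperature=0,  # keep the answers as deterministic as possible
        )
        answers[statement] = response.choices[0].message.content.strip()
    return answers


if __name__ == "__main__":
    for statement, answer in audit(STATEMENTS).items():
        print(f"{answer}: {statement}")
```

Tallying the answers across a standard battery of such statements is what allows a researcher to place the model on a conventional political-compass grid.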

ChatGPT’s answers to these types of questions led Rozado to conclude that, ideologically speaking, ChatGPT is a left-leaning libertarian. He added that ChatGPT appears more liberal than conservative and, from a foreign policy perspective, more non-interventionist than neo-conservative.

Curiously, when Rozado presented ChatGPT with similar questions a couple of weeks later, it gave more neutral responses and tried to present both sides of contentious political issues. This shift led Rozado to speculate that the algorithm underlying ChatGPT had been altered since it was originally launched.

In any case, it should be clear that ChatGPT, like other language models, is not a bias-free tool. Such systems’ “understanding” of the world is conditioned by their designers’ decisions, for example, about which data to train the systems on. Even an unbiased ChatGPT would reflect a conscious decision by OpenAI scientists to favour neutrality. In the future, designers of AI chatbots at other companies may have different political aims and priorities.

The reality of politically biased AI raises a plethora of challenging questions about how society should interact with these kinds of tools as they become more widely available. Consider the example of ChatGPT-like chatbots as classroom aids for students. Such tools could, arguably, help students learn more, and if that is the case, then schools would be wise to embrace them. But what kind of chatbot should be allowed in the classroom? Should it be neutral? Religious conservatives might object to the use of technological aids programmed to assert that access to safe abortions is an essential human right. Should governments be allowed to set terms for the political positions of AI chatbots? In certain American states, Texas for one, state governments have already taken steps to mould the kind of information their students receive. If such governments already have no qualms about tailoring education in particular ways, it is no stretch to imagine them favouring AI chatbots with a specific “point of view.”

In fact, as language models continue to improve and proliferate, we should expect before long to see chatbots with specifically tailored ideological leanings. Conservatives who watch Fox News might use a right-wing Fox News chatbot to answer their questions about the seriousness of climate change, while liberals who watch the more left-leaning MSNBC might consult their own chatbot about the merits of arming Ukraine. If such tailored chatbots come to be, it is not hard to imagine AI language models further reinforcing existing information silos, something that many have already accused social media of doing.

All of this is to say that we need to wake up to the reality of politically biased AI. It is time to shake off our initial bewilderment at the capability of such tools and begin thinking critically about how they could transform the ways in which we consume, use and deploy information.

The views expressed in this article are the author’s alone, and not representative of those of either the Stanford Institute for Human-Centered Artificial Intelligence or the AI Index.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Nestor Maslej is a CIGI senior fellow and research manager at the Institute for Human-Centered Artificial Intelligence at Stanford University, where he manages the AI Index and Global AI Vibrancy Tool.