Truth-Agnostic Chatbots Show the Need for a Search Alternative

People depend on search engines to find information they can use. Now, tech companies are telling users that they can’t necessarily trust the information they provide.

March 6, 2023
ChatGPT lettering is displayed on a mobile phone screen with the OpenAI website in the background. (Beata Zawrzel/NurPhoto via REUTERS)

Any doubts that we are in the frenzy phase of the artificial intelligence (AI) generative language bubble were laid to rest by Microsoft’s and Google’s duelling early-February press conferences announcing plans to incorporate generative AI into their search engines. Microsoft has moved at lightspeed to capture some of the ChatGPT buzz by incorporating OpenAI technology (in which it has invested billions of dollars) into its also-ran search engine, Bing. Caught flat-footed in the market by OpenAI’s November unveiling of ChatGPT, Google declared a “code red” and rushed out a response of its own. To much public derision, its February 6 announcement showing off Bard, its new chatbot, included incorrect information generated by Bard itself.

For tech journalists, it’s doubtless been great fun. Microsoft CEO Satya Nadella channelled the worst of the testosterone-fuelled bro-ness that infects both Silicon Valley and Wall Street when he bragged to The Verge about Google: “I want people to know that we made them dance.” As for Google, the schadenfreude its goof generated is a measure of how far the company has fallen in public esteem.

The rest of us, though, should focus less on the dramas and more on reconsidering the fundamental social bargain that has placed these companies at the very heart of the knowledge economy and society: unprecedented wealth in exchange for the responsible maintenance of online search. Their impulsive embrace of an unproven, unstable, falsehood-generating technology demonstrates that we need to start developing alternative, not-for-profit search capabilities.

The oft-expressed notion that Google is an advertising company, not a search company, is cynicism and fatalism masquerading as wisdom. From a social perspective, Google’s primary value remains its ability to index the World Wide Web. It is socially useful because it provides us with a way to access all this knowledge. Its monopoly position — Google Search controlled 92 percent of the worldwide market in 2022 — places on it an extra responsibility to deliver quality results.

Exactly how reckless are these companies being? Think about it in terms of how a search tool usually functions. When a user inputs a search term, Google (or Bing) serves up a series of links deemed to be relevant to the user. Although its algorithm remains a black box, Google Search is based in part on the assumption that the number of links that refer to a specific webpage can serve as a proxy for its authoritativeness.
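
To make the link-as-proxy idea concrete, here is a minimal, purely illustrative sketch that ranks pages by how many other pages link to them. The web graph and domain names are invented for the example, and the counting is a crude stand-in for the intuition behind signals like PageRank, not Google’s actual, proprietary algorithm:

```python
# Toy link-based ranking: pages that attract more inbound links are
# treated as more "authoritative." A crude stand-in for the intuition
# behind PageRank, not Google's real (and secret) algorithm.
from collections import Counter

# Hypothetical web graph: each page lists the pages it links to.
links = {
    "blog.example": ["wiki.example", "news.example"],
    "news.example": ["wiki.example"],
    "forum.example": ["wiki.example", "news.example"],
    "wiki.example": [],
}

# Count inbound links for every page.
inbound = Counter(target for targets in links.values() for target in targets)

# Rank pages by inbound-link count, most-linked first.
ranking = sorted(links, key=lambda page: inbound[page], reverse=True)
print(ranking)  # ['wiki.example', 'news.example', 'blog.example', 'forum.example']
```

Note what the sketch makes plain: “authority” here is simply popularity. Nothing in the calculation asks whether a page’s content is true.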

This practice does not deliver authoritative knowledge. As I’ve explained elsewhere, for Google and Microsoft, truth has always been a matter of correlation, not any correspondence to an underlying reality. As Safiya Noble and others have reminded us over the past decade, thinking about knowledge in this way doesn’t reveal truth; it merely recreates dominant gender, racial and other biases.

However, while search results reflect underlying biases, they also leave the user a degree of agency: because people must click through to a link, they can evaluate that knowledge for themselves.

Now, consider what it means to put a generative AI chatbot on top of this format. As people, myself included, have pointed out in the three months since OpenAI unleashed ChatGPT on an unprepared world, generative AI has a tendency to produce falsehoods. This is because it is merely a complex auto-complete machine. The text that a GPT (generative pretrained transformer) generates — to call what it produces answers is to insult actual thought — is created by the GPT’s calculations of what the next word is likely to be, based on the texts on which the model was “trained,” a process that depends on underpaid, behind-the-scenes workers, often labouring in horrific circumstances.
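
To see why “complex auto-complete machine” is a fair description, here is a minimal sketch of the core move: predicting the most probable next word from the words that came before. The tiny corpus is invented for illustration; real GPTs pursue the same next-token objective with neural networks over billions of parameters:

```python
# Toy next-word predictor: count which word follows each word in a
# training corpus, then greedily emit the most frequent continuation.
# GPTs do this with neural networks over subword tokens, but the
# training objective -- predict the next token -- is the same in spirit.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Build a table: word -> counts of the words that follow it.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def generate(start, length=5):
    """Greedily extend `start` with the most likely next word."""
    words = [start]
    for _ in range(length):
        candidates = following[words[-1]].most_common(1)
        if not candidates:  # dead end: no observed continuation
            break
        words.append(candidates[0][0])
    return " ".join(words)

print(generate("the"))  # -> 'the cat sat on the cat': fluent, but no truth check
```

The output is fluent-sounding nonsense: the procedure optimizes plausibility, and nothing in it checks truth.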

That it’s a machine for creating what can only really be called bullshit (following the definition of American moral philosopher Harry Frankfurt: speech produced with no regard as to whether it is true or not) has become comically clear in the past several days, with Bing’s GPT producing text that is petulant, threatening, whiny and argumentative, and not at all helpful in serving up the world’s knowledge.

Inserting these chatbots into search introduces an enormous degree of uncertainty and unreliability. It’s tantamount to placing a BS-creation machine between the user and the search results. Google and Microsoft are well aware of how unreliable this tech is. While Google’s gaffe has received most of the attention, Bing has also generated its own share of howlers. And both companies explicitly warn their users that they cannot necessarily trust the output that they, as businesses, are serving them.

It’s audacious: People depend on search engines to find information they can use. Now, these companies are telling users that they can’t necessarily trust the information that they provide. These are not the actions of companies that care about supporting the healthy knowledge ecosystems all societies need to survive and thrive.

What’s more, adopting chatbots that spew truth-agnostic word salads fundamentally changes these companies’ relationship with the user and the rest of the Web. Where before they were link aggregators, now they claim to provide authoritative knowledge, subject to a few terms and conditions. If these companies thought content moderation was contentious, they will find those debates pale in comparison to fights over the human-created and -enforced guardrails that will be necessary for this automated technology to produce something resembling quality output. That’s because, while GPT chatbot technology may improve by some measures, it will continue to be plagued by the biases Noble and others identified in search — only now the companies will effectively be acting as editors and publishers, while almost certainly deflecting blame for any unpleasant outcomes away from themselves and toward the machines they’ve created.

OpenAI may have triggered this particular avalanche, but Microsoft’s and especially Google’s reactions have been the most telling. Faced with a challenge to its search dominance, Google rushed to embrace a technology that, at the very least, delivers confidently asserted trash, rather than improving its own model, which has been plagued by quality problems. The rush is a reminder of the inherent weaknesses of depending on for-profit companies to deliver what, as Chirag Shah and Emily Bender remind us, was in previous times a non-profit activity: the cataloguing of the world’s information.

Thankfully, continuing to walk down the corporate search path is a choice, not an inevitability.

The internet itself was developed by government agencies and universities; corporations were latecomers.

It’s almost impossible to imagine a technology like ChatGPT being adopted by search engines developed by non-profits along the lines of the public library or public broadcaster model. Not because librarians are more virtuous than businesspeople, but because librarians hold the pursuit of accuracy as a fundamental value, and they don’t have to worry about maximizing engagement at all costs.

Governments need to rediscover this public impulse in internet and digital development: public search should become as much a policy objective as regulating private platforms, with government engaging in either direct research and development or in providing support for non-profit alternatives. The goal, as Shah and Bender argue, should be a system “free of economic structures that support and even incentivize the monetization of concepts (such as identity terms) and allow commercial interests to masquerade as ‘objective’ information.”

Corporate search’s ChatGPT-driven embrace of generative AI may have exhilarated Microsoft and embarrassed Google, but the rest of us should take the opportunity to reconsider the costs of our information ecosystem. We have entrusted the world’s information to companies that have little regard for the essential service they’re supposed to provide.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Blayne Haggart is a CIGI senior fellow and associate professor of political science at Brock University in St. Catharines, Canada. His latest book, with Natasha Tusikov, is The New Knowledge: Information, Data and the Remaking of Global Power.