From Capability to Consequence: How the AI Narrative Is Changing

If 2023 was the year that AI broke into the public consciousness, 2024 has been the year that people responded.

August 29, 2024
The public is more aware of and nervous about AI than ever before, the author notes. (Illustration by Paul Lachine)

In 2022 and even in the early days of 2023, the talk around artificial intelligence (AI) largely centred on questions of what the technology could do. ChatGPT was lauded as one of the first AI tools that could reliably communicate in plain English. GPT-4 was likewise celebrated for its multimodal abilities and human-level performance on exams such as the SAT, the Graduate Record Examinations and the Law School Admission Test. Even some of the AI models that did not dominate public attention, such as the text-to-image generators DALL-E and Stable Diffusion, were still discussed in terms of the new things they, and by extension AI, were able to do.

Since then, the tenor of the AI discussion has changed markedly. The public is now less interested in simply cataloguing what AI can do and more curious to debate how we, as a society, should react to it. If 2023 was the year that AI as a technology broke into the public consciousness, 2024 has been the year that people, from CEOs to senators, have responded.

Businesses have certainly taken notice. According to the 2024 AI Index Report from Stanford University, 2023 saw more private investment in generative AI than ever before. Close to 80 percent of Fortune 500 company earnings calls now mention AI, a new high. According to a recently released McKinsey survey, business adoption of AI has jumped to nearly 70 percent. In the last few months, another question has dominated industrial AI discussions: Is the AI buildup justified, or are we in the midst of a hype cycle? All of these conversations reveal how businesses have become less concerned with what AI can do and more interested in understanding how these tools can be applied to concrete business problems.

The public is likewise more aware of, and nervous about, AI than ever before. For the first time, a greater proportion of Americans report being more concerned than excited about AI. An Ipsos survey suggests that people across the globe were substantially more nervous about AI in 2023 than in 2022. Unsurprisingly, policy makers have responded to such concerns and sharpened their legislative pencils. In 2023, US regulatory agencies passed a total of 25 new AI-related regulations, and more than 180 AI-related legislative bills were proposed at the federal level. After lengthy debates, the EU AI Act was finally passed and entered into law a few weeks ago. In California, policy makers are in the midst of a fiery debate over Senate Bill 1047, with some arguing that the bill would impose justified safety standards on AI developers, and others claiming it would stifle the AI innovation ecosystem.

The changing narrative surrounding AI does not mean there have been no newsworthy technical releases of late. In the last few months, many major AI developers, such as Meta, OpenAI, Anthropic and Mistral, have launched new state-of-the-art foundation models. However, these releases have not gripped the public as they once did. Admittedly, this relative lack of interest might partly reflect the fact that this new wave of models has brought only marginal technical improvements, leading some to question the likelihood of sustained progress in AI. But the change in discourse also reflects a new public understanding of AI. People are aware that AI is here and understand that its potential future, whether it does more to help or harm humanity, depends on how we, as a society, decide to engage with the technology. This is less a technical question than a social one.

This narrative readjustment should be welcomed. A natural parallel exists between the rise of AI and that of social media, the latter of which is now a ubiquitous technology. It arguably took policy makers a decade to respond seriously to the arrival of social media and handle some of the new societal challenges it introduced, from easier election manipulation to rising teenage anxiety. The turnaround has been much quicker with AI. Within a year of AI's arrival moment, the launch of ChatGPT, there has been major policy action in both the United States and the European Union, not to mention the burgeoning public discussion of what role this tool should play in all of our lives. Sustaining this discourse will be key to getting the kinds of AI systems that we want: those that will advance, and not hinder, human flourishing.

The views expressed in this article are the author’s alone, and not representative of those of either the Stanford Institute for Human-Centered Artificial Intelligence or the AI Index.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Nestor Maslej is a CIGI fellow and research manager at the Institute for Human-Centered Artificial Intelligence at Stanford University, where he manages the AI Index and Global AI Vibrancy Tool.