Episode 3

History Claps Back on Techno-Optimism, with Daron Acemoglu

The relationship between technology and development is more intricate and multi-faceted than meets the eye.


Episode Description

Do emerging technologies inherently serve the greater good? Join Policy Prompt hosts Vass and Paul in a discussion with world-renowned economist Daron Acemoglu, on his recent book Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity, co-authored with Simon Johnson (PublicAffairs, 2023). They discuss the implications of technological prowess on the global stage, the impacts of artificial intelligence on the future of work and education, and the building blocks of techno-optimism.

Chapters

1. 0:00:00 - Welcome to CIGI's Policy Prompt
2. 0:00:59 - What can economic history tell us about prosperity and technology?
3. 0:01:51 - Introduction to guest Daron Acemoglu
4. 0:02:34 - Why did Daron decide to devote his life to the work of economics?
5. 0:04:47 - Is economics the best lens to apply to understand the role of new technologies in shifting global power dynamics?
6. 0:06:46 - How have new technologies affected the way Daron executes his work?
7. 0:10:29 - Power and Progress book description
8. 0:11:20 - Overall takeaways, or patterns, based on Daron's body of work
9. 0:16:28 - Trade-offs in the policy-making space, and alignment with the aforementioned economist lens
10. 0:17:57 - Assessing the adoption of techno-optimism by today's economists
11. 0:19:37 - Why is there so much positivity and hype around AI specifically?
12. 0:22:19 - Should we push back on the AI hype?
13. 0:29:26 - Picking at the assumptions of technological progress
14. 0:31:17 - On the recent Goldman Sachs report claiming generative AI may be hitting a ceiling
15. 0:33:49 - How the "malleability" of technology will impact the future of work
16. 0:36:12 - Where do workers fit into considerations of increased use of AI across sectors?
17. 0:37:43 - How might the rising use of AI-powered educational tools impact the types of skills and knowledge that wind up being valued in the labour market?
18. 0:42:15 - What are the global energy implications of the rapidly growing digital landscape?
19. 0:46:16 - How would the proposed digital advertising tax work?
20. 0:51:33 - Daron's "tricks of the trade"
21. 0:52:09 - Does Daron feel he's moving the policy needle?
22. 0:53:44 - What's next? What are the focuses of his next book?
23. 0:54:57 - Debrief with Paul and Vass


Vass Bednar (host)

You are listening to Policy Prompt. I'm Vass Bednar, and I'm joined by my co-host, Paul Samson, president of the Centre for International Governance Innovation. Policy Prompt features long-form interviews, where we go in-depth to find nuance in conversation with leading global scholars, writers, policymakers, business leaders, and technologists, who are all working at the intersection of technology, society, and public policy. Listen now, wherever you find your podcasts.

Paul Samson (host)

Among the biggest questions in human development is how to identify the ingredients needed to make societies prosperous, and to do so sustainably. There are many ingredients, but the impacts of new technologies on economic growth, productivity, and equality are central. What can economic history tell us about prosperity and technology?

Vass Bednar (host)

Yeah, asking for a friend, Paul, that's such a huge question. I love it. I love that your mind goes there right away. Because you're absolutely right, how are societies structured? If we think about public policy as kind of being the software of society, then maybe that's a way to think about that relationship between tech and sharing wealth. So can we fairly and sustainably share the financial benefits that come from new and exciting technologies, or are we getting to a place where they're mostly captured by the elites through existing power structures? It's worth stepping back and asking if the current path of commercializing technology actually takes us back a step, to the extreme inequalities of past centuries.

Paul Samson (host)

And you even upped me on the big questions, way to go.

Vass Bednar (host)

I've been hanging out with you too much.

Paul Samson (host)

Today, we have the distinct pleasure of speaking with Daron Acemoglu, who's professor of economics at MIT, and widely acknowledged as one of the leading economic thinkers and researchers of our times.

Vass Bednar (host)

He's incredible. I really appreciate how he also engages in social media about his ideas. And Paul, I didn't think he was going to say yes to us. I'm just kidding. What we get to explore with Daron, are his observations on that fundamental interface between technological disruption and economic prosperity, drawing on data and insights gleaned from history. Daron, welcome to Policy Prompt. We're stoked to have you.

Daron Acemoglu (guest)

Thank you. It's my pleasure.

Paul Samson (host)

We wanted to start off with a question of, as I said, you're a leading economist, and why did you decide to devote your life to the work of economics as opposed to many other pursuits you could have taken on? And why is technology so key to that?

Daron Acemoglu (guest)

Well, we all make mistakes. No, but I actually came into economics because I was interested in the fundamental determinants of why some countries are prosperous and others aren't, why some are democratic and others aren't, why some respect, to some extent, individual freedoms, and others don't. And I thought economics would have some of the answers. Actually, much of economics at the time wasn't really focused on these questions, but that really pushed me towards studying the two topics that have become the center of most of my research, institutions and technology. And it is still my belief that big questions about inequality, economic growth, development, are all about the interplay of institutions and technology. When Adam Smith was writing his completely transformative book, The Wealth of Nations, he was worried about the wealth of nations, but the gap between the richest and the poorest countries was something like fourfold.

Today, it's about 60-fold. You cannot understand where we are today without recognizing that that's a lot about technology, because technology is one of the factors that has enabled some countries to surge ahead and grow very rapidly, and institutions. And institutions are critical in how some others have failed to take advantage of these global opportunities. But they're also shaped by technology, then they're also shaped by the global division of labor and power struggles within the world. So all of these are sort of interwoven with what the technological capabilities we have developed are, and how we are using them, which is a lot about institutions.

Paul Samson (host)

And is economics the best lens to apply to understanding these dynamics? Is that what drew you to it, and has kept you going on that?

Daron Acemoglu (guest)

It's hard to know. Once you are in a particular discipline, that does naturally shape your own biases and lenses. But I believe economics is quite well placed to think about some of these issues, but it's not the only one. And in fact, a lot of my research spans economics and political science, because one overwhelming fact of life, as I see it, is that power matters, both internationally and domestically, both for technology and for who will enjoy the fruits of technology. And power is, of course, something that economists, for a long time, set aside. So the interplay of economic and political factors, and collaboration between economics and political science, as well as with sociology, are critical for understanding power. And we also have to learn a lot from history and social psychology.

So it's really a multidisciplinary thing, but I think economics has some unique perspectives that are quite useful. First of all, it's very data-driven, and I think the questions we're talking about are data-related, and you also want to test them, which, again, economics is very well placed to do. And also a lot of the questions are really about economic dynamics, economic growth, economic development, how markets work, and who controls markets, who can benefit from power, and in what ways? So I think you just couldn't understand these things without a heavy economics perspective.

Vass Bednar (host)

Well, you've been thinking about these topics rooted in questions of power for quite a long time now.

Daron Acemoglu (guest)

For ages. It shows my age.

Vass Bednar (host)

Yeah, shows your age. There's no camera, no, no, no. We're curious, how have new technologies, or have new technologies affected the way you do your own job, how you conduct your research, how you share it? Is there anything different about how you position your ideas, or seek to have influence today, compared to even just a few years ago?

Daron Acemoglu (guest)

Well, I mean obviously, when I first started in economics in the early 1990s, we didn't even have software programs for statistical analysis. Absolutely, our research has been heavily shaped by technological capabilities. I have access to much better computers for all sorts of things, but also the way we communicate has changed, in some ways for the good, in some ways in a more complex fashion that I think has both goods and bads. But for instance, again, when I was first an assistant professor, the way you would disseminate your work is that you would first send three copies of a paper to a journal and wait for a year. And then if the journal decided to publish it at some point, then other people would see it. And then once you were sufficiently senior, or in the right place, you could have access to working paper series. Now, the web and social media have made those things much more easily accessible to a broader audience. That's fantastic.

And I think social media and the internet have also provided methods for some scholars to reach a broader audience, which I think is also good. I don't think that all that economists do should be confined to writing papers for a specialized audience. When good economists focus on preparing podcasts and reaching a broad audience, I think that's quite valuable from a social point of view. But on the other hand, of course, social media also creates a race to the bottom. Sometimes it's slogans, or sensationalist ideas, that are more effective in social media, and I see that. I don't participate in social media that actively, but when I look at some of the platforms, you see that economic ideas have sometimes been so sensationalized that the necessary nuance is lost.

Vass Bednar (host)

So that's kind of... What do you mean there, like quips, hashtags, people trying to grab attention... Amusement.

Daron Acemoglu (guest)

Right, amusement. Yeah. I think this is... I see social media as just the next step in a process that cultural critics, such as Neil Postman, noted 30 years ago, for example, when he wrote the book Amusing Ourselves to Death. That was mostly about TV, but I think it applies centrally to social media as well: that mixing of serious analysis, facts, and scientific discussion with amusement is a difficult thing. If you can do it well, of course it can expand its reach. But on the other hand, it often devalues it, because amusement becomes the main purpose, and that determines what kind of ideas are expressed and how people interact with those ideas.

Vass Bednar (host)

We are speaking with renowned economist and writer Daron Acemoglu, who's sharing from his latest book with Simon Johnson, Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity. In a world where we often assume that technology, and right now AI in particular, will seamlessly benefit all of us, Daron's deep research into contemporary data and historical evidence shows that, in the authors' own words, quote, "There's nothing automatic about new technologies bringing widespread prosperity. Whether they do or not is an economic, social and political choice," end quote. You can find Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity by Daron and his collaborator, Simon Johnson, at your local bookstore.

Paul Samson (host)

I mean, I think I would say that you've made the bridge from more traditional economic studies and research to embracing things like podcasts and videocasts, and that's great. It links a little bit to the point of your work being very widely cited, and it resonates very well. I think it's generally viewed as balanced, and you may not like that category, but people sometimes refer to your work in that sense. And I think that that does relate to why a lot of people follow it closely and take it very seriously, right. You've acknowledged in terms of the technology debate that there's been quite significant progress on living standards over time. You're not pessimistic about the future potential of technology, but you're very focused on what you see as the key power struggle issues and the perennial dynamics around inequality, and I think that that's really resonated with people.

When you look at the work that you've done in several books, and across hundreds of articles, and things, are there some overall takeaways that come up, or patterns that you want to articulate? Are there things you can point to there that are huge takeaways?

Daron Acemoglu (guest)

Well, thanks, first of all, Paul, you summarized my work and objectives very generously and very aptly. And relating to this issue of the social media age, et cetera, I would say one simple takeaway, which is actually not so easy for people to make part of their natural thinking because it's almost like holding two contradictory thoughts in your head, is that technology has been an extremely powerful engine of progress, prosperity, improvements in health, and all sorts of good things. And at the same time, there is nothing in the nature of technology that would do that automatically, so there is a lot that you can do with technology that goes against that. So you have to hold these two thoughts simultaneously in mind, in order to both make sense of history, and to have the right perspective about the current and the future trajectory of where our society is going.

You cannot understand the most important trends in economic history without recognizing that the application of industrial technology and scientific knowledge, which started with the British Industrial Revolution sometime in the middle of the 18th century, has been a critical force for why we are so much more prosperous, comfortable, and healthier than people who lived 300 years ago. It's been an amazing process. Today, people who live in the poorest parts of the world have significantly higher life expectancy than those who lived in some of the most prosperous parts of the world in the early 19th century. Why is that? Because people have access to much better nutrition and antibiotics, and other drugs that have spread rapidly around the world, and hopefully, resistance to antibiotics doesn't keep growing and we can keep those gains.

So that's the power of technology. But then it is very easy for many people, both in the lay audience and actually in scientific circles, to immediately jump to an almost teleological interpretation: that we are bound to be technologically innovative, and technology is bound to make us better over time, perhaps even so much so that we reach something like singularity, artificial general intelligence, interplanetary life, and so on and so forth. All of those are part of an extremely optimistic read of technology, that it will always advance, and it will automatically create widespread beneficial effects for humanity. Neither of these two things is true. We've seen hundreds of years of human existence where technology does not advance, even following other innovative periods. And we have seen many episodes in which technology advances, but it doesn't create any benefits for humanity at large, and it might actually be a tool for repressing, oppressing, and impoverishing certain parts of society.

Paul Samson (host)

I don't want to pump the tires of economists too much, but it's good to do that a little bit with a leading one on the show. But there's an interesting thing about when you go to a policymaker, and you say, "Here's the issue, and there's trade-offs involved." And the general response is, "I don't want the trade-offs. I want both." But inherent in the economics framework, is that there will be trade-offs, right? And so there's kind of a hard truth to some of the economic analysis here, that's going on.

Daron Acemoglu (guest)

Absolutely, there are going to be trade-offs, and that's part of the nuance. But actually, I think because economists are so in the know about what the Industrial Revolution brought, in terms of subsequent increases in output, productivity, and comfort, economists have themselves become extremely techno-optimistic. The view that there are automatic forces for technological advances to ultimately bring benefits for everybody is now very firmly held by many economists. And that's why economics and the tech sector became part of the argument that you should never question the wisdom of tech entrepreneurs, because they're pushing technology forward.

Vass Bednar (host)

I think that's a really interesting frame to think about, this risk that some economists may almost be part of the hype, adding to the hype, drinking the Kool-Aid.

Daron Acemoglu (guest)

Absolutely, yes.

Vass Bednar (host)

To use some euphemisms that are maybe more social media-esque. I noticed that you've suggested that today's disruptive innovators only see win-wins, and maybe aren't factoring in that nuance, or that potential for negative externalities, that there's almost too much optimism right now.

Daron Acemoglu (guest)

Absolutely. And that actually isn't exceptional.

Vass Bednar (host)

Okay.

Daron Acemoglu (guest)

Those who are at the helm of the innovation, at the very top, always see win-win, or they discount the losers. Rockefeller did that. Factory owners of the late-18th and early-19th centuries in Britain did that, despite the tremendous poverty that their workers suffered and the horrible pollution that their factories created. But what's problematic is that that win-win mentality takes over the entire sector, the whole tech sector and the media ecosystem. So I said economists have become very techno-optimistic, but I think they are only second to the media. Journalists are extremely techno-optimistic. They're mesmerized by tech leaders. They're mesmerized by new digital tools, and they like the story of these rebels bringing down everything, move fast and break things, that just took hold of the US media-

Vass Bednar (host)

It did.

Daron Acemoglu (guest)

To an extent that I think is unprecedented.

Vass Bednar (host)

So even though we've been able to look back on that, I mean, I have seen some literature looking at the hype, and how tech press releases were essentially directly reflected in news media. Why do you think there's still so much happy talk and hype around AI specifically? Why aren't we learning from our blind spots?

Daron Acemoglu (guest)

I don't know why we're not learning. But I think the media, again, this is the amusement part, the media likes a good story. And why were they so obsessed with Trump? Trump was a good story. AI is a fantastic story. It is a mesmerizing tool. It has impressive achievements. It immediately taps into all of the Hollywood movies and science fiction that is about machines becoming intelligent, or even more intelligent than humans, and all of the complex implications that something like that could have. It's really a very catchy story.

As a journalist focused on amusement and sensationalist headlines, it's natural that you're going to be drawn to it. Add to that the issue that the world economy, especially the industrialized world, has been suffering for the last 40 years, from slow productivity growth, especially over the last 20 years. The kinds of economic growth and improvements in our productive capacity that we experienced in the 40s, 50s, 60s, early-70s, those are a distant memory. And we're all looking for a savior. What will that savior be? Well, it's clearly not China. For a while, it was China, cheap goods from China will prop us up, and that's clearly not the case. Hopefully, we now recognize that. So now our next savior is AI. AI is going to revolutionize our economy. We're going to end up with much better materials, much better science, much greater productivity. The stories are endless.

Vass Bednar (host)

Policy Prompt is produced by the Centre for International Governance Innovation. CIGI is a nonpartisan think tank based in Waterloo, Canada. With an international network of fellows, experts, and contributors, CIGI tackles the governance challenges and opportunities of data and digital technologies, including AI, and their impact on the economy, security, democracy, and ultimately, our societies. Learn more at CIGIonline.org.

Paul Samson (host)

Yeah, exactly on this, Daron, you've come in lately with some numbers on AI, and some data, and said, "Look, what are we actually seeing? What should we actually expect," and push back on some of the hype. Can you talk a little bit about some of the recent findings you've thought-

Daron Acemoglu (guest)

Yeah, I'd love to. And let me first try to put that into perspective. I'm not trying to be a party pooper, arguing against the hype because I dislike people being very happy and looking at the world through rose-tinted glasses. I think the hype is actually very problematic, because what we are seeing is a tremendous amount of investment without an understanding of what we're going to use this investment for, and even more hunger on the part of the tech companies for energy, for GPU capacity, for data, all fueled by this hype. But even worse, if you talk to a business leader, they will tell you, "If we don't invest in AI, our investors and our creditors, everybody's going to think we are falling behind because this is the new thing, and we should all be investing."

But of course, businesses don't know what to do with AI, and they're not going to be able to use it productively. So there's going to be a lot of wasted investment and new distortions precisely because of the hype. And in fact, the AI capabilities right now, are not where we would want them to be. If a business is going to use AI for its full customer service, or for its investment, or for its analysis of its hiring practices, AI could be a helpful tool for certain things, but the hype makes people think that it can do much more than it truly can. And the way that I have approached that is by essentially using broadly available data, to say, "Well, what are the things that AI can do at the moment given its current capabilities and projected capabilities for the next 10 years?" And the answer to that is actually complex, but not that complex, not as complex as you might have thought-

Vass Bednar (host)

Just before you tell us how complex it is, sorry, tell us for a second, using broadly available data, we talked about how access to data and things have shifted, what does that mean for the average person? By the way, the average person is me, so you use broadly available data, where did you look?

Daron Acemoglu (guest)

I mean, meaning what are the different occupations' roles in the economy?

Vass Bednar (host)

Okay.

Daron Acemoglu (guest)

But then I think... So you need essentially three pieces of data to do this sort of analysis, plus a lot of assumptions, and I'll come back to some of those assumptions. One is: what is it that we do in the US economy? What are the functions, or tasks, that we perform that can be improved by AI or taken over by AI? Unless we know the answer to that, we can't make progress.

So then we need to have the composition of what it is that we do, and then we need to look at which of these tasks can be performed by AI. So for the first one, this is just broadly available data, say from the Bureau of Labor Statistics and other sources; government sources are all open on this and anybody can go and use them. And for what it is that AI can do, I've drawn on work that other people have done on the capabilities of large language models, as well as some of my own analysis. And essentially, those two pieces together made me conclude, A, that pretty much anything that involves interaction with the real world, custodial work, construction, blue-collar work, those are beyond the capabilities of AI for now, and they will be beyond the capabilities of AI in the next few years.

Perhaps in 20 years, AI can be integrated with robots, much better robots, and then perhaps different things can come out, but not for now. Anything that involves a heavy degree of social interaction, like this podcast, AI couldn't replace you. Not today, not next year, not in 10 years, that's for sure.

Vass Bednar (host)

That's good.

Daron Acemoglu (guest)

So I couldn't have a podcast with an AI, and you couldn't have an AI guest for your podcast. Psychiatry, true lecturing with frontier knowledge, all of these things are beyond the capabilities of AI. So the things that AI can do are essentially office tasks: things that don't involve first-order interactions with the physical world, that don't have a major social element, and that don't require a very, very high level of judgment, or wisdom, or creativity. So when you do that, you end up with something less than 5% of the US economy. So even on an optimistic read, it's only those 5% of tasks that are going to be first-order affected by AI. There might be some effects on other things, a few things that involve interactions with the real world might get indirectly affected, but the first-order effect is going to be on these less than 5% of tasks.

And then the other thing that you need is to say, "Well, how productive is AI relative to humans, or how much cost savings will it bring?" And there I rely on a few experimental studies that have used generative AI tools, in either real-world or lab settings, to show how AI can help. All of those are interpreted as, "Look, AI can do good things," so they are not written by AI skeptics in some sense; they are written by neutral, careful scientists, or sometimes people who actually are fairly optimistic about AI. But when you look at the data, the cost savings aren't enormous. You might save 10 to 15% of your costs because you're using AI instead of a human for certain tasks. So if you put these two things together, you end up with a modest number.

US productivity would increase by something like 0.6 to 0.7 percentage points within 10 years, and US GDP would increase by less than 1%. So I'll take that, that's great, but it's not revolutionary. It's not in the ballpark of what tech leaders, or some of their boosters, are promising.
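To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python of the calculation Daron describes above, using only the round numbers he mentions in this conversation (roughly 5 percent of tasks exposed to AI, and cost savings on the order of 10 to 15 percent on those tasks). The exact task weights and savings figures in his published analysis differ, so treat this as an illustration of the reasoning rather than a reproduction of his model.

```python
# Back-of-the-envelope version of the task-based arithmetic described above.
# Inputs are the rough figures mentioned in this conversation, not the precise
# numbers from Daron's published work.

exposed_task_share = 0.05   # ~5% of US economic tasks plausibly affected by current AI
avg_cost_savings = 0.14     # ~10-15% cost savings on those tasks (midpoint assumption)

# First-order aggregate productivity effect: the share of tasks affected,
# multiplied by the savings achieved on each of those tasks.
total_productivity_gain = exposed_task_share * avg_cost_savings

print(f"Implied productivity gain over roughly a decade: {total_productivity_gain:.1%}")
# Prints ~0.7%, in line with the "0.6 to 0.7 percentage points within 10 years"
# figure quoted above; the GDP effect he cites ("less than 1%") is of the same
# modest order of magnitude.
```

The point of the exercise is that even fairly generous assumptions for both inputs produce an aggregate effect measured in fractions of a percentage point, which is the contrast with the kind of projections in the clip that follows.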

TED with Cathie Wood:

We believe that is going to scale thanks to the convergence of these platforms and the explosive growth opportunities that they will provide, to more than $200 trillion. That is a 40% compound annual growth rate. It's very hard to believe, I know, and the markets, they do think we're a little crazy.

Paul Samson (host)

Right, so it's very clear analysis that's well laid out. I think that where the counterarguments come in is on the assumptions about how quickly the tech itself will move, right? And there are a lot of computer scientists saying, "Look, expect ChatGPT 5 and 6, and everything, in very quick succession." So you're assuming a certain progression of technology, right?

Daron Acemoglu (guest)

Yes, yes. And there are a number of important assumptions. Obviously, those numbers that I gave you, you could dispute them. You can say, "No, no, AI can already do things that you say it cannot do that involve social interactions. AI can do psychiatry. You don't need to sit in front of a psychologist or a neuropsychiatrist, you can do it with an AI chatbot assistant." We can disagree on that. You can say AI is going to quickly revolutionize science, and that would completely change the calculus. Or you can say, "AI cannot do these things today. I accept, I go with Daron on that." But in five years' time, it'll be able to do it because we're going to invest so much money, and the GPU capacity is going to increase sufficiently that its capabilities are going to double, triple, quadruple, whatever that means.

I'm not that optimistic on that. And moreover, even if it did, it's going to take a while for any new model to go through sort of what ChatGPT 3.5 and 4 went through, all the rough edges being sorted out, and all the reinforcement learning that needs to be done. So by the time it can actually be rolled out to businesses, it's going to take a while.

Vass Bednar (host)

Going back to social media, our frenzy for ideas and debates, and the push to translate work outside of academic journals to a range of places: we're recording this in the summer, and recently Goldman Sachs, in particular, suggested quite provocatively, or at least it was received that way, that generative AI is already hitting a ceiling, that it's way too capital- and energy-intensive, as you've sort of said, and that it can't solve complex problems right now; there's just no killer app.

Goldman Sachs:

The biggest challenge is that over the next several years alone, we're going to spend over a trillion dollars developing AI. Historically, we've always had a very cheap solution replacing a very expensive solution. Here, you have a very expensive solution that's meant to replace low-cost labor, and that doesn't even make any sense from the jump, right? And that's my biggest concern on AI at this point.

Vass Bednar (host)

How much did it matter that this analysis was coming from an investment banking firm, in terms of how it was received?

Daron Acemoglu (guest)

Partly because Goldman Sachs, a year ago, was much more optimistic. I think Goldman Sachs itself is doing some soul searching. And I think it's right, because it is critical to get that right, because Nvidia's stock has gone through the roof, one of the fastest increases in a company's stock price we have ever seen, and that's all on the basis of this hype. And we believe that Nvidia chips are going to continue to be in demand for the next several years, if not more, and that they're going to be the bottleneck for realizing this amazing technology. And if that's not the case, all of the investments that are being made in Nvidia, and similar chip companies, may be somewhat in excess. So it is right that Goldman Sachs and other investment banking firms should do some soul searching.

Vass Bednar (host)

I'm with you there.

Paul Samson (host)

So one thing that comes to my mind is that there are obviously a lot of assumptions, and we never get all the assumptions right. But when you talked about the medical profession, it makes me think that in Canada, the shortage of access to medical professionals is going to tempt people to use AI for these things, right? So it's going to be fascinating to see how this evolves.

Daron Acemoglu (guest)

The other major point of emphasis of my work, if I could bring it in, because I think it has relevance here, sort of relates to how I framed the technology discussion. The reason why I believe technology has sometimes been an amazing engine of progress, and sometimes has been an oppressive force, is because technology is highly malleable. It doesn't come with a predetermined direction, "This is how you're going to use technology." We advance our scientific knowledge, our collective knowledge, and then we decide what to do with that. And it's true at the broad level, it's true for specific technology classes as well. That's why institutions matter greatly. If we have the right institutions, it becomes much more likely that we're going to choose a direction that's broadly beneficial, and avoid the ones where technology becomes a tool in the hands of the powerful to exploit, or put down, the rest.

So this malleability is also critical for how AI will impact many sectors, health, education, manufacturing. So yes, you're absolutely right, Paul, there will be a temptation to use AI to replace medical advice, medical diagnosis, and so on. In fact, the tech sector is banking on it. But I would say that's the wrong way to do it, because actually, it doesn't have reliability, it doesn't have that level of sophistication yet. But you could use AI, if you developed it the right way, to help nurses, to help technicians. Imagine an AI assistant that can help nurses do some of the tasks that are currently left to doctors, that could be hugely useful. But that requires a different direction of development for the AI technology. The chatbot format, I think, is not appropriate for many of these more concrete uses of AI, because it adds to unreliability, it adds to unpredictability. Instead, you want very high-fidelity, specific expansions in the capabilities of workers, such as nurses, such as technicians, such as physician assistants, and that's not what we're getting.

Paul Samson (host)

Yeah, the mechanism really matters here, in terms of delivering, but also, what you're describing hints at a broader labor question across multiple sectors, as you say. And you have written that democracies are not delivering right now on jobs and prosperity, but there's a major question about how workers fit into this. What rights do they have? How do they mobilize to protect their jobs, but also effectively harness AI to match that?

Daron Acemoglu (guest)

100%. I think you cannot understand the distorted direction in which AI has gone without recognizing that AI came on the scene, developed, and flourished during a period in which there was no worker voice. Unions, or other worker organizations, had no say in how new technologies are introduced in many sectors. There was no input from workers saying, "Well, these are the things that we're doing well in our jobs, and these are the things where we need help, so these are the places where a capable tool can be very helpful in increasing productivity." So I think that is a critical ingredient for understanding the strange way in which AI became so dominant without really tapping into what the needs for AI really are.

Vass Bednar (host)

Yeah. It's kind of weird sometimes, how we seem so eager to automate away parts of the economy, or how we're excited about that.

Daron Acemoglu (guest)

Exactly, that's the hype.

Vass Bednar (host)

Yeah, that is the hype, right? Look at this one simple trick, and then it's "we got a chatbot instead." Much of your work has emphasized the importance of education and human capital in driving economic growth. How do you think that the increasing use of AI-powered educational tools will, could, should, or shouldn't impact the types of skills and knowledge that wind up being valued in the labor market?

Daron Acemoglu (guest)

Well, thank you for that question, because I think actually, education is my favorite sector to illustrate these things, before healthcare.

Vass Bednar (host)

Okay.

Daron Acemoglu (guest)

I'm glad now I get to talk about education. But let me first make one small related point, which is that AI will also change our educational needs. The kinds of skills we need to impart in schools will almost certainly change. Even though AI is right now hyped, it is true that it's going to stay with us, it's true that it's going to influence many aspects of the economy, and it will probably require new skills, and so we have to watch out for that. That's one aspect of it.

But in terms of how AI helps or hinders the teaching process, let me outline two scenarios. Scenario number one, we use generative AI more and more, for automated content, automated grading, automated tests, more online content, that instead of a teacher teaching, the teacher shows you a video from some other source, that's perhaps been, via generative AI, a little modified for what that teacher wants to do. And students also, rather than ask their teachers when they have additional questions, they go and ask ChatGPT. That's scenario number one. Scenario number two is one in which we use generative AI tools as a way of personalizing education more, whereby teachers use generative AI tools to recognize what are the specific difficulties that groups of students are having. In conjunction with other groups of teachers, they can change the content, they can change the way it's delivered. They can create sometimes bigger, sometimes smaller groups, in order to deliver much more effective teaching, especially helping those students who are falling behind in the standard organization of the classroom.

Now, you will not be surprised that I think scenario two has much greater capabilities of revolutionizing, in a positive way, education. And guess which one we are investing in right now?

Vass Bednar (host)

Number one.

Daron Acemoglu (guest)

Number one.

Vass Bednar (host)

100%, yeah.

Daron Acemoglu (guest)

100%. ChatGPT's launch was very much targeted at convincing a lot of students to use ChatGPT in exactly that way. That was where the hype came in.

TEDxSioux Falls with Natasha Berg:

When one of them started sharing how she had recently caught a student cheating on an essay using this new form of artificial intelligence called ChatGPT. She watched in awe, astonishment, and a bit of mild horror, as this program constructed an entire essay for the student with a click of a button.

Daron Acemoglu (guest)

And there was already, before ChatGPT, a huge industry investing in online content, automated grading, et cetera, and that has received a boost as well. To the best of my knowledge, no ed tech company is really preparing us towards that more personalized education. They're not investing in the tools for that in any way. Most teachers, most schools wouldn't be interested in it because many of them are in a mindset of let's try to cut costs, which requires, of course, reducing the teachers, rather than giving teachers more tools and having more teachers to deal with that personalization aspect.

Paul Samson (host)

That's super, thanks for outlining that. Fascinating. And moving so quickly, right? When you talk to universities, they're just in full-on panic mode, in a way, as to how they're going to implement the way forward. Shifting to a different issue, and I recognize it's not your particular area of expertise, but it just always comes up everywhere, and that's where the energy footprint is going, particularly now with generative AI: the huge demand for electricity, the increasing use of data centers. On the one hand, there's a positive element about this focusing so much attention on energy, because it's needed for all kinds of reasons. But on the other hand, there's potentially a risk here of increased costs, energy shortages, and things like that. Is this an area you've thought about?

Daron Acemoglu (guest)

Yeah, I have also worked on the energy transition. I haven't worked, per se, on how the energy needs of the tech sector could be changed. But it is undeniable, it's quite clear, that digital infrastructure is already very energy-intensive. Cloud computing, huge clusters of computers, and AI is increasing that tremendously, because the number crunching necessary for the pre-training of these huge models is very large, but operating these models is also demanding. So every time you ask a question to ChatGPT, or to Anthropic, it has to go through a number of computations, which requires a lot of energy.

CBC News, About That:

Most people do not realize that a computer is basically a radiator. Every single unit of electricity that's consumed by a computer is transformed into heat. So the 100 megawatts of electricity that we're consuming is producing 100 megawatts of heat.

Paul Samson (host)

Right, and there are more of them every day.

Daron Acemoglu (guest)

There are more of them every day. So then the question is, if this is really so important for humanity to do, and this is the only path, then we'll have to take that on the chin. We'll have to say, "Okay, fine. We have to do other things in order to reduce our carbon footprint," et cetera. But if you agree, or at least entertain the thought that we already talked about, that this is partly hyped, then a lot of that energy consumption is actually wasted. It's not necessary. It's not adding to anything. It's not so important for everybody and their dog to ask lots of questions to ChatGPT at the moment. It's not adding to human knowledge. It's not so important for the training of the models. So just this way in which we've made ChatGPT so central, that everybody's forced to interact with it, and we've also encouraged VC firms, and other big money, to invest trillions of dollars, all of this is actually adding up to a lot of wasted resources and wasted energy.

Vass Bednar (host)

It's funny to think of it as wasted energy, like in both senses of the word.

Daron Acemoglu (guest)

Well, it is. And the most ironic part of it is that early on, and I'm sure some people are still claiming this, AI was supposed to help us solve the climate crisis. I don't know how, but yes.

Vass Bednar (host)

I mean, yeah, talk about broken promises.

Paul Samson (host)

You're listening to Policy Prompt, a podcast from the Centre for International Governance Innovation. Policy Prompt goes deep, with extensive interviews with prominent international scholars, writers, policymakers, business leaders, and technologists, as we examine what it means for our public policies and society as a whole. Our goal at Policy Prompt is to explore effective policy solutions for pressing global challenges. Tune into Policy Prompt wherever you listen to podcasts.

Vass Bednar (host)

I want to go back to your intellectual energy for a second, because one of the many things you do that I admire quite a bit and appreciate is that you translate your thinking into policy ideas. And back in the spring, you had a policy memo with your frequent co-author, Simon Johnson, arguing that we need a digital advertising tax. Could you just tell us a little bit about how that would work? It's certainly on our minds here in Canada, as we lurch forward with the digital services tax.

Daron Acemoglu (guest)

I would love some government to implement it, or experiment with it. So the foundations of that are twofold. One is static, the other one is dynamic.

The static one is that the way companies make money out of digital advertising, which is quite common, is that they take users' data, and then using that data they provide, prioritize, and promote content that will be specifically appealing to them, grab their attention, perhaps emotionally trigger them, so that they spend more time on the platform, and that time is then monetized via digital ads, for which sellers, advertisers, pay. Now, if you look at this way of monetizing it, it does create a variety of adverse social consequences, emotional problems, all of that mental health crisis. I mean, I think this is a very, very important topic we are not talking enough about. In the Western world, the mental health problem has really reached alarming proportions, and I don't think it can be understood without social media.

TODAY:

When asked about the impact of social media on their body image, 46% of teens 13 to 17, said that social media makes them feel worse. Back in May, the US Surgeon General issued a warning, calling the youth mental health crisis the defining public health issue of our time.

Daron Acemoglu (guest)

All of the extremism, online fights, and polarization are not just caused by social media, but this sort of ecosystem contributes to them. So in economics, if something creates these negative social effects, these negative externalities, so to speak, you tend to put a tax on it, the way that we tax smoking, and the same way that we should be taxing carbon emissions, as European countries do. So that's the static case for taxing digital ads.

The dynamic case, to me, is even more important, because as I've emphasized, I believe technology, especially digital technology and AI, are highly malleable, so their future direction is very much up for grabs. If a particular type of business model becomes dominant, then it has a disproportionate effect on that future direction. And right now, the most dominant business model in Silicon Valley and beyond is this digital ad monetized model. It creates no room for alternatives. So if I wanted to enter with a new social media company that, for example, attempted to do things that have been done somewhat successfully in Taiwan, which creates much more of a public square, more reliable information, how am I going to monetize that? Because if I set up a subscription system, people won't pay for a small platform when there is this free Facebook, Twitter, et cetera. And if I try to redo something like Wikipedia today, that will be impossible. So essentially, the digital ads create a market structure in which alternatives cannot survive.

So the dynamic case is that by taxing digital ads, you don't just discourage this statically harmful thing, but you also make the system more competitive by creating room for alternatives to enter. So for both of these reasons, I think you need a healthy digital advertising tax, so that the revenues companies raise from digital advertising are taxed at something like 20% or 30%. That's of course a lot of money, and then you can use that money for a lot of things. One of the things you can use it for is actually publicly funded AI research that perhaps targets more beneficial things. We can do more on AI education for regular people, for students, the resources that teachers would need for the more beneficial path that I outlined. That would be a double whammy. But of course, what it would mean is that the leading tech companies, Facebook, Google, Microsoft, even Amazon, would pay billions of dollars, tens of billions of dollars, in taxes. And of course, there'll be a lot of pushback against that.

Vass Bednar (host)

We're used to that in Canada.

Paul Samson (host)

That pays for a lot of legal fees to defend against such taxes, for sure.

Yeah, so we're just wrapping up. But first, in terms of the quality and the quantity of your work, it is really quite astounding how much you can produce, so maybe you can share some tricks of the trade with everyone.

Daron Acemoglu (guest)

Great co-authors. I have great co-authors and collaborators.

Paul Samson (host)

Are you tempted to use any of the AI tools, like some of the unique image generation?

Daron Acemoglu (guest)

No. I have experimented with large language models. They weren't horrible, but they weren't that great either, so no, I do not use any AI tools.

Paul Samson (host)

Right. And a related question is just your policy influence. You've got a huge body of work out there. People use it, they look at it. Do you feel you're having a lot of policy influence?

Daron Acemoglu (guest)

I wouldn't say so. I'm delighted that you think some people follow it. I think some of my work is read, but I feel I have minuscule policy influence, and that may just be right. But also, I think in the current US environment, it is very difficult to have an influence on policy, it's become so polarized and so professionalized that neither of the two parties would listen to outside experts. I think in Europe, it's different, and I feel like European policymakers pay more attention to ideas, European and UK policymakers, than American ones.

Paul Samson (host)

Well, just on that, we're hoping there's an opportunity with Canada hosting the G-7 next year, to focus some expertise in policy-

Daron Acemoglu (guest)

Yeah, I said Canada also. I think Canadian policymakers-

Vass Bednar (host)

Yeah, throw Canada a bone.

Daron Acemoglu (guest)

Have more knowledge and interest.

Paul Samson (host)

Excellent, thank you. Vass, over to you.

Vass Bednar (host)

If anyone needs some of that extra hype that AI's getting, it's definitely Canada. We could use that boost now and then. But you don't see, or think, you're having that much of a policy influence, and yet you still put your ideas out there for people to interact with and engage with, so thank you for that. I'm left curious, what's next? What's on the horizon? I read that you intend to explore hierarchies in your next book. As you engage in that deeper work, what's it like?

Daron Acemoglu (guest)

Yeah, I am. My big passion is that I want to understand the foundations of institutions, especially democratic institutions, better, and in what way democracy has gone wrong, in the sense that it has started losing people's support. And that, I think, cannot be done just by looking at the present; I think we have to go back and understand the foundations, in terms of human psychology and human history, of where democratic institutions have come from, when they have been successful, and what new democratic institutions could look like in the future. And so that's about egalitarianism, that's about hierarchy, that's about social power again.

Vass Bednar (host)

Well, we'll keep our eyes out for that too. And good luck with that work.

Daron Acemoglu (guest)

Thank you.

Paul Samson (host)

Yeah, we'll stay tuned for that one, for sure. But Daron, thanks so much for spending your time with us on this today, and delighted to chat with you. It was super illuminating.

Daron Acemoglu (guest)

Thank you, Vass. Thank you, Paul. It was my pleasure.

Vass Bednar (host)

Yeah, thanks.

An element that was quite fascinating for me was his reflections on the work of a public intellectual, and how it's evolved over time: he's absolutely rooted in data and the academic rigor that his colleagues would expect, but he's willing to bridge and broker those ideas for a range of audiences, not only online, but also to package them directly for a policy community. And I don't think, personally, that leading thinkers can expect that others will necessarily take on that labor for them. And I wanted to use the language of work, of labor, because it is a form of work that's often invisible, or presumed, and just doesn't happen otherwise.

So the way he's expanded his role over time, I think is really cool. And the other thing for me, something I'm always curious about with economists, is that there's this textbook way that you learn about the capital-E economy. And then you're in the Economist Club, you get it, you now understand the economy. But he has that openness to recognize that the economy and the incentives shift, that unexpected elements happen, and he goes back to history and leans on other disciplines in order to better understand power. Again, that interdisciplinarity is something I really admire and encourage, and I think it enriches his work and his thinking so much.

Paul Samson (host)

Yeah, that's a great summary. He's a full card-carrying economist, no question about it, right? He's rigorous, he's widely respected by the more technical economists, and he relies a lot on hardcore data and analysis. So he's got that credibility, but he remains open-minded in the way that he approaches the issues. He's always looking for that next data set, but he knows that these things are evolving, and I find that is one of the attractions to his work. I think that people find he acknowledges that there are still uncertainties about a lot of these things, and that things continue to evolve. So we'll stay tuned on his work, and would encourage listeners to look up some of the articles he's written. They're very accessible, as you said, Vass. He speaks to people clearly, and with a lot of data and rigor behind it, and so we want to try to help people like that be involved in the policy debates, even more than they are now. And he's clearly involved, but I think we can try to help give them additional platforms and ways to plug in.

Vass Bednar (host)

Yeah. And he said he is not a party pooper; I'd say he's much more of a party animal. He's undeterred, right? He's facing these really tough questions, and also our disappointment: why is the economy getting tougher and worse for so many people? Why aren't we seeing the rainbows and lollipops that we've been promised by venture capitalists? So the fact that he's not turning away from that, and is inspired to dig in, I think is also really encouraging.

Paul Samson (host)

Yeah. And he is banging the drum about the urgency of it, because the system right now has so much momentum behind it, so much media hype. I think that was a great point, that the biggest hypers of all are the media. They love a story that... I mean, not all media.

Vass Bednar (host)

We're getting better. It's getting better, getting more critical. But certainly since the 2000s, there's been empirical work looking at the neutrality, or the positivity, of how tech was covered, and the kind of critical eye we now turn on surveillance forms of technology just didn't happen back then. We were kind of-

Paul Samson (host)

That's fair. That's fair.

Vass Bednar (host)

Yeah.

Paul Samson (host)

Yeah. The term media is too broad, right? It's more of like a mainstream, traditional media on these conversations, but the broader media is very much out there, as you say. So let's keep those discussions rolling, and there's lots more to come in these areas, and on these topics.

Vass Bednar (host)

Policy Prompt is produced by me, Vass Bednar, and Paul Samson. Tim Lewis and Mel Wiersma are our technical producers. Background research is contributed by Reanne Cayenne. Marketing by Kahlan Thomson. Brand design by Abhilasha Dewan. And creative direction by Som Tsoi. The original theme music is by Joshua Snethlage. Sound mixing by François Goudreault. And special thanks to creative consultant Ken Ogasawara. Please subscribe and rate Policy Prompt wherever you listen to podcasts, and stay tuned for future episodes.