Episode 11

How to Predict the Future with Accuracy (throwing darts with Robert de Neufville)

Expect surprises in this brave new world.


Episode Description


Warren Buffett once said he would rather trust his money to monkeys throwing darts than financial advisers. So how do the monkeys’ chances of hitting the target stack up against those of, say, pollsters, Magic 8 Balls or star charts? Maybe the monkeys have practised.

Meet Robert de Neufville, who is super at forecasting: someone whose predictions have proved far more accurate than those of regular forecasters and regularly outperform intelligence analysts’. Robert holds degrees in government and political science from Harvard and Berkeley, co-hosts the NonProphets: (Super)forecasting Podcast and has extensive experience in analyzing existential risk. Robert and hosts Vass and Paul discuss everything from Buffett’s monkeys and Moneyball to the importance of parking your biases, knowing what to research and the difference between hype and meaningful signal, to the value of expertise, new things to worry about and the need to stay skeptical.

Credits:

Policy Prompt is produced by Vass Bednar and Paul Samson. Our technical producers are Tim Lewis and Melanie DeBonte. Fact-checking and background research provided by Reanne Cayenne. Marketing by Kahlan Thomson. Brand design by Abhilasha Dewan and creative direction by Som Tsoi.

Original music by Joshua Snethlage.

Sound mix and mastering by François Goudreault.

Special thanks to creative consultant Ken Ogasawara.

Be sure to follow us on social media.

Listen to new episodes of Policy Prompt biweekly on major podcast platforms. Questions, comments or suggestions? Reach out to CIGI’s Policy Prompt team at [email protected].


63 Minutes

Published February 24, 2025

Featuring

Robert de Neufville

Chapters

1 0:00:00

Welcome to CIGI’s Policy Prompt

2 0:01:29

Introduction to Robert de Neufville, a “super” forecaster, who joins Vass and Paul to demystify the realities of forecasting

3 0:02:33

How did Robert get into superforecasting?

4 0:04:32

Can anyone forecast geopolitical questions? An intelligence agency launched a tournament to figure that out

5 0:08:01

Where do we situate judgmental forecasting relative to, say, systems that aggregate poll results?

6 0:09:20

On the techniques of superforecasters: the best guide to what’s going to happen in the future is the past

7 0:19:15

What is hype, and what are meaningful signals?

8 0:22:45

The difficulty in forecasting events for which there are no historical precedents

9 0:25:17

What kinds of events have surprised Robert, and what does he worry about now?

10 0:36:12

Why we should rely on others’ expertise

11 0:40:54

How does cognitive forecasting link up to policy making or the intelligence community?

12 0:48:30

How does one start to think about engaging in this work of forecasting? Is there specific training?

13 0:54:57

What’s something listeners should keep in mind about the future? And, a note of optimism

14 0:57:11

Vass and Paul debrief


Paul Samson (host)

Hey, Vass, I've got a question for you to start off today. Can you predict the future? Have you ever predicted something correctly? How did you do it?

Vass Bednar (host)

I mean, I want to say that, yeah, for sure. I've predicted things correctly, but-

Paul Samson (host)

Nice, [inaudible 00:00:20].

Vass Bednar (host)

... I don't think I wrote them down. I don't have the evidence for you. I do love to make little bets in conversation, ask people what they think is going to happen with something and sort of register mine. But I never put any money down or anything. I do rely daily on probabilistic weather predictions, and I do have friends who really deeply buy into star charts as being deterministic of someone's life, and I'm superstitious. But other than that, I'm no magic eight-ball.

Paul Samson (host)

Well, deterministic universe, we could maybe line up a guest on that, but today we're going to talk about forecasting and forecasters. It's been a tough time for them on most fronts in recent years, like election forecasts, economic forecasts, et cetera. It reminds me of one thing, which is Warren Buffett once said, "What are you better off with, a financial analyst or a monkey throwing darts at the stock listings?" And his estimation was, you're actually better off with the monkey because they cost less, right?

Vass Bednar (host)

Okay.

Paul Samson (host)

So who are these forecasters anyway?

Vass Bednar (host)

Well, we're not interviewing a monkey today. We are in luck. We've got a great guest who can explain and sort of demystify these realities of forecasting. Robert de Neufville is a superforecaster and more.

Paul Samson (host)

Yeah. And what sets superforecasters apart is that they're able to consistently outperform others, even experts in their own areas. That's what it takes to be a superforecaster.

Vass Bednar (host)

Now, Robert became a superforecaster 11 years ago in 2014. He did that by being among the top forecasters in a major forecasting tournament. We're going to ask him about that. He's researched different issues, including the risk of catastrophes severe enough to threaten human civilization. He's fun at dinner parties. He's contributed to the Economist, the Washington Monthly, the National Interest and California Magazine, and he has his own podcast, but today we're welcoming him to this one.

Paul Samson (host)

Indeed. Robert, welcome to Policy Prompt.

Robert de Neufville (guest)

Thanks. It's great to be here. And I do want to say, I am fun at parties.

Vass Bednar (host)

Robert, do you remember where you were when you first heard this term, superforecasting? What did it spark for you? How did you get into it?

Robert de Neufville (guest)

Memories are always a little bit confusing when you go a little ways back, but I think the first time I heard it was an interview on NPR with a forecaster named Elaine Rich, who was great, who I later came to know. Basically talking about the Good Judgment Project, which is what I signed up for immediately after hearing that. I thought I would be good at it, before-

Vass Bednar (host)

Why?

Robert de Neufville (guest)

... I signed up for that.

Vass Bednar (host)

Wait, why did you think you'd be good at it?

Robert de Neufville (guest)

Because I estimate probabilities anyway. I think for me it's a little bit of maybe a neurotic habit. If I'm stressed about something, I ask myself, what's the realistic probability that it'll actually happen? And that's a way of making myself feel like I'm more in control of the situation or something. But I've always sort of done that. But also, when I was in grad school and sort of failing to write my dissertation, I spent a lot of time on an early ancestor of those fantasy sports sites called Pro Trade. And I basically won every competition initially. They changed the rules at some point and made it so that you had to spend a lot more time than I should have-

Vass Bednar (host)

Because of you.

Robert de Neufville (guest)

... while I was trying to write my dissertation. But initially, it was essentially predicting how well athletes would do, how well football players would do, their stats, in advance, and it wasn't hard to win. I think I was one of the only people on this small site who was actually using any systematic method for estimating, on the basis of past performance, how they'd perform in the future, and it gave me a huge advantage. So, that's basically the same thing that I ended up doing in the Good Judgment Project.

Paul Samson (host)

So there's obviously rigor to this. It's not just some loose judgment methodology; it has to be rigorous. Can you tell us what superforecasting is, and can you talk a little bit about that famous or infamous tournament that determined that superforecasters were at least 50% more accurate than regular forecasters?

Robert de Neufville (guest)

Yeah, there's a bunch of questions there. I'll start with the term superforecasting. I actually don't know exactly what that means. I'm not being facetious when I say that.

Vass Bednar (host)

Yeah. Was it-

Robert de Neufville (guest)

People talk about superforecasting. When they did this study, they did this... All right, I will back up. The tournament was basically set up by a government agency called IARPA, which is sort of the intelligence agency equivalent to DARPA that-

Paul Samson (host)

Correct. Yeah.

Robert de Neufville (guest)

... is the Defense Advanced Research Projects Agency. And they wanted to figure out, could anyone forecast geopolitical questions? Because it turns out people are pretty bad at it as far as... They hadn't found a lot of people who were good at it. And the Good Judgment Project, originally the tournament wasn't the Good Judgment Project; that was the name of one of the entries in it, run by Philip Tetlock and Barbara Mellers. They identified a bunch of people who were consistently good at it, and they called those people superforecasters. So when I joined a little bit after, that process had already started. I really wanted to be a superforecaster, I worked really hard at it, and I was among the top people. I qualified as one of the people who was consistently better than others. But that's what I mean when I say I don't know what superforecasting means.

So it's a registered trademark of Good Judgment Inc, which is the professional spinoff of the Good Judgment Project. And it's a great trademark. They market it a lot, but it's not totally clear to me what superforecasting, as a gerund or whatever that is, means. So superforecasters are people who are able to predict more accurately, more consistently, than most people, right? But superforecasting is sometimes used to refer to what I would generally call judgmental forecasting, which is sort of the thing that I'm doing, which is essentially making a probabilistic estimate of what's likely to happen. And it's sometimes used, I think, to refer to the techniques that good forecasters use. It's hard to specify exactly what those techniques are. They sort of worked backwards from what good forecasters did.

Vass Bednar (host)

Yeah. And tried to give it a name.

Robert de Neufville (guest)

It's not identical to what superforecasters are in some sense. So they market this superforecasting thing, they'll teach you how to do it better. I'm not sure it's a rigorously defined academic term.

Vass Bednar (host)

Okay. Well, it's definitely not just a party trick, right? We've already determined you're fun at parties and we want to have you at parties. It's also-

Robert de Neufville (guest)

I don't actually know any party tricks.

Vass Bednar (host)

It's also different from the pleasant ambiguity of a horoscope or a fortune cookie. So there are techniques and terms to this trade. Maybe you can help us benchmark it against things like websites that take the average across a suite of public opinion polls to make a prediction. Where do we situate judgmental forecasting against that kind of analysis?

Robert de Neufville (guest)

Well, public opinion polls aren't really forecasts at all. You can use them to make a forecast, but a poll is basically a snapshot of what people think at a given time, right? And we think that if we do a poll of how many people are going to vote for Donald Trump or Kamala Harris, for example, that it'll reflect how many people ultimately act on that intention to vote, but there is a difference there, right? I am extrapolating from a snapshot, which is already a statistical artifact and maybe not totally accurate in the first place. So yeah, polls are really useful in making forecasts, but they're not forecasts.

Paul Samson (host)

So one thing I wanted to dive into a little bit as I was trying to get my head around it: what kept coming to me was that superforecasters are really disciplined at parking their biases and preconceptions. That's a trained art a little bit, right? We all fall into bias from time to time, but presumably superforecasters are better at setting it aside, and that allows them to outperform experts in their own domains. But what does that template look like? Is there some number-crunching, probabilistic math going on here? Or does each superforecaster have their own template?

Robert de Neufville (guest)

I think each superforecaster has their own template. It does involve a lot of judgment, which means it's hard to say, "Here are the brute-force calculations that you could do that would always get you to an accurate answer," which is why it's hard even for the top AI models to compete with the best human forecasters right now.

There is a technique. I think it does vary a lot from forecaster to forecaster. I'm not constantly trying to do cognitive debiasing. I try to be aware of my biases and know certain topics, like American elections, where maybe I'm too invested to see them clearly. But there are techniques, right? I often compare it to a skill like playing basketball. Some people have more natural ability; they might be taller or more athletic or something. But there are also techniques you can use. There's shooting form you can use to ensure that your shot goes straight every time. So, we do some of those things. In general, the best guide to what's going to happen in the future is the past. If you ask me what's going to happen in a certain situation, the first thing and the most important thing I'm going to do is look back at similar situations in the past and try to establish a base rate: how often things have happened in those similar situations.

That's a little bit complicated. That itself requires judgment, because I have to decide what situations in the past are meaningfully similar to the question you're asking about. And there may be a lot of different ways to do that. And sometimes a good thing to do is to look at it from a bunch of different angles and triangulate the probability by saying, "Well, maybe it's like these situations, maybe it's like those situations," if I'm not sure what's a good comparison. But fundamentally, I am trying to see how often things have happened in the past, establishing sort of a normal rate at which certain things happen, and then working from there.
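For readers who want to see the arithmetic behind the base-rate idea Robert describes, here is a minimal sketch in Python. The reference classes, counts and weights are invented for illustration, not drawn from the episode; the point is only that you count how often the outcome occurred in past situations you judge to be similar, then triangulate across several candidate comparisons.

```python
# Hypothetical illustration of base-rate forecasting. The reference classes,
# counts and weights below are made up for the example.

def base_rate(occurrences: int, opportunities: int) -> float:
    """Fraction of comparable past situations in which the event happened."""
    return occurrences / opportunities

# Two candidate reference classes for the same question, because it's a
# judgment call which past situations are "meaningfully similar".
incumbent_losses = base_rate(occurrences=7, opportunities=20)   # e.g., incumbent-party losses
post_crisis_losses = base_rate(occurrences=5, opportunities=8)  # e.g., elections after an economic shock

# Triangulate: weight the reference classes by how comparable you judge them to be.
weights = {"incumbent": 0.6, "post_crisis": 0.4}
estimate = (weights["incumbent"] * incumbent_losses
            + weights["post_crisis"] * post_crisis_losses)

print(f"Blended base-rate estimate: {estimate:.0%}")  # ~46% with these made-up numbers
```

The blend is the judgment call Robert mentions: if you are unsure which comparison is right, you lean on several and weight them rather than betting everything on one reference class.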

Vass Bednar (host)

Okay. Maybe we can talk a little bit more about your habits when you're on this forecasting court, right? What do you read and pay attention to as you sort of keep your finger on the pulse and keep all this historical knowledge at your fingertips?

Robert de Neufville (guest)

Well, a lot of it is about knowing what to research and being a good researcher in a certain way. By good researcher, I mean knowing what information is likely to be meaningfully useful. The research shows, basically... you might hear this idea that experts are no good. They've been compared to the dart-throwing chimp that you were talking about: you might be a subject matter expert, but your forecast about your domain isn't necessarily any better than [inaudible 00:12:03]. The fact is that you don't often need great expertise to forecast something. A smart person can come up to speed on the essentials for a lot of questions. So where forecasters are good isn't necessarily their subject matter expertise, although as a group, we tend to be widely read across a wide range of things.

But specifically, it's the skill of estimating probability given a certain set of facts. So, a lot of what I'll do when I'm given a new subject... I mean, elections. I've done a lot of election forecasting, right? And not just in the US but in countries whose politics I don't know very much about. But you can look at the frequency with which things have happened in other elections, do that research and figure out roughly how often things happen. So that's the thing I will do to come up to speed on a topic I don't know about. And in a lot of cases, I really don't know that much about it in advance.

Vass Bednar (host)

Policy Prompt is produced by the Centre for International Governance Innovation. CIGI is a nonpartisan think tank based in Waterloo, Canada with an international network of fellows, experts, and contributors. CIGI tackles the governance challenges and opportunities of data and digital technologies including AI and their impact on the economy, security, democracy, and ultimately our societies. Learn more at cigionline.org.

Paul Samson (host)

So let's dive into a specific example of this as well. One thing that we were looking at was this new crowd forecasting site called Glimt, which was launched by the Swedish Defense Research Agency, in cooperation with and with support from Ukraine. You probably know about it, Robert. And the questions they've been asking for input on over the last couple of days were: A, will Russia withdraw from Syria? B, will Russia use a nuke on Ukraine in 2025? C, will Russians protest the war in 2025? Or D, what percentage of the Donbas will Russia control in the fall? These things seem to be proliferating. Do you see them as useful tools in all of this, or is it a side issue?

Vass Bednar (host)

I thought it was bonkers to read about too, by the way, right? Asking citizens, what do you think will happen with these major geopolitical questions? Sorry, I'm not the guest.

Paul Samson (host)

Yeah. [inaudible 00:14:41].

Robert de Neufville (guest)

No, you said I probably know about that; I don't know about that. I feel like I should know about that site. There are a lot of things like that. In some ways, fundamentally, those are the questions that we were asked in the Good Judgment Project, and we were just regular citizens. You can ask regular citizens those questions, and often you'll get a lot of garbage, because some people really are not thinking about it in a systematic way. But those are the questions that a good forecaster can produce meaningful, useful probabilities about. I don't know exactly how the site works, who's getting asked, how they're aggregating the probabilities, what signal they're getting, or how useful that is. But that's the thing that superforecasters look at, for sure. Those questions.

Vass Bednar (host)

You mentioned garbage. I mean, do you pay attention to garbage? Do you need to factor that into your work? I'm curious what people consistently get wrong about this discipline, your discipline, and what annoys you about that?

Robert de Neufville (guest)

Oh, that annoys me. I'm annoyed by a lot of stuff.

Vass Bednar (host)

Me too, by the way.

Paul Samson (host)

[inaudible 00:15:55]. Yeah.

Robert de Neufville (guest)

Yeah, there's a lot of annoying stuff. I'm actually not that annoyed by it, but there are definitely times when someone will make a forecast, often a pundit, and I'll think, "Oh, that's just totally wrong." I don't even know that much about it, but that's insane. Well, first of all, pundits, right? And this is something talked about a lot in the research: they have an incentive to spew out hot takes, right?

Vass Bednar (host)

Mm-hmm.

Robert de Neufville (guest)

So a sober forecast probably isn't that interesting. So, a lot of the forecasts you see on TV or wherever, by some talking head, are terrible, terrible. And they don't pay any price for it because nobody remembers that they got it wrong. And they'll often just go back and say they got it right. So, nobody keeps score. One of the things that's valuable about the research is they started keeping score and seeing who actually was doing a good job of it.

Vass Bednar (host)

Who's keeping score, though? Like who is they? It doesn't seem like it's just everyday people watching, or just-

Robert de Neufville (guest)

Sure.

Vass Bednar (host)

... tell me more about who's keeping score.

Robert de Neufville (guest)

Well, in the original project, they would evaluate our forecasts by how closely they fit outcomes. And essentially what you do is, if you're asking about a discrete event like, "Will Trump win the election?", either that happens or it doesn't. So, if I say there's a 50% chance that he'll win, that's a terrible forecast; that's like flipping a coin. It's not useful. If I say, "There's a 60% chance that he'll win," and he does, well, you would say, "The outcome was a one, it happened with 100% frequency," and I said 60%. That's not very useful on its own, just as one forecast. But if I make a hundred of those forecasts, you can basically see how well my forecasts are correlated with outcomes.

And essentially, the more closely they're correlated... you may have heard of a Brier score; this is what a Brier score does. The more closely your forecasts track outcomes, the lower your Brier score is, and the better your forecasts were over that big collection. So that's how they keep score. Now that I'm making forecasts on my private Substack, or my personal Substack, nobody is really keeping score except for me-

Paul Samson (host)

And the data centers.

Robert de Neufville (guest)

No one's reporting on it other than me, but that's how you do it.
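For readers who want the arithmetic behind the scorekeeping Robert describes, here is a minimal sketch of a binary Brier score in Python: the mean squared difference between each probability forecast and the 0/1 outcome, where lower is better. The forecasts and outcomes are invented for illustration.

```python
# Brier score for binary questions: mean squared error between the forecast
# probability and the outcome (1 if it happened, 0 if it didn't). Lower is better.
# The forecasts and outcomes here are invented for illustration.

def brier_score(forecasts: list[float], outcomes: list[int]) -> float:
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

forecasts = [0.60, 0.10, 0.90, 0.50]  # probabilities given for four events
outcomes  = [1,    0,    1,    0]     # what actually happened

print(brier_score(forecasts, outcomes))            # 0.1075
print(brier_score([0.5] * 4, outcomes))            # 0.25: coin-flipping every question
print(brier_score([1.0, 0.0, 1.0, 0.0], outcomes)) # 0.0: a perfect forecaster
```

As the example shows, answering 50% on everything scores 0.25, which is why a single forecast tells you little; it is the accumulation over many questions that separates good forecasters from coin-flippers.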

Paul Samson (host)

Obviously with the internet, with Twitter or X, all these social media platforms, there are a lot of couch or sofa-surfing predictors and forecasters out there. One of the topics that comes up all the time, of course, is artificial general intelligence. How close are we? When is it happening? And a lot of the companies themselves fuel this, right? And so Sam Altman of OpenAI recently came out on X saying the rumors about AGI are Twitter hype, and there's lots of cool stuff coming, but chill out and cut your expectations by a hundred x, right? There's so much out there. It comes back to that question from before: what do you pay attention to, and what do you just totally ignore? And maybe a comment on AGI as well, if you're willing.

Robert de Neufville (guest)

Yeah. No, it's a good question. I think a lot of the skill of a forecaster is figuring out what's hype and what is meaningful signal, finding the signal in the noise. It's tricky with AGI, well, for a variety of reasons, but I hadn't heard that Sam Altman said that. Typically-

Paul Samson (host)

It's like two days ago or something, right? It was very recent.

Robert de Neufville (guest)

On some level it's not surprising, but Sam Altman and the other heads of AI companies are trying to raise a staggering, staggering amount of money. So, they've actually been talking some of the stuff up.

Speaker 1:

According to recent online posts from OpenAI CEO Sam Altman, one of the most meaningful AI breakthroughs may be coming faster than most people would believe.

Robert de Neufville (guest)

And I tend to discount that, because if you're trying to raise, whatever it is, $2 trillion, you need to tell investors a pretty good story about what they're going to get back from it. And it's like trying to buy a car or something. They won't necessarily be totally honest with you about what they're selling.

Vass Bednar (host)

What? Car salesmen are being totally honest? Guys.

Robert de Neufville (guest)

Well, that's the thing is if somebody-

Vass Bednar (host)

I get it.

Robert de Neufville (guest)

Something seems like a sales pitch to me.

Paul Samson (host)

The new used car salesmen of this age are the promoters of a lot of these fancy toys, right?

Robert de Neufville (guest)

That's what I think. But I will say, I have heard buzz from people who are more connected to artificial intelligence. From an AI researcher I was talking to actually just last night: people at OpenAI are whispering, "We're close," and that's where some of the hype is coming from now. I don't think that means we're necessarily close. You would expect some people to start saying we're close a little early. Also, it's not that clear that AGI is a well-defined category or-

Paul Samson (host)

It's not.

Robert de Neufville (guest)

... what exactly happens when we get there. This is all talked about in vague, loose terms often, but I think we probably are close to some meaningful breakthrough. What's happening with the current models is very impressive, and they're systematically flawed in some ways. But those flaws are probably patchable, and could be patched pretty quickly by using things like chain-of-thought techniques and maybe, I don't know, expanding the context window so they can be more aware of the world around them. So these things may make a big difference. Now, I personally don't think that we're about to reach some threshold that's going to lead to God-like magic AI powers. I'm skeptical of that story, although very smart people disagree with me on that. But I think there's a good chance that sometime in the next three to five years, we're going to see a pretty impressive breakthrough that will transform some things. I want to add something about that before you ask me another question, though.

Vass Bednar (host)

Sure.

Robert de Neufville (guest)

Things like AGI are very difficult to forecast because... I said that the best guide to the future is the past, right? There's no past on this. We've never really invented artificial general intelligence. So, I can't tell you how often that happens. I can maybe look at some other technological breakthroughs and try to extrapolate, this is how often technological breakthroughs happen.

Paul Samson (host)

Like the printing press or something.

Robert de Neufville (guest)

There's a lot about this situation that is unprecedented and unique. That's also a problem with trying to predict something like the use of a nuclear weapon, which has barely ever happened. So, there are certain things that are so rare, and in some cases there was no possibility they could even have happened before: with nuclear weapons, there was no possibility it could have happened before they were invented, and there was no possibility AGI could have been invented a hundred years ago. So it's very difficult for me to say, "Well, this is how often that kind of thing happens." For unprecedented things like that, maybe no one's good at forecasting them. I can use my reasoning, but that's not really what the best forecasters have mostly been doing.

Vass Bednar (host)

Well, do you want to stay in the space of catastrophe a little bit? Maybe we could make some of this work more concrete. You mentioned talking to an AI insider just the other night. I'm super curious to know what you're keeping your eyes and your mind on right now. Maybe for a future post on the private substack. I know it's public, but I just like that you mentioned it.

Robert de Neufville (guest)

I don't know why it's going to [inaudible 00:23:41].

Vass Bednar (host)

The combination of private and substack personal, I get it. Yeah, maybe walk us through it a little bit more, some of your thinking as detailed as it is appropriate to be.

Robert de Neufville (guest)

Yeah. So as you mentioned up front, I used to work as a researcher for a nonprofit that looked at the risk of global catastrophes, things that are large enough to affect the entire planet and maybe change the course of human history or something. And one thing is that for most of our history, most of the global catastrophes you could think of would have been natural, like a supervolcano or a meteor or something like that, or a pandemic, which could be artificially induced but has a natural component in any case.

But as technology has gotten more and more impressive, we have a new capacity to harm ourselves potentially with our powerful technologies like nuclear weapons. Potentially like strong artificial intelligence that we're not totally in control of, maybe with biologically engineered diseases, right? So there are new things to worry about, and now probably the big risk to ourselves comes from us. And as it gets easier and easier to do certain things like maybe engineer a disease in your garage, then you would think the chance of these things happening goes up. So, that's always been my big concern. What am I worried about right now? I'm a little anxious about artificial intelligence. As I said, I don't really believe the runaway singularity story, but it's not totally clear that we can control the impact of what we unleash on the world.

And so I have some anxiety about that. And nuclear weapons are always still a concern. One concern is related to artificial intelligence: some people think there is or should be an AI race between the US and China.

Speaker 3:

There's a Chinese startup that few people had ever heard of until the past few days, and it has emerged as a real player in the AI arms race. It's called-

Robert de Neufville (guest)

In this story, if the US is getting close to some powerful transformative AI, then maybe China has the incentive to try to stop us violently, right? So, I don't think the risk of nuclear war went away when the Soviet Union disappeared. I mean, there's also still Russia, right? So, I think there's still some risk of nuclear war. The risk of biologically engineered diseases or lab leaks is not gone. It's real. I don't think COVID was a lab leak, but it's not implausible that it was or that there could be a lab leak in the future that did even worse things. So those are the things that I'm more anxious about in terms of global catastrophes.

Paul Samson (host)

Right. I mean, there are a lot of nuclear weapon powers, including North Korea. So, the list is quite long. But in one of the blog posts that you made at the beginning of 2025, Robert, you basically said, "Expect surprises in this brave new world." So, how do you factor in those surprises, or even black swans? Is there something that has shocked you? Could you give an example of something that was truly a black swan in the way you thought about things, or just give us an example of a surprise?

Robert de Neufville (guest)

I mean, I guess one of the big things for me is how American politics and global politics are changing. I didn't think Trump would win. I gave Kamala more than a 50% chance of winning. I think in hindsight that was a mistake. I mean, obviously she didn't win, but given what I knew, I think I probably should have given Trump a higher chance of winning.

Vass Bednar (host)

But did you not want to admit that or were you listening to different things?

Robert de Neufville (guest)

Absolutely, I was biased, probably. I find it difficult to get inside the mindset, to imagine why someone would vote for Trump, if we're being honest, and that makes it difficult for me to conceive that lots of people will. I know it intellectually, but I think it's not so much that I didn't want it to happen, so I thought it wouldn't, because I'm often very pessimistic about things. But I think there probably was some "how could people vote for this guy?" that was affecting my forecast. It's not totally irrational either, right? In the past, a lot of things that he has done would've been disqualifying, right? So when I say surprises, I think that American politics has changed so that the norms that were very firm, that you couldn't really disregard, you can now disregard, and I don't know where that takes us. And it's not just the US either. This is something that's happening, I think, all over the world.

Paul Samson (host)

It's global.

Robert de Neufville (guest)

So, I'm expecting some stuff at the political level to happen that would've been inconceivable for much of the post World War II era. And I don't know exactly what, but some of the things that we would've thought, "Oh, that'll never happen." It's probably going to happen.

Paul Samson (host)

Right. Yeah, actions that are wildly off course or unpredictable get reactions that are wildly off course or unpredictable, right?

Robert de Neufville (guest)

Yeah. And I also just had this general sense that people are in a weird place. I think a lot of it is post-COVID. People really were fed up, and a lot of it is prices, but it was a hard period for a lot of people. And I think there's a lot of anti-incumbent, anti-system energy, and I don't know where all that goes. I mean, we're going to see in your Canadian election what happens, but I think you've got the anti-incumbent vibe there too, right?

Paul Samson (host)

Oh, yeah.

Vass Bednar (host)

Oh, definitely. Definitely similar vibes. I mean, it's fascinating to hear that phrase, inconceivable or incomprehensible, and then also couple it with, or root it in, a conversation about forecasting. If we can come back to your history as a prolific predictor on Pro Trade for just a second-

Robert de Neufville (guest)

Okay.

Vass Bednar (host)

... it strikes me. It's like-

Robert de Neufville (guest)

It was a two or three month period, I should say.

Vass Bednar (host)

Oh yeah, right. No, I know. Still, right? Just that rooting and because you mentioned that platform. Fast forwarding to now, it does strike me that it feels like almost everything is being... This isn't a real word, but gamblified, right? In the US right now, you have the Polymarket platform.

Speaker 2:

While poll after poll showed the presidential election in a dead heat.

Speaker 4:

Look, this is our running average of the polling averages in the battleground states. What, they're all close.

Speaker 2:

In the world of online prediction betting, a different picture emerged. Crypto gambling site Polymarket showing that a majority of users thought Donald Trump would win in the final weeks [inaudible 00:30:52].

Vass Bednar (host)

It's not as popular, or I think maybe not allowed, in Canada. And you have Kalshi, right? Prediction markets are all the rage, and every little outcome in the near term is turning into a bet. We wondered what you saw as some of the dangers of the expansion of that world. Where's that line between it being playful and engaging and it being closer to gambling or something more addictive? I'm asking you a hundred questions: how should we think of it, and what does it say about outcomes in the world?

Robert de Neufville (guest)

Yeah. Well, I'm not that worried about the gamblification of everything. I think that for people with gambling issues, expanding it is potentially pretty serious. A lot of this stuff still isn't really legal in the US. I think under the Trump administration there may be less enforcement of those things, but you could bet on politics at Betfair in England for a long time. We've been sensitive about that in the US. They don't like it when you bet on American politics, because maybe there's some idea that you're going to get rich by influencing it or something.

There were no American political questions in the US Good Judgment Project because they didn't want us... I don't know, it was uncomfortable in some way. I don't think it's so bad. And I think those prediction markets did a better job than a lot of the skilled forecasters, better than I did, for the US election. And I think probably for the right reasons: they were picking up on a signal that I was missing, and sometimes that can be really good. So, it's useful potentially; you can get useful information. Maybe you could even hedge against political risk by using those prediction markets. But I'm also not as optimistic about their potential. One of the main reasons to get involved in a prediction market is that it's fun, and that really makes the markets for things like major elections pretty viable.

But you're not necessarily going to go on a prediction market to answer some boring question. So the boring questions don't get answered very well unless there's a lot of money in them, and that's already being handled by the stock market. At some point a lot of people were creating markets like, "Should I date this person?" Well, that's not going to work, because you don't have enough friends to really give you good information about that. And they're all probably biased; they have their own ideas about what you should be doing. In order for that to work, there'd have to be millions of dollars at stake, and people would be monitoring you and trying to figure out your personality and stuff that you wouldn't want anyway. So there's this very "Oh, prediction markets can solve all our information problems" idea. I don't think that's true. On some level, if you have a specific question you want answered, you're probably better off just paying good forecasters, because a prediction market isn't going to answer it for you. But I don't think they're super harmful, except potentially as more gambling.

Vass Bednar (host)

Fair enough.

Paul Samson (host)

Right. One thing that strikes me about it is that there is this... Let's call it tokenization of everything, digitalization of everything. It's global, it's 24/7, it's an adrenaline rush, right? You get onto a prediction market and there are people from around the world. Isn't there a chance of this being quite transformative? To Vass's point, whether it's gamblification or tokenization or whatever, there's a convergence of platforms and activities that are creating this whole new kind of interaction, and the implications of that seem not insignificant, regardless of what people are betting on or what the outcomes are.

Robert de Neufville (guest)

I think there's certainly a chance. I mean, as a forecaster, I often feel there's a chance of a lot of different outcomes. I'm not super certain of anything really. If you're super certain, a lot of times you're overconfident, but I don't know. I don't know if I believe it's going to be super important. You're right that people are doing more and more of this, but I'm less convinced that we're going to be betting on everything in our lives anytime soon. I think-

Paul Samson (host)

Right, like your dating example. Yeah.

Robert de Neufville (guest)

... maybe a little bit of a flash in the pan, but I could be wrong.

Paul Samson (host)

Yeah.

Vass Bednar (host)

Yeah. It's not personally attractive to me. I don't engage in something like that. But I do observe the environment of advice shifting, from what you were mentioning, friends and small sample sizes, to people seeking advice in a way that's maybe more performative, more open, more implicitly reliant on crowdsourcing, like posting your personal problems in a viral Reddit thread. I don't know what we're trying to accomplish there, and I have no question about it for you. I'm just telling you.

Robert de Neufville (guest)

I don't know [inaudible 00:35:50].

Vass Bednar (host)

What about any connections to that growth of self-directed investing and autonomy sort of taking things into your own hands, not relying on expertise, kind of becoming the expert. Do you see that as an extension of the broader forecaster thinking?

Robert de Neufville (guest)

You see jokes on Twitter, X, or whatever social media platform sometimes that every time there's a new news event, everyone becomes an expert in a new field. And first you're-

Vass Bednar (host)

Oh, true.

Robert de Neufville (guest)

... an expert in epidemiology, and then you're an expert in fire management.

Vass Bednar (host)

Tariff. Yeah.

Robert de Neufville (guest)

Yeah. Tariffs, whatever it is. And it's a little bit funny, because that is sort of what we do as forecasters, right? We go from subject to subject and try to get up to speed on them, but I don't really recommend that everyone try to be an expert in everything. It's just not worth your time. You should rely on other people's expertise. And it's a little bit tricky, right? Because then you have to figure out who the people to rely on are. And I can't give you a really hard and fast rule to distinguish the junk science from the real science if you're just logging on to the internet for a few minutes.

But in general, you have to rely on the expertise of others. I can't be an expert on everything. I'm lucky if I can come up to speed on one subject for a brief period of time. So, I am a little concerned, if we're being honest, about the turn against experts. I think that we should continue to trust experts and institutions. And I understand why people don't; there's a lot of reason to be angry about society and the advice we're given, and people get body horror about vaccines, but no, vaccines are good. They've saved a lot of lives, and I don't want to see polio coming back or hear about people's kids getting measles or whooping cough. So, I wish we would pay more attention to experts. But it is really difficult for me to tell people, "Here's how you know who the real experts are," because it's not that simple. It's a practice of distinguishing real information from mis- or disinformation.

Paul Samson (host)

Yeah. It feels like that toolkit, the ability to pick the right information, the most credible information, is going to be one of the key things going forward. I just saw some stats yesterday about the news sources or information sources that Canadians are using, and it's spread across traditional media, print and TV, to Facebook, to raw Twitter and X feeds, to specialized or local news, to Reddit threads and that kind of thing, right? It's just unbelievable how spread out that is. So it's even more difficult to say, "I'm going to trust this expert." You actually have to pick which lane you're even looking in first for that information.

Robert de Neufville (guest)

Well, I think it's really hard, because I don't think... It's not obvious that the way to make money is to provide people with accurate information. If you're Facebook, you don't really care whether people are paying attention to your website because you're providing useful information or because you're making them angry or getting them exercised. In fact, you probably get more engagement by providing disinformation. So, right now I think that a lot of these sites have a business model which is to provide junk. And you can't really blame people for lapping it up. That's what it's designed to do. It's designed to trick you on some level. And I think with a media ecosystem where you used to have one paper that covered a big geographic region, and they had an ad model that doesn't work today, maybe they had a different incentive, although there have always been yellow journalists and stuff like that. But right now, I don't think anyone has an incentive to help you distinguish real facts from fake facts.

Paul Samson (host)

Right.

Vass Bednar (host)

There was a report... so up here in Canada, in the great white north, the federal government has a shop called Policy Horizons, and that shop uses strategic foresight to help the government prepare for future impacts. And recently, in a report, the most probable near-term hideous thing they could envision happening was that within the next three years people wouldn't be able to tell what was real and what's not in the media ecosystem. I feel like I'm already there. I'm like, "Guys, I'm in the future." There's lots of stuff I have to ask my friends about, like, "Is this real?", and kind of guess and check. Maybe you could talk to us a little bit about how cognitive forecasting does link up to or influence policy making and/or the intelligence community. There's that playful element we talked about in terms of playing around in prediction markets, but there's also, I think, really interesting connective tissue to public governance institutions.

Robert de Neufville (guest)

Yeah. So a couple of things. One, I would say, I think we're in a different media environment, but at the same time, this has always been a problem. We've always been in a red queen race with disinformation. And as we can fake more images, people have to learn that some images are fake, and that's just been going on forever. I would recommend that you be skeptical. It sounds like you're saying you ask your friends. Well, maybe your friends aren't the best source, I don't know your friends. But in general, asking yourself...

Paul Samson (host)

They're good, they're good, I can [inaudible 00:41:24].

Robert de Neufville (guest)

They're good? Asking yourself, is this right? Is it too convenient, like it just sounds good to me? You should probably be doing that with information you're getting. You shouldn't trust any information that comes to you. You should have a little bit of skepticism when it comes in, and that's the way to be a better information consumer. As far as the policy thing, when they discovered that some people like me were relatively good at forecasting, there was a lot of excitement, like, "Oh, we can use this information. They're going to come in, and policymakers are going to do different things because of it." And that didn't really happen, and it's turned out that it's actually pretty difficult to use the forecasting that I do to change policy. So, if I say there's an 8% chance of a coup in a certain country, it's not obvious what you as a policymaker do with that information. Is that even a lot? Maybe you don't have a sense of whether that is a lot, and you maybe don't know exactly how to use it.

So, it's turned out that really what policymakers need is a richer understanding of what the forecasts mean, rather than some point probability. So, it helps a lot to ask a lot of different forecasting questions about one subject. To look at the forecasters' rationale for why something might happen or might not happen. To have counterfactual forecasts or conditional forecasts, I mean, where you say, if something happens, then what will happen, right?

Vass Bednar (host)

Mm-hmm.

Robert de Neufville (guest)

And look at the consequences, because policymakers need to understand how things work as much as they need some blanket "all else being equal, this is what's going to happen." But it's tricky. The example I like to use is the COVID pandemic, right? Let's say you're in government making policy. You want to figure out, should we impose a mask mandate, right? So you might ask a forecaster; let's focus on the mask mandate.

How many deaths will there be in this region if we don't impose a mask mandate? And how many deaths will there be if we do? That's a way to try to get at the consequences of a policy, right? But what's tricky about that is that forecasters will say to you, probably, good forecasters will say-

Vass Bednar (host)

How probably?

Robert de Neufville (guest)

... "Well, there're going to be more deaths if you oppose a mask mandate." And that's not because they think the masks are going to cause the deaths, but because they know there's not going to be a mask mandate unless the pandemic's pretty bad, right? So, mask mandates are associated with more deaths because they're a sign of them, right? That's how you get the mass mandate. So, it's actually really difficult to make forecasts in a way that gets at the consequences of policy questions. You'd have to ask something like, today, given what we know right now, what would happen if we impose a mask mandate? But it's a little bit tricky because you can't really score that question or see how well you've done because you can't evaluate the all else being equal part of it. So, making forecast useful has been the big problem for people who are trying to sell them like the Good Judgment Project, like the Swift Centre, which I work with as well. They have to figure out how to make the numbers useful for policymakers.

Paul Samson (host)

Right. I'm thinking about a lot of the work that we've done; we've actually couched it in scenarios. And the reason why is that we started off with some projections, I wouldn't even call them forecasting, [inaudible 00:45:02] saying, "Okay. Well, who's going to grow in terms of economic growth? What do the demographics look like? What's the debt level? How well are they doing on innovation?" You can do fairly well projecting those out, but then it comes down to a lot of other variables. So, we pitch this in terms of various scenarios based on the data and projections. And that tool seems to work quite well with the policy community, gets them thinking about different alternatives and options and things. Do you find that to be a helpful approach, or how do scenarios relate to what you're working on or forecasting?

Robert de Neufville (guest)

I think it's useful. I think that it gives you a way of breaking it down a little bit. I'm sometimes a little bit leery about scenarios, because people tend to anchor on them. If you say, "Here's a thing that could happen," people may overweight the realistic possibility that that scenario will happen. But I think that that's the thing you need to do. You need to look at different types of circumstances and ask yourself, "Well, what might happen in this type of circumstance?" And I think you need a lot of communication with policymakers. You really want to talk to them about what they need to know and what... In designing forecasts, it's not just about having me predict a probability; it's also about, what are the right questions to ask? In order to know that, you need to work with the policymakers and figure out what their issues are. I think that's what we're discovering.

Paul Samson (host)

I'm going to follow a sports analogy here with a double header and go right away to another question. When you mentioned your own story about the sports site, what attracted you? It makes me think of a lot of buddies I had who were obsessed with sports stats and things like that. And I can see how the betting markets, the prediction markets, have come out of that in a certain way, right? You think of that movie Moneyball, was it called, right? But it mostly seems to be guys. Is this somehow something that just really appeals to guys, or is it somehow an exclusive space? Are there many women involved in this, or is it for some reason just guys who are really dominating it?

Robert de Neufville (guest)

Well, I'm not an expert in gender, really, but there are certainly fewer women doing this geopolitical forecasting that I'm doing. I don't know exactly why that is. I think there is something about the analysis, maybe sports even more so, but something about the analysis that appeals to men. But I also think it has to do with there generally being fewer women interested in geopolitics, for social and cultural reasons that aren't even about appeal but are historical; I wouldn't even be able to do that analysis. But there are a lot of reasons why you don't have as many women in some of these fields, and that carries over to geopolitical forecasting.

Paul Samson (host)

Yeah, it'd be interesting to know on the Polymarkets and things like just what the gender numbers are there. Maybe it's just more men playing around with these things at midnight more.

Vass Bednar (host)

Maybe men are more performative with it. I mean, I'm only half kidding there, but what's a prediction if you don't share it? For anyone listening of any background who's interested, how does one start to think about learning how to engage in the work of forecasting or... I mean, I think you mentioned a way that people could pursue training. Where should people start?

Robert de Neufville (guest)

Yeah. I mean, if you just Google it, there are some guides to some of the techniques. I recommend Good Judgment Inc, which, as I said, is a spinoff of the original Good Judgment Project; it has something called Good Judgment Open, where they post questions you can look at and try to predict. And in some ways it's like any skill: the best thing to do is to practice. You try some of these questions and you think, "Oh, my forecast was bad on that one. What did I do wrong?" It really helps to try to figure out what you did wrong after you do something wrong, and that's a really good way to get good at it. You can also use prediction markets and stuff. Metaculus is a really good resource. They have a bunch of questions that you can take a look at and try to predict, and it's just a practice: talking to people on the site about the forecasts, reading their rationales and asking yourself, do I agree with that? Do I not agree with that? And that's how you get good at it, I think.

Paul Samson (host)

We touched on it a little bit earlier, but the idea that we're in a moment where there are a lot of moving pieces, a lot of those pieces are moving chaotically, some of them are political actors, some of them are states. Is there such a thing as a meta-forecast, in the sense that, when there are that many moving pieces and that much volatility in the system, does the chance go up of something really breaking? Like, it's a moment where you know something's going to go wrong. It's either in Europe or it's in the South China Sea, or there's going to be something that goes wrong because the chances overall are higher, even though you don't know what the "it" is, or you don't need to specify what the "it" is. Is there such a thing as a meta-forecast in that sense?

Robert de Neufville (guest)

Well, I think you mentioned my prediction that you should expect surprises. And that's a little bit throwing my hands up and saying, "I don't know what's going to happen in 2025," but I think that's a meta-forecast. I think that there will be more of those surprising things. Now, in any normal year, there are some surprising things. You have millions and millions, billions of events; I don't even know how many things you would want to categorize as events. Some of them are going to be pretty surprising just by random chance. So there are always going to be... [inaudible 00:51:08] shouldn't be a surprise if something happens in the South China Sea, but I think my meta-forecast is that there may be more of that this year, because of innovations in artificial intelligence, because of the political ferment and so on. So that is a meta-forecast, I think.

Paul Samson (host)

Right. And when a surprise happens in a messy system, the surprise can have a bigger consequence, right? If your car is starting to break down and you've got a problem, you're thinking, "My car's in trouble." And then you have another surprise. That surprise can be a lot more serious at that moment than under normal circumstances, right? It's not a great analogy, but you know what I mean, right?

Robert de Neufville (guest)

I do know what you mean. I think that a lot of the things that I would predict, like political things are a messy system. It's like the weather. Not everything in the human world is like that. It's pretty easy to predict population growth, for example. We can probably do that normally far in advance, but politics is a complex system. You have a lot of agents that are adapting to one another. So, it's a little bit like the weather in which a small thing that happens can totally shift us into a new state. And that makes it difficult. And if more of that small stuff happens, we may be more likely to be perturbed into a totally different world. I think that's probably true.

Paul Samson (host)

And I've just got to say that we work at the Centre for International Governance Innovation; we're very focused on technology governance as being one of those key variables, right? And you're just seeing it across the board, whether it's AI, as we've talked about, or quantum technologies emerging, whether it's disinformation or outer space. There are just so many moving pieces right now, and they all feel like there's some surprise element that's very, very possible there. And way too many loose ends.

Robert de Neufville (guest)

Well, I would be surprised if we're going to Mars anytime soon. There seems to be a lot of hype about lots of outer space stuff, and that's [inaudible 00:53:01] probably slowly.

Paul Samson (host)

[inaudible 00:53:02] the inauguration. Yeah.

Robert de Neufville (guest)

But yeah, I think that's right. I'm sometimes amazed by how little technology has changed. It's been huge in some ways. Medical technology, information technology has been huge, but we're still fundamentally driving the same cars as we were when I was born. They're different cars, but they look pretty similar, and they're functionally pretty similar.

Paul Samson (host)

Retro station wagon.

Robert de Neufville (guest)

It's weird as [inaudible 00:53:25] things don't seem to change very much at all. But then under the hood, there are these huge innovations where cardiovascular treatments are much better than they were when I was a kid.

Vass Bednar (host)

Or there's just more surveillance everywhere. It's also funny just to think about the future. You mentioned outer space. Some of the time, as a working, overworked, on-my-own toddler mom, I think, "Wow, will my son go to outer space?" Is that the closest I'll get to outer space? What would it feel like to watch my son shoot up, like, out through the atmosphere?

Paul Samson (host)

Will he be required to go?

Vass Bednar (host)

Right. And would he need me, anyway? Is he allowed to go? Different questions. Listen, more seriously-

Robert de Neufville (guest)

Do you let the toddler go to outer space? What are the rules in your household? Yeah.

Paul Samson (host)

[inaudible 00:54:10] consent form.

Vass Bednar (host)

Actually, when he was a tiny baby, I would talk to him and sometimes I was like, "You know what? If you go to outer space, I think we should go together." I don't know if you can go alone because I'll just miss you so much.

Robert de Neufville (guest)

That's sweet.

Vass Bednar (host)

Yeah. I hope that didn't deter him. So, I want to ask you a slightly more serious question. You've gestured at the upside of the world right now, curveball unpredictability, and we're still sticking with the idea that we can make educated guesses about future events. What's something that you think our listeners should keep in mind, either about the future going forward or how to think about it?

Robert de Neufville (guest)

Let me just say, I like the phrase educated guesses, because that's basically what good forecasters do. They make good educated guesses. We're not getting visions of the future from Apollo. We're just a little bit better at making educated guesses, and we do it systematically. As far as thinking about the future, I think Steven Pinker is roughly right that the trends are generally positive. I was on a podcast recently and they wanted to ask for family planning advice. Should you bring a kid into this world? Are things going to get worse? And I'm not really qualified to give family planning advice, but I think generally things are getting better.

If you look back, we're so much richer. Our health is better. It always feels bad. Every moment feels bad. I feel bad about American politics, but I would've felt bad about American politics in 1962. So, I think that things are generally getting better. I don't think it's a guarantee. I don't think there's some Hegelian necessity that we all reach some perfectly good, egalitarian Star Trek future. I don't think Hegel was into Star Trek, but I think things could go wrong, right? I'm worried about authoritarianism and democratic backsliding, and I don't know for sure that the world keeps becoming more democratic. I don't know for sure that our lives get better. There could be a catastrophe of our own making that makes them worse. I'm not ruling any of those things out, but my general sense is that the trends have been positive and, with some local disruptions, will probably continue to be positive for as far as I can see.

Vass Bednar (host)

I like that as a note to end on. I'm finding that helpful, even though you mentioned authoritarianism and stuff like that.

Robert de Neufville (guest)

Yeah. It's the way I manage my anxiety.

Paul Samson (host)

That's great. Well, thanks so much for being with us today here, Robert, and it was a pleasure to meet you. And thanks so much for these ideas and comments.

Vass Bednar (host)

Thanks, Robert. Yeah.

Robert de Neufville (guest)

Really interesting conversation. Thank you.

Paul Samson (host)

What do you think, Vass? Do you have any thoughts on what we just heard there? Any predictions of what comes next? Any reflections at all?

Vass Bednar (host)

You and I are chatting the week of the Trump inauguration, and I find myself feeling very emotionally and cognitively destabilized. I feel things are uncertain, things are changing, things are changing fast. I'm not sure how Canada's going to change this year. Everything seems very noisy. So, I find myself attracted to this idea that with the right information and the right data, we can peer around the corner. So yeah, I loved talking to Robert and understanding his discipline and his field, and how he's trying to help humanity and help people think about what's most likely to happen, but also that we have to be ready for what it feels like when things go totally sideways. What about you?

Paul Samson (host)

Yeah. I mean, I'm obsessed with these meta-forecasts in general, so I gravitate towards that stuff, and I think it comes out here a lot. One example that we didn't ask about specifically is, how healthy is democracy? There are problems on the horizon, right, authoritarianism, et cetera. But the question would be, will democracy be gone in half of the current democracies 10 years from now, or that kind of thing? I'm not sure that the superforecasters would say, "It is going to be gone," but they'd probably very much say it's going to be a rocky road and that we've got some work to do, right? To keep that going and keep it healthy in all countries, pretty much, right? You see it in Europe now, you see it in South Korea, you see it in Canada. Are there any democracies that are totally smooth? I don't know, Iceland? There aren't too many, right?

Vass Bednar (host)

For now, for now.

Paul Samson (host)

Yeah, for now.

Vass Bednar (host)

Where do you read these? Where do you go for meta-forecasts? Like, do you have any sneaky places in mind that you sometimes check out, that you're learning from?

Paul Samson (host)

I think it's a [inaudible 00:59:25]. It's like what we said about what source you go to if you want the truth on something. The truth, I mean, that's an exaggeration, but you want the best information you can find. You've probably got to pick a couple of platforms and a couple of sources, and then make your own judgment between them. So, it's like that for meta forecasts too, in my mind. If you've got a whole bunch of information, what's your big takeaway from it? And then you've got to throw in that surprise factor. That's the killer, right? Because it's like, "Okay, well, you're right if all these things happen, but what about that one X factor that, if it happens, blows this totally out?" And the chances of an X factor happening are probably like 10 percent. Like he said, there will be a surprise in 2025. There will be. Is it economic? Is it health? Is it an individual? Is it war related? Is it conflict related? It's one of those things that's going to happen. Which one is it? We don't know.

Vass Bednar (host)

And one thing we didn't get into fully with him, though we were probing at it a little bit, is how governments can harness this discipline, right, for better or for worse.

Paul Samson (host)

Yeah.

Vass Bednar (host)

One wonders how you learn from this, and maybe that's a bridge between scenarios and forecasting. What can you do in the interim to increase, if anything, the likelihood of the outcome that you want to have happen, right? If we know what some of the huge risks are now for our future, how can that be instructive for our direct tomorrow, instead of something that's ambiguous?

Paul Samson (host)

I mean, my final takeaway would be that you need those redundant thoughtful systems, right?

Vass Bednar (host)

Mm-hmm.

Paul Samson (host)

And this is where the military tends to get things better than many other organizational entities, where you've got constant challenging of what you think is going to happen, because you have to do the plan, right, and the war gaming and all that stuff. We don't do enough of that in economic planning, or on the social implications of things. And government tends to just become a little bit insular, that inside-the-beltway, as they call it in DC, or Ottawa, kind of air. And there's not enough outside thinking. We should institutionalize that more.

And I'm not talking about tooting our own horns, but just structurally, you want people out there challenging with wild ideas and what-ifs more than we have. Fun times. 2025 will not be boring, Vass, I predict that.

Vass Bednar (host)

Good. Okay.

Paul Samson (host)

I don't know. Do you have any other predictions? Oh, and one other thing I want to say: another prediction I have is that I think everyone on this call agreed that we may be happy about some things. There'll be some optimistic outcomes and all that, but we will still be unhappy with politics under any scenario. That seems to be a solid prediction: people are grumpy about politics no matter what, no matter when.

Vass Bednar (host)

We're in our grump era. I like that. I mean, look, I prefer even numbers to odd numbers, so I'm already a little nervous about 2025. But you can treat it as even, because if you think of 25 as quarters, or how it's a perfect square, you can get some evenness. So that's my glimmer of optimism in my strange personal numeracy.

Paul Samson (host)

You're waiting for 2028. Now, we know.

Vass Bednar (host)

Numerology, I meant. Oops. Yeah, I'm waiting for 2026.

Paul Samson (host)

You were waiting for 2028? Or 2026, that was it, not 2028?

Vass Bednar (host)

I'm waiting for 2028 big time. Yeah.

Paul Samson (host)

Okay, so we'll come back to that in another episode to reveal what the 28 means.

Vass Bednar (host)

Sounds good. Policy Prompt is produced by me, Vass Bednar, and Paul Samson. Tim Lewis and Mel Wiersma are our technical producers. Background research is contributed by Reanne Cayenne, brand design by Abhilasha Dewan and creative direction from Som Tsoi. The original theme music is by Josh Snethlage, sound mixing by François Goudreault. And special thanks to creative consultant Ken Ogasawara. Please subscribe and rate Policy Prompt wherever you listen to podcasts, and stay tuned for future episodes.