Episode 10

In Our Computational World, What Do We Know? (seeing the many worlds with Michael Richardson)

Do only humans have the agency or perspective to observe, record and react to the world?

Episode Description

Join hosts Vass and Paul for their fascinating conversation with Michael Richardson, associate professor of media and culture at the University of New South Wales in Sydney, Australia, about the ideas in his book Nonhuman Witnessing: War, Data, and Ecology after the End of the World (Duke University Press, 2024). Michael explores the ethical and political implications of witnessing in an age of profound instability, and how our ways of making knowledge and experiencing the world are being mediated in fundamental ways by nonhuman systems — from the embodiment of history, trauma and change in animals and natural landscapes, to the “immediately computational” witnessing by technologies such as surveillance cameras and artificial intelligence.

Mentioned:

  • Potawatomi scholar Kyle Whyte: https://seas.umich.edu/research/faculty/kyle-whyte
  • Mario Blaser and Marisol de la Cadena, editors, A World of Many Worlds (Duke University Press, 2018)
  • Future of Life Institute: “Slaughterbots are here” (https://autonomousweapons.org/)
  • “The infamous COMPAS [Correctional Offender Management Profiling for Alternative Sanctions] sentencing software”: see “Code is law: how COMPAS affects the way the judiciary handles the risk of recidivism,” by Christoph Engel, Lorenz Linhardt and Marcel Schubert, Artificial Intelligence and Law, February 2024, https://doi.org/10.1007/s10506-024-09389-8
  • Australian Research Council Centre of Excellence for Automated Decision-Making and Society: www.admscentre.org.au/

Credits:

Policy Prompt is produced by Vass Bednar and Paul Samson. Our technical producers are Tim Lewis and Melanie DeBonte. Fact-checking and background research provided by Reanne Cayenne. Marketing by Kahlan Thomson. Brand design by Abhilasha Dewan and creative direction by Som Tsoi.

Original music by Joshua Snethlage.

Sound mix and mastering by François Goudreault.

Special thanks to creative consultant Ken Ogasawara.

Be sure to follow us on social media.

Listen to new episodes of Policy Prompt biweekly on major podcast platforms. Questions, comments or suggestions? Reach out to CIGI’s Policy Prompt team at [email protected].


72 Minutes

Published February 10, 2025

Featuring

Michael Richardson

Chapters

1 0:00:00

Welcome to CIGI’s Policy Prompt

2 0:01:33

Introduction to researcher and teacher Michael Richardson, author of Nonhuman Witnessing: War, Data, and Ecology after the End of the World

3 0:02:40

Questions of justice, transformation and what power means for people as driving forces in Michael’s career

4 0:08:37

What concepts underpin and unite Michael’s work?

5 0:12:57

Letting go of the idea of a “singular” world

6 0:22:56

Discussing the notion and some examples of “algorithmic enclosure”

7 0:25:04

Efforts to make the planet a “computable object”

8 0:29:02

Human and nonhuman ways of “making knowledge”

9 0:33:26

Who gets to speak, and who gets to bear witness? Plus, questions about the agency and standing of artificial intelligence

10 0:40:20

What might these new forms of witnessing mean for our collective history, storytelling and memory making?

11 0:47:41

Technological “intelligence” and “autonomy”

12 0:55:24

Are we too trusting of algorithms right now?

13 1:00:05

Challenges presented by the increasing presence of synthetic media and synthetic data

14 1:05:56

Bigger questions about the notion and value of authenticity

15 1:10:17

Vass and Paul debrief


Vass Bednar (host)

You are listening to Policy Prompt from the Centre for International Governance Innovation. I'm Vass Bednar.

Paul Samson (host)

And I'm Paul Samson.

Vass Bednar (host)

Our in-depth interviews find nuance in conversations with leading thinkers who work at the intersection of technology, society, and public policy.

Paul Samson (host)

Listen now wherever you find your podcasts.

Vass Bednar (host)

Hey, Paul. Can I get a witness?

Paul Samson (host)

Well, what about a nonhuman witness? Does that count?

Vass Bednar (host)

It does, but wait, why?

Paul Samson (host)

Well, there's a growing discussion about posthumanism, whether only humans have the agency or perspective to observe, record, and react to the world. Our guest today explores how nonhuman entities like animals, ecosystems, and even artificial intelligence could bear witness to events.

Vass Bednar (host)

Right. Like how my recycling bin knows that I ate candy last night or when a drone with a camera records an event that no human was there to see.

Paul Samson (host)

Well, sort of but do you have that smart bin with the camera or the AI built in?

Vass Bednar (host)

No, but there are some cameras-

Paul Samson (host)

You can get them, you can get them.

Vass Bednar (host)

... My neighbors have security cameras. I don't think they're monitoring my candy, but who knows what they're checking-

Paul Samson (host)

They could.

Vass Bednar (host)

... Out in the alley.

Paul Samson (host)

They're monitoring the bin.

Vass Bednar (host)

Right. Smart garbage. Well, our guest today is bearing witness to that, or at least some of it. Michael Richardson examines how animals and natural landscapes embody history, trauma and change. And in that way they can essentially testify to environmental degradation, climate change, and biodiversity loss.

Paul Samson (host)

And he also discusses how technology like surveillance cameras and AI is increasingly participating in this witnessing process, recording events that shape society and hold significant ethical implications.

Vass Bednar (host)

What I really appreciate about his work is I had never really thought about nonhuman witnessing before, beyond forensic information like DNA and fingerprints and data trails. He's really expanded my view.

Paul Samson (host)

Yeah, he's an Associate Professor of Media at the University of New South Wales and he's clearly a transdisciplinary thinker. Among other things he's directing the Media Futures Hub and the Autonomous Media Lab. Michael Richardson, welcome to Policy Prompt.

Michael Richardson (guest)

Thank you, Paul. It's a pleasure to be with you.

Paul Samson (host)

Thanks for joining us. We're going to start off right away, Michael, with a question to give our listeners a little more background about yourself: could you say what brought you to where you are now in your career, or what kind of passions have driven you, potentially even beyond your work?

Michael Richardson (guest)

I think I've always been driven by questions of justice and of transformation in the social, cultural, and political world that we live in and trying to work towards a more just world. Before I was an academic, I was a political speechwriter, actually wrote speeches for Canada's Jack Layton, the former leader of the NDP. And after that, when moving into academia, I continued to pursue questions about how power works in the world. But in particular, I shifted my focus from questions about electoral politics, which had sort of driven me as a younger person into more fundamental questions I think about what power means for people and how it's experienced.

My PhD work was about cultural representations of torture and particularly American torture post 9/11 and trying to understand how we would think about these things that were often articulated as not torture or particular kinds of experiences that might hover in between and how you would bear witness to things that were so difficult to grasp and sometimes ephemeral or uncertain in how they could be talked about. And that work has then evolved and led me more towards questions around technology, warfare, and climate change in particular.

Vass Bednar (host)

And were those some of the topics that you were contributing speechwriting for, for the late Jack Layton? I'm curious how and when that happened in your lifespan here.

Michael Richardson (guest)

Yeah, it's funny, you fall into unexpected places. I'd worked in electoral politics in Australia a little bit in around 2007. I'd just finished a master's degree at the London School of Economics in International Relations and was casting around for interesting things to do and ended up working on the Kevin Rudd electoral campaign in Australia. I wouldn't say I'm quite as enthusiastic about the Australian Labor Party these days to be honest, but back then it was an exciting, very exciting time. And shortly after that I was moving to Canada for love, for my now wife, and the opportunity to potentially write for Jack came up through her work with the NDP and with other family connections to the NDP. And in the lead up to the federal election in 2008, I was given the opportunity to write a test piece for Jack and then an election was called a few days later and I was thrown in the deep end and learned how to do the whole thing on an electoral campaign.

I was really lucky to be mentored and guided by some pretty smart and experienced people at the NDP. And then after the campaign finished, we did okay that time. It wasn't quite the historic event of 2011 when Jack became the leader of the official opposition and not too far away from government, but we did okay that election and I was invited to be the full-time speechwriter afterwards and I spent a year and a half, two years in that role, which was an incredible learning experience really about Canada, about the craft of writing, about what it meant to have a voice. At that time, climate change and issues around warfare were certainly part of the equation of my work, but often the focus was more on working Canadians and the impact of the global financial crisis on people's workplaces and so on. As you might remember, it was a time when mills were closing. Steel towns were under enormous pressure and Jack and the NDP were really focused on speaking for those constituencies.

Speaker 4:

Because of their reckless economic policies, we've now posted the worst deficit in history. We've got a terrible job record and they have gutted the fiscal capacity of government to address the issues that we're facing. How did they do [inaudible 00:07:33]-

Michael Richardson (guest)

I learned a lot about Canada and about those issues and so on. But I think importantly for me, I learned a lot about writing and about the kind of demands of being a professional writer. Academia is a different game, but I think that experience of writing on command and for lots of different audiences has stood me in really good stead. And while the book we're talking about today is certainly academic and theoretical in nature, it's driven by big concepts and it's not necessarily intended for a huge public audience, I do try and write in other modes as well, and I hope that even when I get into the dense theory world, it's still clear enough to be comprehensible.

Paul Samson (host)

That's a great way to start. And I did meet Jack Layton a couple of times, once at a climate conference, and I've never heard anyone say they didn't like working with Jack Layton. The guy was truly a universal person that everyone liked. Let's jump in a little bit to some of the concepts that have evolved with your trajectory. A lot of the work that you're doing is cutting across issues relating to the environment or climate change, data, and now, increasingly, algorithms and war. The question that we were wondering is: is this a holistic framework that has evolved for you about how you picture the world and how you seek to understand it? Or is there something else behind this frame? What's behind the frame that you're using?

Michael Richardson (guest)

I think the thing that underpins and unites all of my work is an interest in relationality or how things are connected together. And I don't mean connected together in a sort of loose way or in a way that's like analogous to a network or another kind of metaphor that might be common today. I mean that in a really sort of fundamental ontological sense. I think that the world is composed of relations and connections and things that knit us all together and knit the ecological world into the more human world, into the technological world, and so on. And so in all of my work, I'm trying to think relationally and explore things across these kinds of modes of connection, which I think can be rich and dynamic and fluid. And so when I'm looking at things like witnessing and other more specific manifestations of the way we make knowledge or the way we perceive the world or the way we know about them, I'm always resting on and drawing on this idea of relation that all things are connected to one another.

The work I've done most recently about nonhuman witnessing really orients a relational perspective on how we make knowledge towards this specific question of the role of the nonhuman in the making of a particular kind of knowledge. In other work I've looked at how does embodied experience of the world, so how does our affective and emotional life and our affective connections to the world around us, how do those things shape the way we witness and testify to experience in the context of torture and political violence for example. But in the new work, it's sort of more oriented towards the nonhuman and the technological and ecological, but still with that sort of guiding ethos of the relation.

I guess an important thing here for me is that I really see theoretical concepts like nonhuman witnessing as strategic interventions rather than sort of enduring descriptions that I want to have maintain a significance and a kind of static form in the world. I think that theories and frameworks and stuff should help us make a better world together rather than be something that just sits there forever structuring the way we think about things. I think that we have to think about philosophy and theory in this way as guiding where things go. And so that to me is a political question and a cultural one, and it can also become a policy question about, well, how do we actually manifest those things in the world of government or in the world of social policy, etc.

Vass Bednar (host)

We're speaking with Michael Richardson, writer, researcher, and teacher about the ideas in his latest book, Nonhuman Witnessing: War, Data, and Ecology after the End of the World. Michael's exploration of the many different ways in which we witness and make sense of the crises around us is creative and profound. You can buy Nonhuman Witnessing: War, Data, and Ecology after the End of the World at your local bookstore. Speaking of making worlds better, the book, I know we've already said the title, but it has after the End of the World in the title, is this intended to capture the various or different worlds from the perspective of groups or individuals or human and nonhuman witnesses? And maybe what can we learn from indigenous knowledge in this respect?

Michael Richardson (guest)

That's a great question, Vass. Yes, the end of the subtitle, after the End of the World-

Vass Bednar (host)

Yeah, sorry. Yeah.

Michael Richardson (guest)

... Is really about trying to locate the book in two different senses of the end of the world. One is the sense that we are already living in catastrophe induced by human-made anthropogenic climate change. We know this from the predictions of climate scientists and the findings of the IPCC and various other bodies that we are hurtling through the 1.5 degrees Celsius average temperature lift. We're likely going to crash through the two degree limit. And so in a certain sense we are really fundamentally already in the space of the end of the world as we have known it, things will necessarily change and probably quite radically. That's one sense that I wanted to locate the book in is this idea that catastrophic futures aren't in the very far distance, but they're already being experienced. And another dimension of that is that they have already been experienced for hundreds of years by many peoples. You mentioned indigenous knowledges and experience, and this is certainly the case. I mean the Potawatomi scholar, Kyle Whyte has written eloquently about the experience of First Nations people in North America and Turtle Island.

Speaker 5:

[inaudible 00:14:58] and talked a lot about apocalypse and talked a lot about the sovereignty, the vision of tribes that for us, our worlds had ended multiple times through colonialism, but we continue, persist, have goals and visions, are doing all sorts of amazing things in the world to resist, fight back, and show a better way to live.

Michael Richardson (guest)

Describing the way ecological devastation, transformations of biodiversity, transformations of land use, and so on have already been experienced for hundreds of years by many peoples. And those things are only now being experienced by the wider world and particularly by settler colonial peoples. And so that's one sense of the end of the world that's sort of already here and already has been here for a long time for some parts of the world. The other sense is in the idea that we need to let go of the notion that we live in a singular world governed by one way of knowing and in which things all sort of fit within the same sets of systems where we can share knowledge perfectly between one another. And we need to let go of that idea because it's never been true, but also because we really know that to be the case in some pretty fundamental ways right now.

And one of those would be, for instance, the really radical differences between knowledge systems that descend from the European tradition, so heavily influenced by Greek and then Roman thinking and then Judeo-Christian philosophy and so on, that those forms of knowledge differ quite radically from what we might find in China, Japan, in Asia, and then amongst indigenous peoples all over the world, who will often have a really radically different sense of what things are in the world: are rocks inanimate objects, or are rocks a form of life? I mean, this really depends on how you situate yourself in relation to those things and the kind of fundamental ontological, epistemological, and cosmological beliefs that animate the way you approach the world, not just think about it but feel about it and live within it and so on. And so I think it's really important to understand that the world singular is actually part of the problem, at least in my view, because when we say, "There is a world singular," we immediately say there must then be a singular source of knowledge and truth about that world.

And in the tradition that we're from, a lot of those ways of knowing are scientific, they're heavily influenced by the European Enlightenment, and they're shaped by particular conceptions of the human and who counts as human. And now I should say here that I'm not anti-science at all. I obviously think science has a really important role to play in our world and how we know about it. I'm just concerned when we say that the only way we could know things is through scientific knowledge or when we say that's the best way that we could respond to things. To give a kind of concrete example here in Australia, as you might know, we had incredible wildfires about four or five years ago in particular. It's basically rained nonstop since then so we've had fewer bushfires recently, but we had incredible bushfires-

Speaker 6:

I was praying.

Speaker 7:

33 lives lost, 3,000 homes destroyed, 1 billion animals killed, and an estimated 30 million hectares burned.

Speaker 8:

There are no records in the history of fire danger ratings where we've ever seen fire danger ratings this high at this time of the year-

Michael Richardson (guest)

And that experience has raised really big questions about, well, how do we do fire management and how do we know about the world around us. And the Western practices of fire management, it's not that they've been bad necessarily, but they take a very particular approach to thinking about the role of fire in the bush and about the way bush and land and so on would be managed and looked after. Whereas there are longstanding, tens of thousands of years longstanding indigenous methods of managing fire in this place, which depend upon a very different notion of the land and the things that inhabit it. Instead of thinking about it as sort of land and bush in this sort of relatively inanimate way, it's thought of as country as something that has its own animacy and liveliness, that has its own meaning making, its own histories and relations and priorities and so on.

And so when you approach fire management from that perspective, you approach it in a really different way. And so what we probably need is some combination of the two. And so we need to have those things work together, indigenous fire management in a more Western science driven version. But we can't flatten one into the other. We can't say, "Okay, well let's just take two or three principles from indigenous knowledge and just plug it into the other system." Because it doesn't work that way. It's a holistic and relational sense of the world that has its own complex relationship to Western science. That doesn't necessarily dismiss Western science but wants to sit it in a particular kind of relation to the various indigenous knowledges here.

And I think we can see a version of this playing out in loads of different ways and places and so on, and the movements to have rivers and mountains and other entities recognized as alive beings and vital forces in the world are a good example of those types of knowledges and understandings of the world being kind of thrust forward. And so when we look at the world that way, we start to recognize that there is no singular world and we in fact inhabit a world of many worlds, which is this lovely formulation from Mario Blaser and Marisol de la Cadena riffing in turn on the Zapatista notion of horizontality or connections between things at a horizontal level rather than placing everything in a hierarchy.

Speaker 9:

Well, the critique that they're making of what's going on goes much deeper than what I, and many of the frameworks that I'm bringing, can fathom. They, by refusing to engage kind of in an antagonistic mode with what I saw as their oppressors, they were pointing to another world.

Vass Bednar (host)

Policy Prompt is produced by the Centre for International Governance Innovation. CIGI is a non-partisan think tank based in Waterloo, Canada, with an international network of fellows, experts, and contributors. CIGI tackles the governance challenges and opportunities of data and digital technologies, including AI, and their impact on the economy, security, democracy, and ultimately our societies. Learn more at cigionline.org.

Paul Samson (host)

Yeah, so fascinating concept around multiple worlds and I think that we're seeing that in Canada as well where traditional knowledge is being looked at and integrated in parallel to traditional scientific methods. And as you say, "One is not replacing the other, but there's an integration that's going on that's really, really important." And one of the concepts that you've been talking about I think is when you look at some of those, the impacts on traditional peoples, traditional worlds, you've looked at technology and you've looked at how data is being used and particularly autonomous activities let's say, and algorithms. And I thought there was a really powerful notion of algorithmic enclosure, which sounds like it, to go back to the multiple worlds, could threaten some of those worlds or at least the sustainability of those worlds. And so is there a way that you can describe algorithmic enclosure and maybe just what you mean and some examples around that idea?

Michael Richardson (guest)

Yeah, thanks Paul. That's really well put actually. The idea that enclosure threatens the many worlds. By algorithmic enclosure I'm trying to provide a way of thinking about this growing presence of algorithmic systems, whether that is artificial intelligence or cultural recommenders like we get on Netflix, you open up Netflix and it tells you what you should watch, or the application of algorithmic tools in social services and so on. That everywhere we look around all these different facets of life, we are seeing algorithmic architectures and computational systems folding together more and more of the world. And so that there is this sort of sense that increasingly the world or the world of worlds is being pulled together and enclosed within algorithmic systems. Something that's not in the book but that I've been working on lately with a bunch of colleagues is around efforts to make the planet itself, the whole of the planet and its systems into a computable object. And-

Vass Bednar (host)

What does that mean? What does that mean to be a computable object? I have no idea like beyond looking at it on Google Earth.

Michael Richardson (guest)

Google Earth is a good example-

Vass Bednar (host)

Is it? Okay, wow.

Michael Richardson (guest)

... Where you are trying to transform the planet into something that you can access through a computer and that you can generate information about computationally. This is a longstanding effort. If you think about climate change models, for example, they've existed for decades and they've been really important in us obtaining knowledge at that sort of planetary scale about what's happening with earth systems, whether that's ocean currents or atmospheric systems, how those things work together, what's happening with the changes in carbon and temperature and all of those other things in those systems. And so those are an example of an effort to compute the planet. They're an example of a kind of tendency towards algorithmic enclosure potentially as well, which is not to say that climate models are a bad thing, but rather they are an attempt to account for the planet and its systems within a particular mode of knowledge.

On the more sort of concerning side, I think, are algorithmic systems that are designed to provide a more militarized form of surveillance at a planetary scale or that monitor human life at a very large scale. That could range from Google to Facebook to TikTok to various systems of state military and national security surveillance. And so I think for me what's important about algorithmic enclosure is sort of less, it's less that there's one or two bad actors out there doing this thing for nefarious purposes, but more that this is a transformation that we're experiencing about how we live in the world, how the world's governed. And to me, the important question is how we make knowledge about the world. And so as things are captured and enclosed more and more algorithmically, we tend to privilege that type of knowledge more and more. And the move within government or within militaries or wherever, is increasingly towards relying on those types of systems to inform decision making, which has implications for people all over the world and often in ways that they don't recognize as well.

The question then for me becomes, well, what forms of knowledge allow us to really get at that algorithmic enclosure and to engage with it on a level that relates to its own constitution like what makes it up.

Vass Bednar (host)

Then combining those two things and sort of pulling in the nonhuman witnessing element. If we had a comprehensive computational model of the planet and we're tracking those climate change models or climate changes and reality over time, would that be then a form of nonhuman witnessing that we would categorize or talk about? I wonder if you could give me the elevator graph, the tweet, tweet is way too small-

Paul Samson (host)

Elevator pitch.

Vass Bednar (host)

... Yeah, the elevator. I don't know. The escalator ride on nonhuman witnessing.

Michael Richardson (guest)

Sure. I mean you really nailed it there actually. That's precisely the kind of thing that I'm talking about that a planetary scale system that captures and produces a kind of knowledge about this earth that we live on but produces it at this level that is about a kind of significance and a political or ethical obligation to act or to respond or to at least take note, but where the primary forms of knowledge making and the primary forms of registration of what's going on are not people, not us, but rather these technological systems. If you think about a climate monitoring system, you would think about its satellites. There's various kinds of remote sensors, there's ocean temperature sensing, loads of different thermometers, and various systems that are aggregating all of this data often in ways that are not human at all. We can't stick our finger out in the air and go, well, I guess some people can and go, "It's .1 of a degree higher than it was." And we can't do that in this sort of cumulative planetary scale. We rely upon these huge nonhuman systems.

And we play a role. People play a role in designing and managing those systems and so on. But ultimately the forms of knowledge that they're making and the kind of witnessing they're producing of a world in flux, a world in change, a world potentially under threat is necessarily nonhuman in the sense that the human is part of the system, but it's not necessarily the locus of the way knowledge is being made, it's not at the heart of it. And so I wanted to have a way of talking about that kind of knowledge making that brought in the ethical and the political. And because I've been working on issues around witnessing and testimony for a long time now, a question that started to arise for me was like, well, what is happening? What is this kind of knowledge that we're making through these types of systems?

And so we could think about, yeah, we could think about a climate model, but we could also think about a drone used in news journalism to record the bushfires we were talking about before or to capture flooding in Pakistan or to capture some of the violence that's taken place following the Russian invasion of Ukraine. And so through all of those technologies, how we're experiencing the world and what we're seeing is being mediated by nonhuman systems in this really fundamental way. It's not just, oh, we are looking through a telescope, but rather it's being recorded on a digital system where the world is never imprinted materially in the way it is when you take a photo with an old camera and light is imprinted on a negative. It's immediately computational, it's immediately being stored in various digital architectures, being sent through networks, being processed by different kinds of systems. And that is all prior to, but also fundamental to, what happens when we then take it up and deploy it socially or culturally or politically or just to generate knowledge about what's happening in an ocean, for example.

I realize I'm failing utterly to give you the elevator pitch-

Vass Bednar (host)

Our elevator broke. Our elevator broke. Now it's going... No, no, you're not failing me. I sort of wanted to test my grasp and sort of just pull that concept forward just a little bit more in our chat because I think Paul and I were just kind of running with it with you.

Paul Samson (host)

There's a fascinating parallel here to the debates around artificial intelligence and inevitably we're going to go there in this conversation and there's a huge debate about under what circumstances could agency emerge and be attached to certain elements of artificial intelligence. And when you think about the nonhuman witnessing and the kind of examples you just gave about a sacred mountain or a river, is there a way where the system could combine with the current legal system to determine a new kind of agency here? Or are we talking about systems that are going to have to be recognized but still remain separate?

Michael Richardson (guest)

I think I would say that all technological systems, all nonhuman systems, and many things if not most things in the world have some kind of agency to them already, prior to us deciding whether they do or they don't. And so I think that agency in a sort of really basic existential sense in that way is already distributed well beyond the human. But at the same time we humans have a whole bunch of legal architectures and political systems and social systems and stuff that grant or don't grant agency or standing in different kinds of contexts. And a good example of that is the courts, where you have to have a certain kind of standing in order to be viewed as being able to bear witness within a court. Whether an AI can do that is I think a really fundamental question.

I want to kind of bracket that for a moment though to say that this question of standing before the law and this question of who gets to speak and who gets to bear witness has been a really powerful and important one historically. And one of the things that animates my interest in the nonhuman and in nonhuman witnessing is how it then relates back to what the human is or is not. And I think an interesting thing to me is that who gets to count as human has often been intimately related to who is allowed to bear witness.

When I say that, I mean in a Western legal tradition in particular, which is what I know about and have lived with and so on, but within the Western legal tradition, certain people were denied the right to bear witness in court. We can track that back to Greek and Roman courts where slaves were not allowed to bear witness or were only allowed to bear witness through the experience of their tortured bodies. We can track that through the Middle Ages and so on up until the emergence of laws of proof and ideas of eyewitnessing which shifted who and what was allowed to speak in court and what kinds of witnessing were considered valid. And then we can follow it into the United States in the era of slavery where Black people and slaves especially were not granted the right to witness before the law. And this was part of what separated the chattel slave from the broader category of the human which was given over to the slave owners and settlers and so on of North America. And you could follow a similar story in Australia, in South America, and in other places around the world.

The reason I mentioned this is that this category of the human and the capacity to stand and bear witness in a court and to kind of be deemed to have agency is a deeply political one and it's one that we have sort of grappled with and done violence through for a really long time. I think these questions about the agency and standing of artificial intelligence are really important, but they're also ones that we should understand within this sort of historical context. I'm always, to come back to the original question, Paul, I'm always a little hesitant to try to speak about the legal world. I work with a bunch of legal scholars on various projects and I wouldn't want to say anything that gets me in too much hot water with them.

But I would say that I think this question of contestability and of being able to bring something into the space of the public forum, whether it's the court or the legal court or whether it's the court of public opinion or wherever it might be, is a really important issue to reckon with. I think it's something that the nonhuman witness is an important part of. Whether we get to a point where we say, "Well, AI has agency in standing and can answer for itself," I think is in some ways less a question about AI and its capacity and more a question about the kind of frameworks that we want to apply to what it knows and does.

This is in some ways... One way to think about this would be with the famous Turing Test of whether an AI system can fool a person into thinking that they're speaking to another person rather than an AI. We're probably well and truly past that point, at least in some respects and with some systems, but I think whether we count things as having a kind of standing before the law is actually something really fundamental for us to work out. My suspicion is that it's sort of already happening in the sense that AI systems are already being asked to account for themselves in certain ways. I think, with apologies to all of those legal friends, I won't remember the actual cases, but there's been a couple of cases in the U.S. where AI systems have been asked to explain their own choices and determinations-

Vass Bednar (host)

Oh, wow. I didn't know that. I didn't know that.

Michael Richardson (guest)

... I don't know what... I don't recall what the details of those are, so-

Vass Bednar (host)

That's okay.

Michael Richardson (guest)

... I'll leave it to you guys whether you cut this particular bit from our chat or not.

Vass Bednar (host)

No, no, no. I mean Michael, my understanding is that you're experiencing a new form of witnessing because I think either you can't see me or I stay frozen, but I can see myself and I'm here. Maybe sort of extending that in terms of who or what can hold standing. I wondered kind of looking ahead what these new forms of witnessing could mean for our collective history, storytelling, and memory making. What could this look like in the future if we're looking back even on some of those cases and having these machines and machine systems be part of having standing and bearing that witness.

Michael Richardson (guest)

I hope that we get to a point where we have a much richer, more diverse, more plural sense of what counts as knowledge and where it can come from and where ethical and political obligation might reside and where ethical and political capacity and agency and so on can sit. And by that I mean that I hope we get to a point and if we are imagining ourselves as a sort of future historians that we could look back and say, "A really important transformation that we went through in the 21st century, in its early part was to be able to shift the grounds of who gets to testify, who gets to say this matters, or what gets to say that certain things matter and that we end up in a place where we don't place such an enormous privilege on the narrow sense of knowledge that we humans can acquire and where we consider more fora, more places, and more modes of knowledge to be powerful and legitimate and able to matter in their own right without being equated to or reduced to the dominant systems that we already have."

I think when I look at issues around artificial intelligence for example, or around certain aspects of governance, we often see a really strong emphasis on transparency or explainability and other types of concepts. Those can be helpful in certain ways, but they also demand that things be transparent according to a particular way of thinking or a particular place that one stands. And I think we have to take seriously the idea that transparency doesn't always help us. And sometimes what transparency does is it simplifies and reduces the richness of the world into things that can be clear according to particular frameworks or particular ways of understanding. And I think that's risky because we talked at the top about indigenous ways of knowing, for example. We've touched on how various types of computational systems work, but we could also extend this to the kinds of insight and knowledge making that happens through arts practices or through collective events and protest and so on that isn't necessarily reducible to transparency and accountability and so on.

And so I think a really important thing is to understand the importance of opacity and of not knowing and of being okay with not knowing and working out how we can go forward with opacity built into the way we approach governance, the way we approach rights, the way we approach who gets to know and speak and so on. Now that gets really tricky pretty quickly when we're like, "Well, does the AI system have a right to opacity and what rights do we grant to the technical systems that we have made?" Those technical systems might acquire knowledge and make determinations and propose things to us in ways that we don't understand at all. The famous sort of blackboxing of the machine learning system for instance, where we actually don't know why it is producing the outcomes that it's producing. Like we know at a certain level, but as soon as we ask why this specific outcome, it is impossible to know that because that's obscured in the hidden layers of the systems themselves and no one can actually pull that back out.

There is a certain opacity built into these systems, but is that something we want to grant a "right" to in the context of artificial intelligence and other technological systems? I don't know. I don't have a good answer for that and I think it's a collective problem and one that we should be working towards together. To me, that question about opacity is much more straightforward if we're talking about more-than-human ecologies or nonhuman animals or other forms of knowledge that are sidelined by the sort of dominant paradigms that foreground quantification, scientific knowledge, and computability, and so on. There, I think, yeah, we need to understand that there's really important knowledge making that happens through art, through storytelling, through indigenous forms of knowledge, through many, many other dimensions where we will not get it reconciled cleanly and neatly with the dominant ways we have of grasping the world right now. I think we need to get to a point where what counts as witnessing, what counts as this fundamental means of knowledge making, is broader, richer, more diverse, but not flattened out into just being all the same thing.

Paul Samson (host)

You're listening to Policy Prompt, a podcast from the Centre for International Governance Innovation. Policy Prompt goes deep with extensive interviews with prominent international scholars, writers, policymakers, business leaders, and technologists as we examine what it means for our public policies and society as a whole. Our goal at Policy Prompt is to explore effective policy solutions for pressing global challenges. Tune into Policy Prompt wherever you listen to podcasts.

Yeah, I think your legal scholar colleagues are going to be very pleased with the way you responded on agency, which I thought was fascinating. And the idea that someday there may be an AI in the box as a witness having to respond as to how it concluded something or provided information, which sometimes is beyond the human capacity to do, at least in a limited time period. We're heading to a new series of L.A. Law where they have AI in the witness box or something. But where I wanted to go next was on a little bit of another cut on the same general theme, and that's to stress the autonomy element of a lot of these technologies. There's the intelligence part about just how complex and how intelligent is this thing, but the autonomy thing in many ways is what's really causing challenges for systems to say, "Can we go fully autonomous?" And of course the answer is no in most systems still, including the military.

But you've talked about some examples that are quite spooky and documented in horrific ways where a single technology, whether it's a Reaper or something else, has such a powerful system that the human kind of counts on it. And yeah, it's still the human in control, but not really, right? And so that autonomy sense seems to potentially be more significant than the AI aspect. Do you want to say anything about the autonomy element of these things?

Michael Richardson (guest)

This distinction between intelligence and autonomy is a really important one to make. I think the tendency in the popular media, and honestly I think in a lot of the academic discussion as well, is to focus on the degree to which technical systems are intelligent or not, whereas the degree to which they're autonomous or not is maybe one that is more immediate and one that has more significant ramifications in the world around us. We know, for example, that there's been huge movement in the military space towards various types of autonomous systems.

Speaker 10:

Launch, launch, launch.

Speaker 11:

This is the new Valkyrie drone. It's a prototype of a system that is said to take out targets over a distance of up to 3,000 miles and run on artificial intelligence.

Michael Richardson (guest)

But that has been slowed to some degree by various hesitations on the part of militaries. Interestingly enough, it's often militaries and the strategists within them that are most concerned about movements towards lethal autonomous weapons systems, for example, because they're the people who are ultimately making decisions about deployment. They're people who are trained to make life or death decisions. They might not always make the right ones, but they are things that those people have thought about really extensively and often for a really long time. And so even as someone who tends to be pretty anti-war and very skeptical about the claims of militaries, my experience of being around military personnel and of hearing them speak a lot about these types of things is that they often have really nuanced and deeply thought through ideas about the risks of autonomy.

We've seen though, despite this, this sort of movement towards increasingly autonomous weapon systems and targeting systems in particular. A lot of the hype that we encounter is about the threat of killer robots, for example, that there'll be robots marching around with guns replacing flesh and blood soldiers or that there'll be robot micro drones flying around. Listeners might be familiar with the Slaughterbots viral videos, for example.

Speaker 12:

Local PD are saying a suspicious device may have been discovered. Could this be Slaughterbots?

Speaker 13:

No parts on the ground I can see [inaudible 00:51:24]-

Michael Richardson (guest)

That's from the Future of Life Institute and others that have sort of shown what this might look like. And those are really powerful images and they are things that we should be concerned about. But in certain ways, autonomy is already here to varying degrees in more subtle and more embedded and harder to see contexts. A good example is an algorithmic targeting system which might be attached to a drone, for example, and it scans the environment watching people or a battle space or surveilling a village or whatever it might be. And it has an AI system on it that is not able to make a decision on its own necessarily to actually execute a lethal strike or to undertake a particular action, but which is sifting through the imagery and other data that it's capturing and picking out the things that are of significance and recommending those for courses of action to operators on the ground.

A system like this, called Agile Condor, has been developed by SRC, Inc., a military systems manufacturer in the United States, and it proposes to do this type of thing. It's a pod that sits on a drone and makes recommendations, and we know that people on the ground are quite likely to follow those recommendations made by computational systems. And we know that's the case because we see it in so many different places. If you teach students today, you see it in the responses that come back sometimes in student essays, where students have asked an AI about the answer for something and they tend to follow it. We see it in government too, where automated systems around welfare applications or other social support services make recommendations about denying, limiting, or providing access to various kinds of benefits, and those will typically be followed. We've seen it with the infamous COMPAS sentencing software, which is used in a number of different jurisdictions in the United States and other places to help magistrates and justices make sentencing decisions for the people in their courts.

And we know that while the ultimate decision resides with the magistrate and the justice, they will overwhelmingly follow what's recommended by the AI system. And so that system, operating with a kind of autonomy around what it's doing and producing, makes these decisions that humans then follow. There've been a bunch of reports recently about the use of AI target acquisition systems in Gaza, for example. And it certainly seems from the reporting that's followed that the Israeli Defense Force, the IDF, is largely following the recommendations made by those systems fairly uncritically. And it's actually using those systems to grow its target list and respond accordingly with bombing and other lethal actions. And so when we sort of think about the role of autonomous systems within militaries and within governance, we need to recognize that we need a better way of understanding the degrees of autonomy that they hold and how to deal with the kinds of recommendations and so on that they make.

Vass Bednar (host)

Michael, what do you want policy people and politicians to take away from your work? If you were kind of, I don't know, going back in time or forward in time and contributing some political speech writing related to this body of work, what would you want to make sure comes forward? Are we just too trusting of algorithms right now?

Michael Richardson (guest)

I think we're much too trusting about algorithmic systems and we're not trusting enough about other forms of knowledge. I think there is skepticism for sure in policy circles and in government about the role of AI, and I'm not so sure in Canada, but here in Australia there are endless government inquiries at every level about how to deal with AI, what its potential negative effects are, what the potential benefits are, how to do it responsibly, what safeguards to put in place, you name it. I have colleagues who are producing these sorts of submissions all the time, and quite rightly, no one quite knows what to do yet. The field is changing all the time. And so there's a sort of desperate need to grapple with these systems. At the same time, I think that while that sort of uncertainty is still very strong, the growth in the role of computational systems, a version of what Paul highlighted from the book as algorithmic enclosure has already taken hold.

My sense, and I should foreground here that I'm not a policy analyst and it's not my background as an academic or professionally prior to academia, but my sense is that quantification and computational knowledge and statistical knowledge and so on is a really strong driver of how policy gets made. And so I think if I was in that position of the future historian that we've played with before, I'd be hoping to look back and see that a shift was made in the evidence grounds, the evidentiary grounds of how policy is produced, where the weight given to statistical information and quantification and computational systems was diminished, not discarded. I'm not anti those things necessarily. I just think that over privileging particular forms of knowledge is dangerous. And what I would like to see is a much greater emphasis on lived experience and on nonhuman forms of experience.

And to me, what I would think of as nonhuman witnessing, so forms of knowledge that get produced through events and encounters and so on, where the main production of knowledge is happening through some sort of nonhuman entity, whether that is an algorithmic system or an environment like sand that turns into silica glass when a nuclear explosion takes place, or whether it's through forms of knowledge and ways of thinking about the world that have not traditionally been privileged. And so I think if we did see that take place, the way policy happens would be different. And I think the way it's targeted would be different.

How that would play out, like what policy would look like in that kind of future, is a hard thing for me to say with any kind of confidence. But for me, the question is about how we make knowledge. What are the things that we privilege in the making of knowledge? What are the processes we put at the foreground? And I think at the moment we place a really heavy weight on human knowledge and in particular expert knowledge. We place a really big weight on computational knowledge and statistical knowledge and not enough on lived experience of people, but also not enough on nonhuman forms of knowledge that are not readily reducible to statistics.

Paul Samson (host)

Now the leap of generative AI over the last couple of years has caught pretty much everyone off guard, even the computer scientists who knew it was coming to some degree. It's been good enough to be super interesting in its application and useful in some cases, but very prone to manipulation. And now with the multimodal leap forward, particularly in video and voice, there's an ability to generate synthetic information that is entirely false, whether direct deepfakes or other manipulations. How do those things enter into this system? It certainly muddies the waters and potentially creates challenges for legitimate voices, let's say, that would be marginalized perhaps by this onslaught of additional synthetic information.

Michael Richardson (guest)

Yeah, the emergence of generative AI and, sort of alongside that and part of it, the increasing role of synthetic media and synthetic data, both as products of generative AI, but also as part of the systems that make generative AI and make AI systems more generally. This sort of shift to generative and synthetic media has huge implications for knowledge and for the grounds of knowledge. When I wrote the book, this stuff wasn't even on the horizon. I was writing... Most of the book was written in 2019, 2020, 2021. And that's the era where machine learning has really taken hold and machine learning systems have become quite common and people from outside and inside computer science are trying to grapple with all of that. And then all of a sudden these generative AI systems take this huge leap. In the book I write a little bit about early deepfake technologies and raise the question of how algorithmic systems might become false witnesses, so how they might become witness to things that did not take place, and do so in ways that are really dangerous.

Speaker 14:

Well, AI has the potential to improve law enforcement efficacy. It can also cause harmful errors with high-stakes consequences for life and liberty. For instance, erroneous AI face recognition hits and gunshot alerts have led to multiple alleged wrongful arrests and imprisonment. A recent study found that a state-of-the-art AI photo forensic program performed worse than regular people with no special training.

Michael Richardson (guest)

And there's lots of really good work happening in this space, including from the organization WITNESS, which is based in New York, where Sam Gregory has been a really important thinker on the role of deepfakes in human rights investigation and human rights documentation. They've done some really great work to try and get at some of the range of problems that get produced.

Speaker 15:

The capabilities and uses of deepfakes have often been overhyped, but with recent shifts, the moment to address them more comprehensively has come. First, I'll cover technological advances. Commercialization and accessibility characterized changes in the last year. Deepfake technologies are now available in widely used consumer tools, not just niche apps. Furthermore, they're easy to use and, particularly for audio and image, can be instructed with plain language and require no coding skills. Realistic image generation has improved dramatically in its quality and customization in a year. With widely available audio cloning tools, one minute of audio is enough to fake a voice. While video remains harder to do in context-

Michael Richardson (guest)

So for example, in a human rights context, one of the problems with deepfakes is not just that you might have fake information about human rights abuses, like claiming human rights abuses are taking place that have not taken place or claiming that human rights abuses didn't happen when they did, and all of those types of things. That's like one order of problem. But the second order of problem, and a more fundamental one in some ways, is that the existence of those deepfakes and other types of generative systems undermines the ground of all claims, because when you can't trust the media in front of you, or when the media in front of you can be accused of fakery or falseness, this sort of fundamental epistemic collapse starts to take hold. I'm super interested in this issue and am in fact in the early days of some large research projects about it.

One is I'm part of a research centre in Australia, which is spread across nine universities. It's got a mouthful of a name, which is the Australian Research Council Centre of Excellence for Automated Decision-Making and Society. We just call it ADMS for short. But in the ADMS Centre, we're just kicking off a really big project about what we're calling Generative Authenticity. How is authenticity produced and challenged by generative AI systems? Part of the equation here is what we could think of as authentication or verification. How do we know the provenance of a particular piece of media? And this is something that the tech industry and media organizations have been trying to figure out in recent times, although often the answer is "use this Adobe product, which will trace and maintain authenticity."

But there are bigger questions about this because authenticity is such an important cultural and social notion. It's an important political value. It matters about how we engage with the things made by content creators on YouTube, but it also matters with the documentation made by human rights defenders in South Sudan, for example, who might be trying to document violence and genocide and so on. And so this question of authenticity is really, really fundamental. The other angle for me that's really interesting is the role of this more directly in warfare. How will generative AI systems start to reshape autonomous systems? How will they start to reshape military knowledge? And that sort of happens along two dimensions. One is what might it mean for the inclusion of generative AI systems into military decision making? And the other dimension is what does it mean to use synthetic data or synthetic media to train military systems that might do target acquisition or might have some role in determining life or death?

On that second point, this points to a really fundamental challenge for militaries in the application of AI and autonomous systems in general, which is how do you get enough data to train an AI system to identify events that are by their very nature rare and unusual? For example, in a military context, the way in which, let's say, a forward operating base might be attacked in a live military context is incredibly variable. And in fact, we celebrate originality and innovation and stuff in military strategy and tactics. How do you use a system to identify when an attack is about to take place? Well, one way, the traditional way, would be to say you get all the historical data. Okay, how much historical data do you have? Well, if you are projecting, say, a war between the United States and China, which is the nightmare that haunts many military planners right now, the answer is there ain't any, right?

There has not been a large-scale conflict between two global superpowers ever in the modern technological age where you have autonomous systems, where you have high-speed drones, where you have AI making decisions, where you have autonomous submarines, all of those things. There is no data. To train something, you have to invent data. You have to produce synthetic data, and you need to use that to train your systems. But then you're training systems to be involved in some of the most fundamental decisions and actions that humans make, which is about the life and death of other humans, and those systems are built on synthetic data. And I'm sketching out a pretty cartoonish picture of it right now, of course, and it's-

Vass Bednar (host)

No, I get it.

Michael Richardson (guest)

... Like it's more complicated than that. But I think these point to super fundamental questions about the knowledge that underpins choices about military, choices about human rights, issues around politics, and so on.

Vass Bednar (host)

Michael, your work is so anticipatory and forward-looking. You kind of anticipated what I wanted to ask you to round out, so I'll just say the question. Yeah, we were curious about what else you're working on now and the other directions that your research is taking you. Thank you so much. We'll look forward to reading more of it and being challenged and kind of having our minds stretched.

Paul Samson (host)

Thanks a lot, Michael.

Michael Richardson (guest)

Thank you both. I've really enjoyed the conversation and it's kind of amazing how well you both hit on some of the big questions for me and some of the key points of the book. I really appreciate it.

Vass Bednar (host)

Oh, wow. I loved his Canadian connection. When I was reading Michael's work, it actually had me longing for a graduate seminar that I could discuss it in. I had never considered what it means for a technology or for nature to witness, even though we often have before and after pictures of certain ecologies that might come from a satellite or someone's cell phone. I'm thinking of the recent Jasper Fire, for instance. How we understand change in history and what it means when we solicit different perspectives, Michael totally expanded my thinking.

Paul Samson (host)

Yeah, totally, Vass. The Canadian connections are interesting because they keep popping up everywhere, and we managed to still keep a lot of sorry, sorry's out of the podcast, so I think we're doing okay on that measure as well. This opened my eyes a lot as well. Frankly, at the beginning, I didn't really know what to make of the book. It seemed like a little bit impenetrable in the first kind of instance to me. But then as I got through it more, I started to get it. And then he did a great job of really clearly articulating kind of what this is. And as you say, what struck me was this accelerating frequency of the separation between the human and the nonhuman that we're seeing more and more, both in terms of control and the outcomes and the consequences of that are huge for facts, for events, even for how history kind of gets recorded and recanted or recounted, I guess both. And I liked the way Michael explained things, so that was super interesting for me.

Vass Bednar (host)

Policy Prompt is produced by me, Vass Bednar, and Paul Samson. Tim Lewis and Mel Wiersma are our technical producers. Background research is contributed by Reanne Cayenne. Brand design by Abhilasha Dewan, and creative direction from Som Tsoi. The original theme music is by Josh Snethlage, sound mixing by François Goudreault. And special thanks to creative consultant, Ken Ogasawara. Please subscribe and rate Policy Prompt wherever you listen to podcasts and stay tuned for future episodes.