In the coda of her new book, Atlas of AI, Kate Crawford describes a recent promotional video that Jeff Bezos made for his space company, Blue Origin. Part TED talk, part An Inconvenient Truth, the video features a contemplative and confident CEO on a stage, talking through his vision for space colonization.
Whereas Elon Musk’s vision of Martian travel feels almost juvenile in its crass ambition and superhero aesthetic (not to mention, who would want to live on Mars?), Bezos builds a much broader and, in some ways, more sophisticated case. His vision is grounded in the science fiction speculation of Gerard K. O’Neill’s book The High Frontier: Human Colonies in Space. O’Neill imagined, and Bezos now imagines, a future in which humans live in massive self-sustaining communities in space, each community spinning to create its own internal gravity. Both picture these space pods as bucolic places, as Edens of various forms.
At one point in the video, Bezos describes his cold rationale for why such a future is needed. Earth’s resources are finite, he argues, and if we want to keep growing, we need to move beyond the planet. On Earth, “we have to stop growing. Which I think is a very bad future.”
But for whom is it a bad future? One might think it would be easier, and indeed preferable, to solve the challenges of resource limitation on Earth than to develop floating bio-habitats in space and mine asteroids. The answer becomes clear when you look at the world through Bezos’s interests, the company he has built and the technology on which it is based: artificial intelligence (AI).
And here is where Kate Crawford’s work is so vitally important. Crawford, a leading scholar of the social and political implications of AI, says that for decades AI has been framed in what she calls “an abstract algorithmic nowhere.” In my recent interview with her, she described how we often talk about AI systems as if they were ephemeral, even magical. AI exists in the “cloud” and is defined by opaque, often unknowable algorithmic systems. Its power is to mimic the mind, in ways that even those developing it struggle to make concrete. This imagined nature of AI allows it to be anything to anyone. It gives power to those who own it and marginalizes those who are subjected to it. It has enabled the emergence of what Crawford and Alexander Campolo have called “enchanted determinism”: skeptics and utopians alike attribute deterministic certainty and power to AI.
In Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, Crawford provides an alternative, material frame through which to see AI. She argues that, at its core, AI is rooted in extraction. It has to exploit the planet, people and the data they produce in order to function. It is a material technology.
AI extracts natural resources. From the data centres behind cloud storage to the lithium that goes into our devices to the enormous amounts of energy required to train machine-learning models, AI must be seen as an industry that devours natural resources on a vast scale.
AI is also built on labour. While the mythologizing of AI positions it as independent of human agency, the reality is that humans are central at every stage of the AI industrial process. Humans code data, moderate content and even impersonate automation, a practice Jathan Sadowski calls “Potemkin AI.” But AI intersects with labour in more traditional ways too, by supercharging the Fordist drive for industrial efficiency. Amazon workers, surveilled in every conceivable way, are stretched to the limits of human production by AI. AI developed on the backs of human labour is thus used to exploit workers further.
And, finally, AI needs massive amounts of data to function. That data is collected without meaningful consent from global populations that are largely oblivious to how their images, movements and communications are being used to train AI systems.
Much of the debate about the harms of AI revolves around the ethnic, racial, gender and other human biases built into these huge data sets. In recent years, thanks to the work of a growing field of critical AI scholars (for example, studies by Safiya Noble, Ruha Benjamin, Meredith Broussard, Virginia Eubanks, Joy Buolamwini, Charlton D. McIlwain, Arlan Hamilton, Lisa Nakamura, Wendy Hui Kyong Chun and Simone Browne), we now know that there are deep structural flaws in many of our AI systems, flaws that have had breathtaking social consequences.
For example, a recruiting tool developed by Amazon was found to systematically downgrade female applicants. Risk-assessment programs used in parole and sentencing decisions disproportionately flag Black prisoners as high risk. And a wide range of facial recognition systems used by police forces have led to false arrests and the targeting of people of colour.
These problems have led to an important policy debate about the data on which AI is constructed, about whether we can debias AI and whether, in fact, ethical AI can be built. But is this debate the right one? Are the social, political and economic problems of AI adequately captured by this frame? Or do we, as Crawford suggests, need to rethink what AI is in order to determine how to govern it, or even to rethink our developing it at all?
Because AI is material, and because it extracts core resources from the land and from society, AI is deeply intertwined with power. It is fundamentally designed to serve the needs of the people who own it. People such as Bezos. And if AI underpins everything about Amazon, from its global supply chain to its home surveillance products to its ruthless monitoring of its employees, and if AI demands finite resources, then the urgency to escape the confines of Earth becomes clearer.
But this reframing of AI from the intangible to the material also brings clarity to the often opaque conversation about how to govern it. It pushes the discourse away from the unknowable world of algorithms and into the tangible world of resources, labour and power: things with which we are familiar and which we know how to govern.
The result is that Crawford has provided a way to guide us through the governance conversation. Instead of seeking new methods for governing largely unknowable systems, a perhaps impossible task that ultimately serves the status quo, we could start by governing the extractive components of AI.
Sometimes a book or an idea reshapes how we think about a topic. Atlas of AI is one of those books. In shifting our focus to these material elements, Crawford has provided a map not only for understanding AI but also for governing it. In fact, if we follow Crawford’s counsel, we may not need to govern “AI” as such at all, only the extraction of resources, labour and data on which it depends.