The release of ChatGPT last year marked an inflection point in humanity’s relationship with artificial intelligence (AI). In less than 12 months, it has spurred an AI arms race among tech firms and rendered the technology a mainstream phenomenon. The emergent role of AI in war, its impact on business and politics, and its permeation of legal systems, education, art and more are suddenly subjects of intense debate. Meanwhile, mountains of capital are being marshalled toward its evolution and application. According to multiple recent estimates, the rush to deploy AI into nearly every sector of society could expand its global market size from US$200 billion currently to US$1.8 trillion by 2030.
Yet relatively little attention is being paid to the inputs needed to sustain this big bang in AI. While it may resemble an alien form of intelligence capable of creating new ideas and perhaps someday even novel forms of culture, the technology’s growth and development are not siloed off within the nebulous ether of the digital realm. For all its wondrous capabilities, AI is still bound by the limitations of the real world. The data that fuels it must be painstakingly labelled by humans. The infrastructure it relies on requires energy and water to function and skilled labour to build and operate. The chips that power its neural networks are in increasingly short supply.
If we don’t recognize and account for these factors now, in AI’s adolescence, they will give rise to new forms of risk and volatility, not only for governments but also for business models premised on limitless growth in computational power.
Keeping the Lights On
The industrial-sized data centres used to run and train algorithms around the clock require staggering amounts of water to keep from overheating. According to its 2023 environmental report, Google alone used 5.6 billion gallons, or more than 20 billion litres, of fresh water in 2022 — 20 percent more than in 2021, an increase researchers attribute to the search engine giant’s heightened focus on AI development. Based on the United Nations’ definition of water rights, that’s the same amount needed to provide potable water for one year to 2.7 million people in the developing world. Microsoft’s water use likewise spiked by 34 percent between 2021 and 2022. The company used 6.4 billion litres of water last year — more than 2,500 Olympic-sized swimming pools’ worth.
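These comparisons are easy to sanity-check. The short Python sketch below simply reruns the arithmetic on the figures cited above, assuming the standard conversion of roughly 3.785 litres per US gallon and a nominal 2.5-million-litre Olympic pool; the final line derives the per-person allocation implied by the 2.7-million-people comparison rather than quoting any official UN threshold.

```python
# Rough arithmetic check on the water figures cited above. Inputs are the
# article's own numbers; the conversion factor and pool volume are the
# usual nominal values.
LITRES_PER_US_GALLON = 3.785
OLYMPIC_POOL_LITRES = 2_500_000  # nominal 50 m x 25 m x 2 m pool

google_2022_litres = 5.6e9 * LITRES_PER_US_GALLON
print(f"Google, 2022: {google_2022_litres / 1e9:.1f} billion litres")  # ~21.2

microsoft_2022_litres = 6.4e9
print(f"Microsoft, 2022: {microsoft_2022_litres / OLYMPIC_POOL_LITRES:,.0f} Olympic pools")  # ~2,560

# Per-person allocation implied by the "2.7 million people for a year" comparison
people, days = 2.7e6, 365
print(f"Implied allocation: {google_2022_litres / (people * days):.0f} litres per person per day")  # ~22
```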
Calculations by a team of researchers from the University of California, Riverside, suggest that the infrastructure supporting ChatGPT consumes at least 500 millilitres of water every time the bot responds to more than a handful of user prompts. While creating GPT-4, the latest large language model underpinning ChatGPT, the program’s San Francisco-based owner, OpenAI, drew water from two rivers near the data centre it uses in West Des Moines, Iowa. According to the Associated Press, local communities were not aware this was happening. “It’s a recipe for disaster,” a member of the grassroots organization Iowa Citizens for Community Improvement later told tech news site Futurism. “ChatGPT is not a necessity for human life, and yet we are literally taking water to feed a computer.”
Elsewhere, Meta’s new US$1 billion-plus data centre and corporate campus being built on 400 acres on the outskirts of Mesa, Arizona, is touted by the company as a cutting-edge facility that will diligently recycle waste water and be landscaped to capture rainwater. Even so, it will consume 6.4 million litres of water per day once in operation, and it is being built in a state that has become so dry after years of drought that its government allocated US$1 billion of its 2022 budget to finding replacement sources of water. Officials tasked with mitigating a water crisis in one of America’s fastest-growing states are now proposing an extreme solution: build a desalination plant in a Mexican coastal town along the Sea of Cortez, some 320 kilometres away, and pipe water north to meet Arizona’s needs. Doing so would require clear-cutting a path through the heart of a UNESCO biosphere reserve.
These same data centres are also voracious consumers of energy. The world’s 10 leading data centre companies operate more than 1,250 such facilities, based on research by market analysis firm Dgtl Infra. Topping that list are the commercial cloud-computing service divisions of Amazon, Microsoft and Google. In fourth place sits Meta, which needs its own data centres simply to process the massive amounts of information generated by its various platforms. According to the International Energy Agency (IEA), the combined electricity demand last year of the roughly 8,000 data centres globally, at least a third of them in the United States, was between 240 and 340 terawatt-hours. A single country consuming the higher figure would rank eleventh in the world, ahead of Saudi Arabia and behind France; one consuming the lower figure would rank nineteenth, slightly ahead of Australia.
To their credit, the big tech firms have all boosted their water and energy efficiency in recent years by pivoting to green power sources and investing in water restoration projects. However, Microsoft’s 2022 sustainability report highlights a key dynamic: the company’s water usage rose in lockstep with its year-on-year business growth over the same period. Incremental steps toward sustainability take time, and they risk being outstripped by the swelling tidal wave of demand for AI-powered products.
IBM’s latest Global AI Adoption Index indicates more than three-quarters of businesses worldwide have either already integrated AI into their systems or are exploring its potential. A related proliferation of new networked devices will also lead to an explosion in demand for data-processing capacity. A study published in 2021 by an offshoot of the journal Nature says the number of devices online is expected to increase from 18.4 billion in 2018 to more than 29 billion by the end of the decade. Consulting firm McKinsey forecasts that data centre demand in the United States alone will grow 10 percent per year through to 2030.
Such exponential increases in AI use will motivate developers and tech firms to create bigger and better machine-learning products to capture greater market share. Implicit in this quest is the need to acquire ever greater amounts of energy, land and water. To be clear, the growth of the AI industry will not suddenly render it a leading cause of carbon emissions or water consumption. Even if it grows by an order of magnitude, its environmental impact will be vastly smaller than that of other sectors, such as manufacturing, construction, transportation and agriculture.
But in a hotter, more water-stressed world, that growth will spark localized conflict with communities adjacent to sites selected by companies for new data centres. Officials will court backlash for offering tax incentives and water-drawing rights to lucrative tech firms whose facilities create few permanent jobs. Such facilities will also place a significant burden on local electricity grids, driving up energy prices for nearby consumers. Projects in progress also risk becoming ensnared in campaigns and lawsuits by activist groups.
This may become especially true in the United States, which is draining its aquifers at an alarming rate nationwide. However, it’s a dynamic that will likely appear elsewhere too. President Emmanuel Macron has advocated turning France into a continental tech hub by cultivating a “start-up nation.” And yet Paris is already contending with violent demonstrations at reservoir sites in the west of the country that have been ring-fenced for agricultural irrigation.
One possible outcome is that new data centres will increasingly be located in undemocratic jurisdictions, where authoritarian regimes hungry for tax revenue can streamline the building process by silencing local opposition through coercion or force. Should this happen, placing users’ data beyond the reach of democratic oversight, it would compound the emerging race to the bottom on AI safety.
The Human Factor
Another element deeply intertwined with AI’s development is labour. Underpinning the surface-level brilliance of various generative programs are countless hours spent by an invisible legion of human workers. These are the people who label and codify the data inputs that make machine-learning models work as intended, by ensuring data is properly formatted, annotated and corrected for bias. This function is critical — particularly when developers are using a raw data set purchased from data vendors. It also involves tedious, time-intensive and often harrowing work.
Rather than employ their own staff for data labelling, developers mostly crowdsource freelance contractors through platforms like Amazon Mechanical Turk. Another method is to outsource the work to third-party agencies that offer employment in “digital sweatshops” in low-wage regions abroad, such as the Philippines and Venezuela.
An investigation published by TIME magazine this past January details how, beginning in November 2021, OpenAI used a San Francisco-based outsourcing firm to pay workers in Kenya less than US$2 per hour to read through reams of graphic accounts of murder, suicide, sexual abuse and other horrific activities. The purpose was to label these texts for use in training ChatGPT’s internal content moderation algorithm. A contractor in India doing similar labelling of harmful video content for Facebook’s AI-powered moderator function recently told The Guardian that each day of work, “I log into a torture chamber.” In May, 150 workers in Kenya whose data-labelling work supported the development of AI used by Facebook, TikTok and ChatGPT, and who say the work left them with post-traumatic stress disorder, formed a union to push for better compensation and mental health supports.
By contrast, American contract workers doing data labelling are paid relatively well. At US$15 an hour, they earn above the minimum wage in all but California, Massachusetts and Washington state. However, as AI becomes ubiquitous, developers will need to enlist more data labellers, whose compensation could rise as demand increases, meaning unprofitable start-ups will burn through investors’ cash even faster. Absent strong regulations, unsafe products could be rushed to market out of desperation to cover losses.
Some may argue that advancements in the technology are bound to reach a point where these tasks can be automated. But that could put the very integrity of machine-learning models at risk. Researchers have identified a dangerous feedback loop in which model performance degrades and AI systems become unstable after they cannibalize other machine-generated outputs. And fixing the issue seems a remote prospect; the world’s leading AI scientists admit that even they cannot fully comprehend the inner workings of their creations. The upshot is that human labour will remain an indispensable component of AI’s development for a very long time to come.
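To make that mechanism concrete, here is a deliberately simplified sketch, not a reproduction of any published experiment: a toy “model” consisting of just a mean and a standard deviation is refitted in each generation to samples drawn from the previous generation’s fit. Because sampling error compounds from one generation to the next, the fitted parameters tend to wander away from the original data distribution, a crude analogue of systems degrading when trained on machine-generated inputs.

```python
# Toy illustration of a generative feedback loop. Each "generation" fits a
# Gaussian (mean and standard deviation) to synthetic data sampled from the
# previous generation's model rather than from real data. Sampling error
# compounds, so the fit drifts away from the original distribution.
import random
import statistics

random.seed(42)
true_mean, true_stdev = 0.0, 1.0   # generation 0: the "real" data
mean, stdev = true_mean, true_stdev
sample_size = 50                   # each generation trains on this many synthetic points

for generation in range(1, 21):
    synthetic = [random.gauss(mean, stdev) for _ in range(sample_size)]
    mean, stdev = statistics.fmean(synthetic), statistics.stdev(synthetic)
    drift = abs(mean - true_mean)
    print(f"gen {generation:2d}: mean={mean:+.3f}  stdev={stdev:.3f}  drift from original mean={drift:.3f}")
```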
More Roadblocks Ahead
Yet the AI industry is already grappling with a more immediate real-world concern: serious bottlenecks in the global supply of semiconductors. It’s an intractable issue with no end in sight because of the sheer complexity of the chip-making process. Companies may be pouring billions of dollars into the race to build new factories by taking advantage of lavish government subsidies on offer in Asia, Europe and the United States. But ramping up chip supply will require much more than money and overwhelming demand.
This is perhaps best illustrated by the Biden administration’s US$52 billion CHIPS and Science Act, a bipartisan strategy to secure America’s chip supply by investing in onshore production. To succeed, it must essentially outcompete the administration’s parallel landmark infrastructure and green energy investment plans for a limited pool of construction workers. Complicating things further are the demands of organized labour. The new chip-making plants will also need to overcome a deficit of around 100,000 skilled workers to operate at full capacity, and training in an applicable field such as electrical engineering currently requires an American worker to forgo a historically strong, high-paying labour market and spend several years in education at great personal expense. On top of that, the Dutch firm ASML is the only company in the world capable of designing and producing the lithography machines needed to make the most advanced chips.
In the meantime, getting hold of the semiconductors start-ups covet, such as Nvidia’s prized graphics-processing units, means enduring long wait times, overspending, leveraging personal relationships or pooling resources. Even OpenAI has reportedly had to set usage limits on the products it sells to clients because it cannot acquire enough chips to expand its behind-the-scenes computing capacity.
Looming in the background is the costly legal reckoning over copyright infringement that awaits developers of generative AI programs. Then there is mounting public distrust of the technology itself. A survey by the Pew Research Center in August indicates that 52 percent of adults in the United States say they now feel more concerned than excited about the role of AI in daily life, up from 37 percent in 2021.
It’s clear that AI is a transformative technology that will profoundly change the world, for better and for worse. But this will not happen in isolation. AI’s future will still be determined by how it interacts with the real world, including the complex and unpredictable consequences of human agency.