By matching and surpassing human cognitive abilities, highly capable artificial intelligence (AI) — advanced AI systems of the foreseeable future, which leading AI companies are working toward as part of their broader goal to create artificial general intelligence — could be among the most transformative technologies the world has ever seen. While this radical technology is being built primarily in Global North countries, its impacts are likely to be felt worldwide, and disproportionately so in those Global South countries with long-standing vulnerabilities: weak state institutions; dependence on labour-intensive, manufacturing-based and export-led economic models; regularly recurring armed conflict; high trust in technology; and more globally subordinated cultures.

The authors of this paper consider six ways in which highly capable AI could interact with these vulnerabilities and argue that unless they are remedied before the emergence of highly capable AI, there is a chance such AI could lead to catastrophic outcomes. Given the significant societal impacts that highly capable AI could have, being concerned about AI in only a general way will not suffice. The authors argue that all stakeholders who care about those who live in Global South countries must pull on the levers available to them to influence the ongoing development of highly capable AI.

About the Authors

Cecil Abungu is a Ph.D. student at the University of Cambridge and a research affiliate at the Institute for Law & AI and the Centre for the Study of Existential Risk, University of Cambridge.

Marie Victoire Iradukunda is a master of laws student at Harvard Law School and a policy fellow with the Harvard AI Safety Student Team.

Duncan Cass-Beggs is executive director of the Global AI Risks Initiative at CIGI, focusing on developing innovative governance solutions to address current and future global issues relating to artificial intelligence.

Aquila Hassan is a project management specialist at the Centre for the Governance of AI.

Raqda Sayidali is an undergraduate student at Strathmore University and a researcher with the ILINA Program.