Global AI Risks Initiative

Leading innovation in international governance to effectively manage global AI risks.

About

Advanced artificial intelligence (AI) holds enormous promise, with the potential to accelerate scientific discovery, spur technological innovation and increase prosperity. At the same time, the risks associated with AI are growing, including issues of privacy, bias, information manipulation and inequality. Perhaps most critically, as AI systems become more powerful, they could pose safety risks worldwide. Among these global risks are the intentional misuse of powerful AI systems to cause widespread harm and the development of autonomous AI systems that cannot be controlled.

Successfully managing global AI risks requires breakthroughs in technical AI safety, such as improvements in the interpretability, controllability and alignment of AI systems. It also requires breakthroughs in governance: there is an urgent need for legitimate and effective decision making about how to develop and deploy advanced AI safely for all.

The Global AI Risks Initiative at the Centre for International Governance Innovation was created to advance the international governance that will be needed to manage global AI risks. The Initiative aims to mobilize the resources, talent and influence of policy makers, AI researchers, governance experts and civil society to reduce global risks from advanced AI systems. It will seek to build understanding of the importance of global AI risks and to broaden access to workable policy options for mitigating them.

The Global AI Risks Initiative will:

  • work with partners to develop and test the necessary components of an international treaty/framework convention on advanced AI systems, including the principles, legitimate decision-making processes and implementation mechanisms required;
  • build understanding among public policy makers and citizens of the strong scientific and technical case for concern about global AI risks by providing tailored briefing materials and information sessions; and
  • promote inclusive dialogue to strengthen global support for effective action on AI risks by convening consensus-building workshops across diverse perspectives.

Team Members

  • Duncan Cass-Beggs
    Executive Director, Global AI Risks Initiative

    Duncan Cass-Beggs is executive director of the Global AI Risks Initiative at CIGI, focusing on developing innovative governance solutions to address current and future global issues relating to artificial intelligence.

  • Matthew da Mota
    Senior Research Associate and Program Manager, Global AI Risks Initiative

    Matthew da Mota is a senior research associate and program manager for the Global AI Risks Initiative at CIGI, working to develop governance models to address the most significant global risks posed by AI and to realize the potential global benefits of AI in an equitable and sustainable way.

  • Andrew Mazibrada
    Senior Research Associate, Global AI Risks Initiative

    Andrew Mazibrada is a senior research associate with CIGI’s Global AI Risks Initiative. He is an international lawyer, specializing in treaty interpretation, human rights, and the intersection between international law and science and technology, particularly frontier AI systems. He is working to develop legal frameworks in international law to meet the global challenges presented by AI.

  • Akash Wasil
    Senior Research Associate, Global AI Risks Initiative

    Akash Wasil is a senior research associate with CIGI’s Global AI Risks Initiative and a master’s student in Georgetown University’s Security Studies Program. Akash has written papers about how governments can prepare for AI-related emergencies, verify compliance with international AI agreements, evaluate technical safety standards and apply AI risk management approaches.

Contact Us

To learn more about the Initiative or to partner with us on addressing global AI risks, please email our team at: [email protected].