How Do Current AI Regulations Shape the Global Governance Framework?

Digital Policy Hub Working Paper

March 13, 2025

Artificial intelligence’s (AI’s) rapid advancements offer unprecedented technological, social and economic opportunities, but they also pose serious global challenges, including algorithmic bias, tech monopolies and environmental harms. Countries have adopted varied AI strategies, and existing frameworks excel at fostering innovation, capacity building and regional cooperation. However, limited enforcement, insufficient inclusivity and fragmented regulation undermine their effectiveness, allowing influential actors, whether governments or corporations, to shape AI policy in ways that are likely to sideline human rights. An agile model of AI governance anchored in risk-based, rights-based and rules-based principles is therefore crucial. Proposed solutions include a universal AI convention enforced by a High Commission for AI and Human Rights; a panel, modelled on the Intergovernmental Panel on Climate Change, to provide rigorous research and policy guidance; and a global research consortium, inspired by the European Organization for Nuclear Research, to ensure inclusive, transparent AI development and equitable benefit sharing.

About the Author

Maral Niazi is a former Digital Policy Hub doctoral fellow and a Ph.D. student at the Balsillie School of International Affairs with a multidisciplinary background in political science, human rights, law and global governance. Her research with the Digital Policy Hub expanded on her doctoral research on the global governance of AI, in which she examines the societal impacts of AI on humanity.