Standards as a Basis for the Global Governance of AI in Research

Digital Policy Hub Working Paper

December 10, 2024

Artificial intelligence (AI) risk in the research sector will require international governance to preserve an independent and reliable global research sector. While many types of governance and safety tools will be necessary, international standardization would be a useful initial phase, allowing for rapid and inclusive governance on which other frameworks could build. The existing Canadian standard for AI/machine learning implementation in research institutions (CAN/DGSI 128, currently in development by the Digital Governance Standards Institute) could be proposed and taken under review for adoption as an international standard. International standardization of AI implementation in research institutions could help build global consensus on protecting research institutions and their information from AI risk, and could support other efforts at global AI governance beyond the sector.

About the Author

Matthew da Mota is a senior research associate and program manager for the Global AI Risks Initiative at CIGI. He works to develop governance models that address the most significant global risks posed by AI and to realize AI's potential global benefits in an equitable and sustainable way.