University of Toronto Libraries: A Case Study for AI Governance

Digital Policy Hub Working Paper

August 19, 2024

Existing governance structures for data and information in Canadian research institutions are varied and often overlapping. The gaps left by this patchwork system create vulnerabilities that could be exploited through the implementation of machine learning/artificial intelligence (ML/AI) tools, and leave ambiguities about the right to use certain data or metadata in developing or refining ML/AI models. Valuable but underutilized data within research institutions is particularly at risk of being accessed and used by third-party ML/AI tool developers without those institutions being properly compensated. The sector requires binding standards for ML/AI deployment, alongside broad strategic planning, the promotion of safe experimentation with ML/AI tools, and the development of frameworks for institutions to mobilize and exchange their data.

About the Author

Matthew da Mota is a senior research associate and program manager for the Global AI Risks Initiative at CIGI, where he works to develop governance models that address the most significant global risks posed by AI and to realize AI's potential global benefits in an equitable and sustainable way.