We Need a Global AI Strategy: What Role for Canada?

Only a cohesive, global approach to AI regulation will ensure comprehensive oversight.

August 15, 2024
Robots dance during a conference on global AI governance at the Shanghai World Expo Exhibition and Convention Center, July 4, 2024. (VCG via REUTERS)

Artificial intelligence (AI) has emerged as a transformative force with the potential to reshape whole societies. As its applications become increasingly integrated into our daily lives, it’s imperative that policy makers around the world cooperate to establish shared regulatory and legislative governance. In fact, what we need today is a universal AI strategy.

Notwithstanding the current patchwork of regulatory efforts aimed at governing the technology, only a cohesive global approach to regulation will ensure comprehensive oversight. From any objective standpoint, the contemporary regulatory arms race between Europe’s AI Act, Canada’s Artificial Intelligence and Data Act (AIDA), America’s executive order on AI and China’s raft of AI regulations makes AI regulation unnecessarily complicated.

Navigating the growing patchwork of regulatory jurisdictions around the world will impose burdensome complexity. Indeed, the dangers here are obvious: inconsistencies in protection, the stifling of innovation by barriers to entry, the incentivization of regulatory arbitrage, and failures to properly address fundamental human rights and values. In a field already moving far more quickly than conventional government oversight, this fragmentation is dangerous.

What Elements Are Key to a Global AI Strategy?

AI regulation is not a simple problem. Unlike the general-purpose technologies of the past, AI systems evolve in ways that make them difficult to manage. AI algorithms are designed to continually improve their performance by learning from data. As these systems continue to outperform their human counterparts in pattern recognition, problem solving and data analysis, regulation could prove extremely difficult.

For the moment, there are certain features of AI governance that are obvious and represent the low-hanging fruit of any governance strategy:

  • The need to address ethical oversight: AI applications raise profound concerns related to algorithmic bias, privacy infringement and the potential for discrimination. While some nations have taken steps to address these issues at the national level, global regulation could provide a unified ethical framework that sets international standards for AI development and usage.
  • Ensuring technological advancement: As AI advances, it is crucial that global regulation keeps pace. A coordinated global approach can promote innovation while simultaneously mitigating risks. Establishing a common framework that facilitates international cooperation on AI research and development would help ensure that breakthroughs are both broadly accessible and safe.
  • Preventing regulatory arbitrage: Gaps in national and regional oversight can incentivize businesses to seek out jurisdictions with the least stringent rules, creating a race to the bottom. This phenomenon, known as regulatory arbitrage, can undermine efforts to ensure responsible AI development. Global regulation can help harmonize standards and reduce the incentives for companies to exploit regulatory gaps.
  • Enhancing interoperability: Last but not least is the issue of standardization. AI systems are increasingly used in applications that span multiple countries and regions, such as autonomous vehicles and international health-care initiatives. Without global regulation, the lack of interoperability between different regulatory jurisdictions can hinder the efficient and safe deployment of AI technologies. Standardized rules can help ensure the seamless integration of these systems across borders.


The Limits to Global AI Regulation

While the need for global AI regulation is clear, its implementation is not without risks. Nations have unique histories, and these inherent differences stand to impose contextual limits on any attempt at global governance. The following are among the obstacles.

  • Sovereignty concerns: Nations often prioritize their sovereignty and may be reluctant to cede regulatory authority to international bodies. Ensuring that global AI regulation respects national sovereignty while addressing shared concerns requires a delicate regulatory balance.
  • Differing priorities: Countries have diverse economic, political and cultural priorities. Developing a universally applicable framework that accommodates these differences is a formidable challenge. Global regulation requires thorough diplomatic negotiations and consensus building.
  • Enforcement and accountability: Establishing a global regulatory framework is one thing; ensuring effective enforcement while holding organizations accountable for violations is another. Developing mechanisms for oversight and enforcement that work across borders will require institution building.
  • Technological advancement: The fast-paced nature of AI development means that regulatory frameworks can quickly become outdated. A global regulatory body would need to adapt rapidly to keep up with technological advancements. New forms of software-driven regulatory design and enforcement will need to be conceived.

What Role for Canada?

Canada’s policy makers have signalled their intention to play a significant role in global AI governance. The country is already home to world-renowned AI research institutions and experts, such as the Vector Institute in Toronto, Mila in Montreal and the Alberta Machine Intelligence Institute (Amii) in Edmonton. These institutions contribute valuable insights and advancements to the global AI community.

Canada has also made efforts to develop proactive policies and regulations around AI, such as AIDA in Bill C-27. More recently, the federal government has promised billions of dollars for AI research and development, including the establishment of an AI Safety Institute. With this move, Canada could help shape regulatory boundaries in guiding AI safety research, positioning the country as a global player in trustworthy AI.

The institute, together with the network of safety institutes announced at the recent AI Seoul Summit, could be helpful in promoting the sharing of insights, best practices and research findings.

However, there are certain challenges and limitations that could hinder Canada’s leadership. The most obvious challenge relates to the country’s shrinking productive capacity.

Moreover, notwithstanding efforts to cultivate a robust AI ecosystem, the loss of top talent to other nations with more lucrative opportunities and better infrastructure could impede Canada’s ability to maintain a leadership position in the sector. Economic weakness inevitably translates into weakness in governance. Canada’s leaders will need to focus their attention on rejuvenating the country’s capacity for innovation and growth.

Institution Building: A Pathway to Global Regulation

New tools inevitably mean new institutions. While there is no single way to regulate AI at the global level, the establishment of independent oversight bodies with the authority to assess and enforce global regulation is necessary and inevitable. The development of international bodies tasked with working in conjunction with existing multilateral organizations is an important first step in getting global AI regulation right.

As mentioned, the launch of a network of safety institutes at the AI Seoul Summit could be pivotal to the broader global discussion. Signatories include Australia, Canada, the European Union, France, Germany, Italy, Japan, Singapore, South Korea, the United Kingdom and the United States. Notwithstanding its lack of regulatory power, the initiative offers an example of the kinds of collaborative platforms needed.

By working alongside a network of safety institutes, Canada and other leading AI countries could help to assuage growing concerns around AI — particularly AI ethics.

Achieving a balance between innovation and safety across an evolving AI landscape is foundational to any long-term AI strategy. To that end, multilateral governance is critical to developing the legal and regulatory protocols needed. And international collaboration is essential for building public trust.

To conclude, the transformative potential of AI is immense. But this brings with it enormous responsibility. As they explore pathways to regulatory frameworks and oversight, nations can and should work together to harmonize their approaches. The goal should be to safeguard the ethical development of AI while enhancing interoperability and oversight, avoiding regulatory arbitrage and respecting national sovereignty.

Notwithstanding the challenges inherent in regulating AI, the benefits far outweigh the risks. The general applicability of AI means that all nations will be affected, regardless of their level of economic development. Indeed, in a world where this technology transcends national and regional borders, global regulation is not just an option but a necessity.

The opinions expressed in this article are those of the authors and do not necessarily reflect the views of the Information and Privacy Commissioner of Ontario.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Authors

Javier Ruiz-Soler is an expert on emerging technologies and digital policies and a senior technology and policy advisor at the Information and Privacy Commissioner of Ontario. He is also a former Digital Policy Hub visiting fellow.

Daniel Araya is a CIGI senior fellow, a senior partner with the World Legal Summit, and a consultant and an adviser with a special interest in artificial intelligence, technology policy and governance.