The global movement to regulate artificial intelligence (AI) gained momentum in Paris last month as Canada added its signature to the Council of Europe’s convention on AI. We now join other key players in the race to develop AI — the European Union, Israel, the United Kingdom and the United States — in reaching this important milestone.
The council’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law has been in the works for several years, with a final agreement reached in May 2024. The United Kingdom and the United States signed on in the fall of 2024, after broad consultation among the council’s 46 member states, along with various countries holding observer status at the council, including Canada and the United States.
The convention stands out among a series of recent initiatives at the global level to formulate an overarching framework for the oversight of AI. Others include the Organisation for Economic Co-operation and Development’s Global Partnership on AI and the United Nations’ High-Level Advisory Body on AI, with the latter recommending an international scientific panel on the issue. Global summits held in the past two years — in France, South Korea and the United Kingdom — have ended in statements of common purpose.
As distinct from these mostly collaborative efforts, the convention on AI and human rights imposes binding obligations on its parties. Each of them commits to passing domestic law on a range of issues pertaining to the “lifecycle of AI systems.” These include risk mitigation and disclosure obligations and measures to address bias and discrimination, privacy and sustainability.
But the convention has a gaping flaw. In a concession to industry pressure, it imposes no clear obligation to regulate the private sector.
Focused mainly on the use of AI by “public authorities,” it vaguely gestures at the need for each party to address AI “risks and impacts” by “private actors…in a manner conforming with” the convention’s “object and purpose.” It also excludes “matters relating to national defence.”
Commentators were quick to point out these limitations, questioning the real import of the treaty. It would certainly compel states to prohibit obviously invasive or harmful uses of AI, such as arbitrary surveillance, misinformation or decision-making by black-box algorithms. But a treaty was hardly necessary to compel such prohibitions, at least among the nations inclined to sign one in the first place.
The convention’s carve-out of commercial AI might also be symptomatic of a larger problem. Reflecting on the upshot of the Paris summit on his way home, New York Times journalist Kevin Roose noted the shift away from safety toward sustainability. The refusal of the United Kingdom and the United States to sign even a statement of shared purpose, one that imposed no obligations, marked, in his view, the collapse of global efforts to regulate AI safety. In his telling, countries are now locked in a zero-sum race to achieve artificial general intelligence at all costs.
Are the critics right? Will the ongoing AI summits and meetings of world leaders — including the Group of Seven in Alberta this June, which is set to address AI — amount to more than symbolic gestures? What about the council’s convention on AI? Will it make a difference to Canadian law now that we’re a party?
Despite the many doubts around their practical impact, these initiatives do matter, both for Canada and for the world. There is, however, ample cause for concern.
Canada’s Artificial Intelligence and Data Act, part of Bill C-27, had passed second reading when Parliament was prorogued in January, leaving it dead for now. The act would have regulated commercial AI systems such as Gemini and ChatGPT, imposing risk mitigation and disclosure obligations on providers such as Google and OpenAI. But precisely what these companies had to do to comply was unclear, with many key details left to regulations yet to be drafted.
The bill raised concerns that Canada would over-regulate, stifle innovation and discourage investment. The framework convention preserves the leeway for Canada’s next government to table a new AI bill that is less onerous for private entities. But unless we withdraw from the treaty, our commitment to regulate AI will continue to loom over government, keeping open the possibility of a more effective framework.
Yet the convention raises a host of other concerns. Even if countries seek to regulate commercial AI safety or bias, the meaning of these terms is fundamentally unclear. Only a small number of nations have signed on to the agreement, with most of the world, including the Global South, ignoring it. The treaty’s monitoring and enforcement mechanisms are weak, amounting to little more than occasional self-reporting.
The United States, currently the focal point of AI innovation, could also withdraw from the convention altogether. Vice President JD Vance told the Paris conference that “we believe that excessive regulation of the AI sector could kill a transformative industry just as it’s taking off, and we’ll make every effort to encourage pro-growth AI policies.”
But the convention and the various statements emerging from the recent global conferences on AI are still important. They may be riddled with gaps and shortcomings, but that does not make Vance right that the way to avoid excessive regulation is less oversight.
Who will keep the big platforms to their promises when it comes to energy consumption or policing the ability to create sexual deepfakes involving a neighbour or classmate? Who will impose guardrails as AI providers ramp up efforts to deploy therapy bots or even to generate erotica? What about intellectual property, misinformation, algorithmic bias or other risks?
We don’t want excessive regulation. We want effective oversight and coordination. The major breakthroughs in AI may have unfolded in commercial settings, but they were collaborative efforts among international teams of researchers. A mandate to cooperate with regulators, and to be accountable for safety and security measures, can reinforce that coordination without undermining competition.
By all means, we should be critical of the convention and its shortcomings. But we should not lose the forest for the trees. What the convention really stands for is opposition to the belief that the market imposes the only constraints we need on AI, or that innovation must be unfettered.
The question should not be what difference the treaty and the global summits on AI safety and sustainability really make, but what would happen if such efforts were abandoned. The intensity of corporate lobbying is a sign that these efforts do exert pressure.
The fact that global initiatives to regulate AI continue is a positive sign. They point to a strong and widely shared belief that AI should serve humanity and not just the market.