Last June, Canada’s federal government introduced Bill C-27, proposing new laws to deal with the widespread adoption of digital technology. Within it was a surprise Easter egg: the Artificial Intelligence and Data Act (AIDA). This risk-based, sanctions-backed act is meant to promote a responsible Canadian artificial intelligence (AI) industry and to protect Canadians against harmful AI systems. However, AIDA fails on both counts.
Indeed, AIDA is an empty shell that falls short of providing much-needed clarity about rights and obligations pertaining to AI systems. Undefined terms gesture toward obligations for entities responsible for AI systems, with the blanks supposedly to be filled in down the line by government regulations subject to little democratic oversight. The result is a lack of basic legal certainty. With crucial provisions left open-ended, AIDA is not a fully developed legal regime.
Take, for example, its vague, circular definition of “high-impact system”: “High-impact system means an [AI] system that meets the criteria for a high-impact system that are established in regulations.” Yet this definition is the crux of the act, as many of the obligations it sets out hinge on whether a system is “high-impact.” With AIDA evasive about which systems fall into this category, the government is effectively preserving its ability to decide how much (or how little) to constrain them.
In a companion document to AIDA released in March, the government specified that evidence of “risks of harm to health and safety” or “a risk of adverse impact on human rights” would be among the key factors making a system high-impact. While that clarification is a step in the right direction, it falls short of enshrining these protections in binding law, which would be the best way to ensure that Canadians are protected from AI-driven harms and that businesses know what’s expected of them.
The government also gets to decide what counts as an “explanation,” which operators of high-impact AI systems must provide to be transparent about how such systems are used. That transparency is crucial if people are to understand, and eventually contest, AI-driven decisions. Here, the companion document suggests operators must provide explanations that “allow the public to understand the capabilities, limitations, and potential impacts of the systems.” This is positive, but it still stops short of concrete requirements set out in a binding document and subject to prior public debate.
AIDA and its companion document are likewise silent on the consequences of failing to meet these obligations. It remains unclear how sanctions would actually be defined and enforced: a troubling omission, to say the least. This contrasts with Bill C-27’s detailed sanctions for violating the Consumer Privacy Protection Act (CPPA).
Regulating AI will be challenging, both technically and legally. An act can’t provide for everything, but it should at least lay out the big picture, leaving regulations (flexible tools for fast-moving or technical matters) to specify the details. This is the pattern followed by the CPPA, the main legislative proposal in Bill C-27. AIDA, by contrast, lacks any semblance of certainty.
In AIDA, even broad orientations are left to regulations yet to be announced, forthcoming at the government’s discretion. Without clarity about what’s on the table, constructive public debate will remain shallow. The companion document provides some details on the logic behind the proposal, along with some examples of implementation, but it doesn’t paint a clear picture of what the framework will look like. That effectively concentrates more arbitrary power in the hands of the government. At any rate, the failure to spell out obligations, whether in the act or in regulations, misses the mark on securing a predictable legal environment.
This all-power-to-government framework extends to AIDA’s enforcement. The bill proposes to create the highly anticipated role of “AI and Data Commissioner,” whose responsibility will be to implement the law. However, “commissioner” here is a misnomer. Unlike other federal commissioners, the AI and Data Commissioner would not be an independent agent heading a regulatory agency. Instead, a senior official answering to the minister of innovation, science and industry will lead enforcement.
That minister’s portfolio includes the important mission of promoting innovation, accelerating AI development and capitalizing on Canada’s AI research to maintain the country’s competitive edge in a global market. That orientation, however, is hard to reconcile with the commissioner’s mission of protecting Canadians against AI-driven harm.
Indeed, promoting growth and innovation sits uneasily with the commissioner’s role of protecting Canadians and ensuring compliance with AIDA. Like having a road traffic controller answer to a racecar driver, putting oversight in the minister’s orbit sets up the commissioner for failure. A better approach would be to establish an administrative body with quasi-judicial independence and enforcement powers. That is the model adopted in the CPPA, which gives the privacy commissioner enforcement powers against private-sector actors for privacy violations.
Undefined regulations and a weakened enforcer give far too much power to the executive. Besides writing the playbook, the minister will be the de facto judge, jury and executioner. Leaving such power to the discretion of the government seriously undermines the bill’s legitimacy and should be a red flag for both businesses and Canadians. Businesses need independent regulators insulated from the political whims of the government of the day. Canadians need strong regulators with sufficient resources, independence and expert staffing to provide adequate oversight.
These fundamental flaws should be addressed now to allow for a meaningful, democratic debate. We need to have a conversation about the basis of the rulebook for AI systems. Regulation here should not be left to any single minister, let alone one whose mandate justifiably leans toward promoting innovation.
Canada pioneered responsible AI in the public sector, actively engaging on the topic in international fora and setting out how the federal government procures AI systems. These initiatives earned us hat tips from all over the world.
In recent years, however, Canada’s domestic actions have not capitalized on those early gains. This is bad for innovators and for Canadians. We can, and must, do better with AIDA.
The government needs to go back to the drawing board. It should begin by engaging in an actual conversation with all stakeholders, including civil society.