Canadian researchers are pioneers in many of the fields that make up artificial intelligence (AI), and that leadership has spurred momentum in attracting foreign investment and government support. But when it comes to the governance of AI, Canada is lagging, and falling farther behind with each day of neglect and inaction.
It’s perplexing: not only is Canada in an arguably optimal position to lead on AI governance, but AI is currently one of the most widely discussed public policy areas. Ask almost anyone what AI is, what they think about it or which aspects of it concern them, and they will express uncertainty, doubt and even fear. They’ve seen the movies, they’ve read the science fiction, and for many of them AI represents not only a threat, ranging from the potential loss of jobs to autonomous weaponization, but an existential crisis that will culminate in a day of reckoning, when machines rise up against people.
One can make the case that these widespread worries about and mistrust of AI reflect a deep desire to see some sort of governance model or regulatory framework applied to AI.
Nonetheless, the cohort of government and public policy workers focused on AI governance is a small one. Five individuals seem to be leading the charge.
- Michael Karlin, who works for the Treasury Board of Canada Secretariat, is studying the role of AI within government and in public service delivery, but he is not addressing the governance of AI itself.
- Taylor Owen, an assistant professor at the University of British Columbia and board member at the Centre for International Governance Innovation, has published work that frames the need for AI governance in urgent, even alarming, terms.
- Elizabeth Dubois, an assistant professor at the University of Ottawa, is examining the role and influence of political bots in elections, and is generally interested in how to increase political participation and engagement.
- Ian Kerr, who holds the Canada Research Chair in Ethics, Law & Technology at the University of Ottawa Faculty of Law, recently co-edited Robot Law, a book about the intersection of law, robotics and artificial intelligence. His current passion is a call for an international ban on killer robots, which speaks directly to the primary, science fiction-fuelled fear that many Canadians have about AI. Kerr is one of the world’s leading experts on AI, law, ethics and governance, and he has spent the past few years urging the Canadian government to take the lead on this issue, but the government has yet to act on his call.
- Finally, there’s Fenwick McKelvey, an assistant professor at Concordia University. His research spans network neutrality, the politics of platforms (such as Facebook and Twitter), bots in elections (with Dubois) and the governance of AI. McKelvey is also participating in an initiative led by Université de Montréal that has come up with the Montreal Declaration for a Responsible Development of AI. As an ethical framework, it provides Canada’s only example of work toward the governance of AI, albeit largely in the form of questions.
Effectively, that’s it. Outside of new research and projects from these few ambitious pioneers, AI governance in Canada is operating from a dated rulebook.
Canada defaults to its Charter of Rights and Freedoms when it comes to governing AI. According to the media relations team at Innovation, Science and Economic Development Canada, AI governance also falls under the “existing marketplace framework,” in addition to the federal Personal Information Protection and Electronic Documents Act (PIPEDA). Translation: the existing marketplace is owned and operated by large foreign entities, and it operates within a privacy framework that has no teeth and has largely failed to hold those companies accountable.
Yet there is a range of ways in which AI applications, impacts and influences could violate the Charter. The most obvious example is discrimination. AI is increasingly being used to assist, augment or even replace human decision-making, but early research suggests that biases in the training data or in the machine learning models themselves can cause these systems to discriminate on the basis of race, gender or socioeconomic status.
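To make that mechanism concrete, here is a minimal sketch in Python, using entirely synthetic data and a hypothetical loan-approval scenario (the feature names and numbers are invented for illustration). It shows how a model can reproduce historical bias even when the protected attribute is withheld from it, because an innocuous-looking feature acts as a proxy:

```python
# A minimal sketch, on synthetic data, of bias laundering through a proxy
# feature. The loan-approval scenario and all variable names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (group 0 or 1) -- never shown to the model.
group = rng.integers(0, 2, size=n)

# Postal zone correlates strongly with group membership: a proxy feature.
postal_zone = group + rng.normal(0, 0.3, size=n)

# Income is genuinely predictive and independent of group.
income = rng.normal(50, 15, size=n)

# Historical decisions: qualified applicants from group 1 were denied
# half the time, encoding the bias into the training labels.
qualified = income > 45
denied_by_bias = (group == 1) & (rng.random(n) < 0.5)
approved = (qualified & ~denied_by_bias).astype(int)

# Train only on the apparently neutral features: income and postal zone.
X = np.column_stack([income, postal_zone])
model = LogisticRegression(max_iter=1000).fit(X, approved)

# Predicted approval rates still differ sharply by group, because the
# proxy feature lets the model reconstruct the historical bias.
pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted approval rate = {pred[group == g].mean():.2f}")
```

Stripping the protected attribute from the data does not strip out the bias; it merely hides it, which is why assurances that a system “doesn’t use race or gender” are not, on their own, a governance answer.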
The Charter of Rights and Freedoms, paired with PIPEDA and the “existing marketplace framework,” isn’t adequate. This patchwork of policies built for other purposes can’t come close to addressing the major questions of AI governance.
First, most users are not providing meaningful consent to the use of their information, because they are in no position to understand how AI applications will use it. Second, aggregation of data is often employed to anonymize it and protect privacy; however, the correlations that advanced AI can draw allow users to be identified and de-anonymized. Under the existing marketplace framework, Canadians are simply supposed to trust black boxes to ensure that Charter rights are not being violated and that the algorithms are not discriminating against people.
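The de-anonymization risk is equally concrete. Below is a minimal sketch, with toy data and hypothetical column names, of the classic linkage attack: records stripped of names are re-identified simply by joining them to a public dataset on shared quasi-identifiers such as birth year, postal code prefix and gender:

```python
# A minimal sketch of a linkage attack. All records, names and column
# labels here are invented for illustration.
import pandas as pd

# "Anonymized" health records: direct identifiers removed,
# quasi-identifiers kept.
anonymized = pd.DataFrame({
    "birth_year":    [1961, 1975, 1990],
    "postal_prefix": ["K1A", "M5V", "H2X"],
    "gender":        ["F", "M", "F"],
    "diagnosis":     ["diabetes", "asthma", "depression"],
})

# A public list (e.g., a voter roll) carrying names alongside the same
# quasi-identifiers.
public_list = pd.DataFrame({
    "name":          ["Alice Tremblay", "Bob Singh", "Carole Roy"],
    "birth_year":    [1961, 1975, 1990],
    "postal_prefix": ["K1A", "M5V", "H2X"],
    "gender":        ["F", "M", "F"],
})

# A simple join on the quasi-identifiers re-attaches names to diagnoses.
reidentified = anonymized.merge(
    public_list, on=["birth_year", "postal_prefix", "gender"]
)
print(reidentified[["name", "diagnosis"]])
```

In practice the auxiliary dataset might be a voter roll, a social media profile or a data broker’s file, and the richer the correlations an AI system can exploit, the fewer quasi-identifiers it needs to single a person out.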
Dubois argues that “AI is becoming pervasive and omnipresent. As Canadians we need to be aware of this reality and confident that our government is protecting our interests and needs.”
For McKelvey, those needs begin with elections, given the growing role of bots in the electoral process and the media-disseminating power of platforms such as Facebook and Twitter. PIPEDA does not cover political parties, and the use of social media advertising in political campaigns allows for a kind of targeting and manipulation that existing laws did not anticipate.
And, in the long term, McKelvey wants to see privacy laws used to slow down the development of AI, or at least to help make it more transparent to those most affected by it. He points out that the current fuel for the rise of AI is our personal information.
“Canadians are willingly participating in an experiment they do not understand and arguably don’t consent to,” he says.
To be fair, money is flowing toward AI research in Canada, with the federal government alone pledging $125 million in the 2017 budget. Various levels of government and the private sector alike have championed the foreign-dominated industry, supporting firms with research and development funds. But the enthusiasm subsides when it comes to working on issues of ethics and governance.
The new Canada.ai portal is a great example. Created by the organizations that receive federal funding to pursue AI research and development, the site is designed to “showcase Canada’s leadership in the field of artificial intelligence.”
The site focuses exclusively on the commercial side of AI. Issues of governance, along with perspectives from social scientists, ethicists and researchers in the humanities, are absent, despite those disciplines’ substantial contributions to the discussion around AI governance. The researchers highlighted above certainly aren’t mentioned.
When asked what federal initiatives were underway to examine or address the governance of AI, the media relations team at Innovation, Science and Economic Development Canada said they were funding the organizations that are part of Canada.ai and nothing else. The projects that have received funding are focused on the functionality of AI, not the governance of AI, so it’s no wonder that the federal efforts haven’t seeded a plan for AI’s regulatory framework.
This approach is in direct contrast to that of the European Union, where the General Data Protection Regulation is giving Europe the opportunity to develop genuine capacity for AI governance.
Europe recognizes that governance innovation has to keep pace with technological innovation. It seems like common sense: as AI transforms the economy, the regulatory environment must evolve with it. The rights of citizens need to be protected, and the ability of government to govern must be maintained.
In North America, libertarianism is the dominant policy discourse when it comes to technology. Often called the Californian Ideology, this worldview serves the interests of Silicon Valley well, but it certainly does not serve the interests of Canadians. Interestingly, Microsoft itself made the case at the January 2018 World Economic Forum in Davos that the world needs new laws and regulations to govern AI.
The longer we defer developing the capacity to govern AI, the harder it will be to govern, and the harder it will be to use public policy as a means of responding to and engaging with the world around us. Perhaps this is the goal of the libertarians of Silicon Valley: to create new monopolies that are free from regulation, and free from public accountability.
Toronto’s Quayside neighbourhood is a perfect example of how this might play out. The site will host a “smart neighbourhood” built by Alphabet’s Sidewalk Labs as a sort of template for the future of cities. In this context people are not seen as citizens, but rather as subjects of a grand experiment — “users” of a neighbourhood rather than members of a community or participants in a democratic society. Given that the municipal government, let alone the provincial or federal government, has little to no experience in the governance of AI, what can it do to engage or supervise this experiment?
Regulation in a democratic society generally centres on the principle of preventing or mitigating harm. When it comes to technology, and AI in particular, we’ve been blind or willfully ignorant. Two recent books, Cathy O’Neil’s Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (2016) and Virginia Eubanks’s Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (2018), make the case that a lack of effective AI governance can result in considerable harm.
Perhaps there is room in existing Canadian regulations — federal privacy law and competition law — to govern AI. Effective governance, however, requires an understanding of the potential harm that can come from AI systems, and how privacy, security or ethics concerns intersect with existing laws and regulations. That kind of interdisciplinary collaboration isn’t currently happening in the Canadian AI landscape.
It’s daunting to attempt to regulate foreign tech titans like Facebook, Google or Amazon, but people — Canadians included — have something of incredible value that these companies want: personal information. Combine that asset with society’s growing expertise in the sprawling world of AI, and all of a sudden the general public has a fairly strong negotiating position.
Fearmongers say that regulation will stifle innovation, or that foreign companies will leave and go elsewhere. In this instance, the opposite is true.
In Canada, innovation in AI is supported by the public, whether via public funds or institutions, or the personal information of Canadians. The Facebooks and Googles of the world are not going anywhere. In order to continue harvesting the information of Canadians, they’ll bend to almost any laws created, and Europe is a great case study for this. The university system will continue to produce world-class research, and developers will continue to find new commercial opportunities enabled by emerging technology.
The question is not whether all of this innovation will happen, but under what terms it will happen — and that’s up for debate. In the era of AI, government support can’t stop at innovation of software or products. Innovation of government must happen too.