Explainability Should Be Central to Canada’s Governance of Generative AI

Why is this so important? Because it builds public trust.

November 22, 2023
The proliferation of new technologies is precipitating a rethink of the requirements of governance. (Photo illustration by Jaap Arriens/NurPhoto via REUTERS)

Canada’s voluntary code of practice for generative AI was announced by Innovation, Science and Economic Development Canada (ISED) in late September 2023 and has 15 signatories to date, including BlackBerry, Cohere and Telus. Notably missing is any reference to explainability. That is a significant oversight.

The vast majority of policy and regulatory instruments contemplate explainability: the ability to understand how a model arrives at its outputs, and therefore to trust them. Most of our current governance models were developed prior to the spectacular debut of OpenAI’s ChatGPT in 2022, including Canada’s proposed Artificial Intelligence and Data Act and the US Blueprint for an AI Bill of Rights. The rise of generative AI has triggered a new set of concerns that policy makers are rushing to address. The results are, to put it diplomatically, mixed.

In June 2023, the proposed EU Artificial Intelligence Act was modified to incorporate new obligations for foundation models, including design and development approaches that achieve “appropriate levels of interpretability” over the model’s lifecycle (article 28b (2.c)). In July, the White House announced it had secured voluntary commitments from seven AI companies (now 15) to manage AI risk. These commitments fell into three categories, including “safety,” where explainability was specifically referenced: companies making this commitment pledge to “advancing ongoing research in AI safety, including on the interpretability of AI systems’ decision-making processes.” More recently, G7 leaders agreed to a code of conduct on AI that advocates for explainability and interpretability, among other requirements.

Historically, practices around the use of models have been loose, with the exception of regulated areas such as banking. Not surprisingly, the proliferation of new technologies is precipitating a rethink of the requirements of model governance. A recent report from Canada’s Office of the Superintendent of Financial Institutions emphasized that different use cases call for different levels of explainability.

Why is explainability so important here? From a governance standpoint, it builds public trust. From a technical standpoint, the ability to understand how models are trained and to interpret how they arrive at outcomes enables assessment of system behaviour. Explainability might be seen as a necessary but insufficient condition for many of the principles we associate with trustworthy AI, such as accuracy, fairness, safety, security, privacy and compliance.

But the fact that explainability is important doesn’t make it easy. Even before ChatGPT, when less spectacular forms of AI roamed the Earth, explainability was a challenge. Machine-learning applications vary significantly, and for the most part, developers and deployers rely on after-the-fact (post-hoc) analyses, which can be badly misleading. With big data, many methods do not lend themselves to both accuracy and true interpretability. The problem is even bigger with foundation models and generative AI.
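
To make the idea of an “after-the-fact analysis” concrete, the sketch below trains a black-box classifier on synthetic data and then probes it with permutation feature importance, one common post-hoc technique. The dataset, model and setup are hypothetical, chosen purely for illustration; they are not drawn from any system discussed in this article.

```python
# A minimal sketch of a post-hoc ("after-the-fact") explanation, assuming a
# hypothetical tabular prediction task. We train a black-box model first,
# then probe it afterwards with permutation feature importance.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data standing in for something like loan applications.
X, y = make_classification(n_samples=2000, n_features=8, n_informative=3,
                           n_redundant=2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box": accurate, but not directly interpretable.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Post-hoc probe: shuffle each feature and measure the drop in test accuracy.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")

# Caveat (the article's point): when features are correlated or redundant,
# these scores can spread credit among them almost arbitrarily, so the
# "explanation" may say little about how the model actually reasons.
```

A post-hoc score like this can look authoritative while revealing little about the model’s internal logic, and the gap only widens for generative models with billions of parameters.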

Traditionally, the aspiration for model explainability was paramount. Banks had to know if the AI-powered mortgage-approval process was unfairly biased with respect to gender, ethnicity or race; Google’s AI systems had to “know” why they were targeting you with specific ads so they could target more effectively and increase revenue; AI-powered medical diagnoses had to show why cancer was detected so that false flags could be caught and new cancer-causing factors could be discovered. The common theme in all these cases was specificity of data and predictions.

Generative models are expanding the horizon. You can ask GPT-4 to draw you a picture or write an email. This jump from narrow intelligence to general-purpose AI was the result of using very complex algorithms that make sense of billions of bits of data. With these systems, it becomes much more difficult to determine how the answer is arrived at; what a model is “predicting” changes almost every time, and what goes on inside it is an incalculable amount of math, influenced by billions of data points.

Indeed, the innovations and infrastructure underlying generative AI are not built for true explainability. Imposing a classical requirement for it on these systems would force businesses to choose between older systems (where explainability is possible but often illusory) and the newer black-box systems. This would be a very costly choice.

Since Canada is one of several world leaders in AI research and development, Canadian developers and regulators need to push toward explainable solutions. Voluntary codes of practice are useful guideposts for both governance and innovation. As voluntary instruments, they can be aspirational, signalling to developers where regulators and stakeholders want to see focus.

The organizations that signed on to Canada’s code are taking a worthwhile first step toward addressing the concerns and limitations of generative AI models. Explainability should be front and centre in their future plans. And the requirement for it should be enshrined in any Canadian code of practice for generative AI. Otherwise, how can these systems ever gain broad public trust?

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Authors

Mardi Witzel is the CEO and co-founder of PolyML, an early-stage Canadian artificial intelligence and machine learning company with novel technology.

Benji Christie is co-founder and CEO of Formic.ai.