Is AI Governance a Vitamin Pill or a Painkiller?

It isn’t always possible to see how decisions are made about AI. It’s time for organizations to get serious about taking concrete steps toward effective AI governance.

September 15, 2022
Two architects have unveiled a conceptual design for a tranquil Japanese retreat using an algorithm. (Cover Media via REUTERS)

Let’s say you work for a bank that uses automated systems to make decisions about loan applications, or hiring, or internal promotion. These systems include machine-learning tools designed according to a set of criteria, trained on historical data sets, then freed to do their mysterious work. Maybe you personally were passed over for a promotion.

Now, imagine that sometime later, you learn that the artificial intelligence (AI) making this decision was flawed. Perhaps the data used to train it was biased, or the model was poorly designed. Maybe the system “drifted,” as machine-learning models are known to do (drift happens when a model’s predictive power decays over time because the real world diverges from the data it was trained on). It’s one thing to get turned down by a human whose decision you can challenge. With AI, there is far more grey area: it isn’t always possible to see how its decisions are made.
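To make the idea concrete, here is a minimal Python sketch of the kind of drift check an organization might run: it compares a model’s recent behaviour against a baseline window and raises a flag when the gap exceeds an agreed tolerance. The metrics, numbers and names are illustrative assumptions, not any particular bank’s system.

    # A minimal, hypothetical sketch of drift monitoring: compare a model's
    # recent behaviour against a baseline window and flag drift when the gap
    # exceeds a chosen tolerance. All figures are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class WindowStats:
        approval_rate: float   # share of applications the model approved
        accuracy: float        # agreement with eventual outcomes (e.g., repayment)

    def detect_drift(baseline: WindowStats, current: WindowStats,
                     tolerance: float = 0.05) -> bool:
        """Flag drift if either metric has moved more than `tolerance` from baseline."""
        return (abs(current.approval_rate - baseline.approval_rate) > tolerance
                or abs(current.accuracy - baseline.accuracy) > tolerance)

    if __name__ == "__main__":
        baseline = WindowStats(approval_rate=0.62, accuracy=0.91)  # e.g., validation period
        current = WindowStats(approval_rate=0.48, accuracy=0.83)   # e.g., most recent quarter
        if detect_drift(baseline, current):
            print("Drift detected: escalate for review and possible retraining.")

A check this simple would not explain why a model drifted, but it shows how drift can be made visible to the people responsible for overseeing the system rather than left as a mystery.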

This truth underlies the widespread call for trustworthy AI — that is to say, for transparency, fairness and accountability in the development and use of AI solutions. Despite the great promise of these tools, the risk of negative outcomes is not far-fetched. AI bias is documented and real. This is why it’s time for organizations to get serious about taking concrete steps toward effective AI governance.

Indeed, there are hard costs to AI done badly — including fines, litigation and settlement charges. Unsurprisingly, legislation has been proposed in the European Union and Canada that would impose massive penalties for breaching the rules around AI development and use. Companies have already experienced the hard costs of data breaches: for example, Capital One was fined US$80 million over its 2019 data breach and settled customer lawsuits for US$190 million. AI-related infractions will be similarly costly. And beyond the hard costs, soft ones — such as business distraction, loss of confidence and reputational damage — have even greater potential to damage organizations that do AI badly.

That’s why every board and senior management team in just about any business today should be talking about AI governance. At the policy level, nation-states and non-governmental organizations have been addressing the issue for several years, and numerous principles-based frameworks are in circulation, including the Model AI Governance Framework and the Responsible AI Global Policy Framework. What has been slower to develop is a practical, firm-level capacity for governing AI.

That’s partly because so few organizations are doing it in practice, which is concerning, because the technology is already being broadly deployed. Along with other contemporary governance challenges such as data and cybersecurity, AI governance needs a place in the boardroom. In the absence of hard legislation and compliance requirements, many private sector actors will view it as an unnecessary cost or distraction, and the pacing problem — the recognized gap between the emergence of new technologies and their legal and ethical oversight — is expected to widen. It’s time for organizations to get serious about operationalizing the principles-based frameworks: taking terms such as transparency, fairness, bias, accuracy and privacy and turning them into quantifiable, measurable requirements for trustworthy AI.

What is involved in actually doing AI governance? Implementation is not a one-size-fits-all undertaking and needs to be organization-specific. Several practical components can contribute to success:

  • Board-level education: Education for boards and senior leadership teams should be tailored to the organization’s strategic priorities, governance maturity and specific AI use cases. AI is a fast-changing technology and governance challenge, so education and awareness should be treated as ongoing requirements. For most boards, AI also belongs on today’s risk registers — the tools organizations use to identify and characterize risks, along with the probability and likely severity of their occurrence, and possible mitigating actions.
  • AI governance framework: Creating an organization-specific framework involves drawing on best practices from global frameworks, evolving standards and legislation. The framework needs to be flexible enough to be applied across a broad range of AI projects, and it should enable oversight of AI systems according to defined governance guardrails.
    • Guardrails are the policies, processes, tools and measures defined by the governors, with which they can monitor an AI system’s performance and ethical behaviour and assure themselves that it is operating safely. Guardrails should be focused on stakeholders’ interests and their perceptions of whether an AI system can be trusted.
  • AI governance platform: Effective oversight is best achieved with a platform that sets the guardrails, measures outcomes against them and provides a repository of tools for performing due diligence. AI is dynamic, so use cases must be monitored over time. The key is to translate important technical details into governance visuals, such as oversight dashboards that capture an AI project’s operating metrics, and operational thresholds that trigger action if an AI system is operating outside its guardrails, as sketched below.
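What such a threshold check might look like in practice: the short Python sketch below is a hypothetical illustration of the kind of guardrail monitoring an AI governance platform could run behind an oversight dashboard. The metric names and acceptable ranges are assumptions made up for the example, not a standard or any vendor’s product.

    # A hypothetical guardrail check: each metric has an agreed range, and any
    # measurement outside that range (or missing entirely) produces an alert.
    GUARDRAILS = {
        # metric name: (lower bound, upper bound) the governors have signed off on
        "accuracy": (0.85, 1.00),
        "approval_rate_gap_between_groups": (0.00, 0.05),
        "decisions_with_explanation": (0.95, 1.00),
    }

    def check_guardrails(measured):
        """Return an alert for every metric that is missing or outside its agreed range."""
        alerts = []
        for metric, (lo, hi) in GUARDRAILS.items():
            value = measured.get(metric)
            if value is None:
                alerts.append(f"{metric}: no measurement reported")
            elif not lo <= value <= hi:
                alerts.append(f"{metric}: {value:.2f} is outside [{lo:.2f}, {hi:.2f}]")
        return alerts

    if __name__ == "__main__":
        this_month = {
            "accuracy": 0.88,
            "approval_rate_gap_between_groups": 0.09,  # wider than the agreed 0.05
            "decisions_with_explanation": 0.97,
        }
        for alert in check_guardrails(this_month):
            print("ACTION REQUIRED:", alert)  # e.g., escalate to the oversight committee

The point is not the code itself but the governance design choice it reflects: the thresholds are set by the people accountable for the system, and breaching one triggers a defined action rather than a judgment call made after the fact.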

In other words, it isn’t enough to just talk about ethical AI. Organizations need to implement it. That means outlining a process to define, measure, monitor and report on aspects such as fairness, bias, explainability and privacy. Regulations are coming, in Canada and around the world, but waiting for the ink to dry on those is a mistake.
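As one hedged illustration of how a term such as fairness can be made measurable, the Python sketch below computes a demographic parity difference: the gap in approval rates between two groups of applicants. The data is invented for the example, and real programs would track several such metrics alongside measures of bias, explainability and privacy.

    # An illustrative fairness metric: the absolute gap in approval rates
    # between two groups. A value of 0.0 means parity on this measure.
    def approval_rate(decisions):
        """Share of applications approved (True = approved)."""
        return sum(decisions) / len(decisions)

    def demographic_parity_difference(group_a, group_b):
        """Absolute gap in approval rates between two groups."""
        return abs(approval_rate(group_a) - approval_rate(group_b))

    if __name__ == "__main__":
        # Invented decisions for two groups of applicants.
        group_a = [True, True, False, True, True, False, True, True]     # 75% approved
        group_b = [True, False, False, True, False, False, True, False]  # 37.5% approved
        gap = demographic_parity_difference(group_a, group_b)
        print(f"Approval-rate gap between groups: {gap:.1%}")  # a figure a board can track over time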

AI has the potential to make us individually and collectively healthier or sicker, more prosperous or left behind, safer or more vulnerable. AI governance done early and well provides a competitive boost, like a vitamin. AI governance done poorly or when a crisis looms becomes a costly but necessary painkiller. It’s time for theoretical discussion to make way for AI governance in practice.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Authors

Mardi Witzel is the CEO and co-founder of PolyML, an early-stage Canadian artificial intelligence and machine learning company with novel technology.

Niraj Bhargava is the CEO and lead faculty at NuEnergy.ai and an expert on artificial intelligence governance.