Can We Trust AI? From Ottawa, a Qualified Yes

On the one hand, we’re told AI holds the potential to solve some of the world’s biggest problems. On the other, some very smart people have been sounding the alarm.

July 6, 2022
The T-Head 910 chip is displayed at the Alibaba stand at the World Artificial Intelligence Conference in Shanghai, August 29, 2019. (REUTERS)

Will artificial intelligence — AI — save humanity or supplant us? The question has arisen with escalating frequency in recent years — a sort of journalistic thought bubble emerging from the collective, and anxious, consciousness of writers.

On the one hand, we’re told AI holds the potential to solve some of the world’s biggest problems, challenges such as poverty, food insecurity, inequality and climate change. On the other, some very smart people have been sounding the alarm for years. Elon Musk called AI “humanity’s greatest threat.” Stephen Hawking said the technology could “spell the end of the human race.” That’s some major whistle-blowing.

Globally, there has been a surge in advocacy for ethical AI. The Organisation for Economic Co-operation and Development catalogues more than 60 countries with national AI strategies, but few have hard legislation yet. In 2021, the European Union introduced its proposed Artificial Intelligence Act (AIA); in February of this year, the Algorithmic Accountability Act was reintroduced in the US Senate and House of Representatives; and on March 1, China enacted sweeping legislation governing the way online recommendations are made through algorithms. Last month, the Canadian federal government joined the party, introducing the Artificial Intelligence and Data Act on June 16.

The proposed Artificial Intelligence and Data Act is one of three regulatory components of the Digital Charter Implementation Act, 2022 and, if passed, will require firms that design, develop and use high-impact AI systems to meet certain requirements aimed at identifying, assessing and mitigating bias and harm. The act outlines seven requirements, relating to anonymization of data, system impact assessment, risk management, system monitoring, record keeping, publication of system descriptions and notification of material harm. The proposed legislation is noteworthy as the first example of a hard law on AI aimed at the private sector in Canada.

It will take some time to understand, in practical terms, what companies will need to do to comply, how they will demonstrate compliance and what the federal government’s approach to enforcement will look like. As with the European Union’s proposed AIA and the General Data Protection Regulation, the penalties for offences are severe: fines of up to $10 million or three percent of a company’s gross global revenues.

What does seem likely is that this type of law will induce the creation of an entire new services market, the market for AI assurance. The push to develop robust approaches to assuring the trustworthiness of AI is evident in places such as the United Kingdom, where the cultivation of a world-class AI assurance industry is seen as a critical component for harnessing the technology’s transformative potential. The vulnerability of institutional reputations to AI-related misadventure, and the financial materiality of these forays, has given rise to a broad discussion around ethical AI — but in most places, that discussion has yet to evolve from talk to walk.

There is a vanguard of practitioners carving a path in AI governance, but as yet there are few practical models. That will change with regulation, as the scales tip in favour of compliance. Firms and other organizations that are designing, developing and deploying AI will soon have a concrete reason to figure out what it means to practise AI governance, a function that encompasses establishing guardrails, applying specialty tools and employing services including education, conformity assessment and third-party audit.

The firms that do it well will be integrating the governance of AI into their enterprise risk-management systems, identifying and mitigating risks at each stage of the machine-learning life cycle. AI assurance is both a market need and a market opportunity, and there will be a flood of new entrants to the space, particularly as standards emerge and disclosure requirements from regulators, such as securities commissions, proliferate.

Who will enter the fray to provide this type of service? Perhaps existing professions, say accountants or engineers, will round out their curricula with new and broader competencies. The opportunity seems ripe for the chartered professions, with their combination of subject matter expertise, credentialling and integrity, but large institutional players don’t always lead, as size and speed are often in tension.

Perhaps we will see the emergence of a new profession, based on a composite of data science, information technology and assurance skills. Maybe the start-up community will claim this territory, with an agility and expertise that combine to get more help to more organizations more quickly. In the next five years we will see the growth of a major new market in technology assurance, and much of it will be built around AI. Disclosure will be the linchpin in this emerging field.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Mardi Witzel is the CEO and co-founder of PolyML, an early-stage Canadian artificial intelligence and machine learning company with novel technology.