Recent news has featured many horror stories about ArriveCAN, the Government of Canada’s COVID-19 border app, incorrectly instructing travellers entering Canada to quarantine. The government apologized and acknowledged a glitch was affecting up to three percent of travellers using the app. Although the precise number is unknown, the glitch may have affected thousands, if not tens of thousands, of travellers.
The incident highlights several shortcomings of Canadian privacy and data protection law when it comes to mandatory data collection by the government. In private transactions, individuals can generally withhold or revoke consent if they are unhappy with the results of an automated decision-making process or algorithm. If one does not like the processes used by a private social media company such as, say, Facebook or Twitter, one can avoid using the service.
Not so with ArriveCAN. Through the Quarantine Act, the government requires use of ArriveCAN as a “condition” of entry, with few exceptions and under significant monetary penalty for non-compliance. But ArriveCAN does not actually determine entry eligibility. Instead, it collects travel and health information (everything from contact information and trip details to proof of vaccination and a quarantine plan), verifies it, and then issues a receipt to travellers. This receipt must be shown to a Canada Border Services Agency (CBSA) officer. The officer, not the app, makes the final decisions about the traveller’s eligibility to enter Canada, the validation of the traveller’s vaccination status and whether the trip is essential or discretionary. After travellers have entered Canada, ArriveCAN may continue to communicate with those subject to conditions based on the CBSA officer’s determination. In some cases, as the government recently admitted, ArriveCAN sends “notifications that don’t reflect your situation.”
Unfortunately, little is publicly known about the underlying technology. The government has said the app was developed by five private companies: BDO Canada, TEKSystems, Coradix Technology Consulting, Dalian Enterprises, and GCStrategies. None of these companies are on the Government of Canada’s “list of suppliers who can provide the Government of Canada with responsible and effective AI services, solutions and products.” To date, the government has not released any of the contracts for their services. (In response to my requests for them under the Access to Information Act, the CBSA immediately issued a delay letter with a delay period that outlasts the underlying Order-in-Council. I have submitted a complaint.)
What little is known about ArriveCAN is concerning. An algorithmic impact assessment of ArriveCAN’s optical character-recognition algorithm, conducted last October by the chief data officer of the Public Health Agency of Canada, found that the app scored 47 out of 107 (roughly 44 percent) on the assessment’s risk-mitigation grading scheme. (For reasons that are unclear, the app is assigned the second-lowest risk level and needs a score of only 25–49 percent to pass the assessment.)
The chief data officer noted ArriveCAN has no processes in place:
- to test for “bias and other unexpected outcomes”;
- “to manage the risk that outdated or unreliable data is used”;
- “to document how data quality issues were resolved during the design process”;
- “to log the instances when overrides were performed”; or
- to show who is making such overrides.
ArriveCAN’s privacy notice says that the app also shares information between government institutions (and with private contractors). However, the assessment says there are no agreements or protocols in place to govern this data sharing, even though the privacy commissioner has called for them.
Significantly, the algorithmic assessment also notes that this algorithm is a trade secret. If the algorithm is the proprietary information of one of those companies, it cannot be disclosed to the public under the Access to Information Act, leaving Canadians without meaningful transparency or accountability about how ArriveCAN’s algorithm is designed to arrive at its decisions. Such decisions might include not only the faulty quarantine orders but other determinations, too, such as how individuals’ data is used. The government notes that ArriveCAN’s algorithm works by verifying information against data sets made up of other individuals’ data. In other words, ArriveCAN makes individual Canadians complicit in the production of its decisions without even letting them know the decision-making criteria. Challenging those decisions and practices in court is virtually impossible.
It is worth recalling that ArriveCAN no longer bears any connection to its original goal of contact tracing during the COVID-19 pandemic. While it was intended to speed things up at the border by helping CBSA agents make eligibility determinations, the union for CBSA officers says it does just the opposite (for example, “We’re in a situation where we’re kind of not doing our actual work as border service officers anymore. All of our time is being spent on the app”). In most cases, ArriveCAN does not even collect complete health information, such as booster shots. Finally, although the Quarantine Act’s provisions currently mandate its use by travellers, one of the prerequisites under the act is that no reasonable alternative exists. Given that ArriveCAN is not used to determine eligibility to enter Canada (a CBSA officer does that), reasonable alternatives appear to be available.
The foregoing observations raise pressing questions. Why do procurement laws permit private actors to provide such services in ways that thwart transparency and accountability laws? How does this affect the legitimacy of government institutions deploying such technologies? Can other laws, such as the Canadian Charter of Rights and Freedoms, resolve any of these accountability gaps?
Going forward, Canadian privacy and data protection laws need to be updated to require the government to demonstrate a stricter standard of necessity before collecting data. Meaningful consent, including opt-out features, needs to be enhanced if the government is going to engage in these practices. And whenever the government insists on using such technologies, it needs to foster trust by making them as transparent and accountable as possible, rather than designating the technology a trade secret and hiding behind that designation.
This type of secrecy does not build trust.