The world is facing an unprecedented number of complex migration crises. As more and more people interact with immigration and refugee determination systems, countries are beginning to dabble in artificial intelligence (AI) to automate the millions of decisions made every day as people cross borders and seek new homes. As a major migration hub, Canada is experimenting with the use of these technologies.
Bots at the Gate: A Human Rights Analysis of Automated Decision Making in Canada’s Immigration and Refugee System, a report released in September 2018 by the University of Toronto’s International Human Rights Program (IHRP) and the Citizen Lab at the Munk School of Global Affairs and Public Policy, examines the impacts of automated decision making in Canada’s immigration and refugee system from a human rights perspective. The report, co-authored by this article’s author, highlights how the use of AI to replace or augment administrative decision making in this context threatens to create a laboratory for high-risk experiments within an already highly discretionary system. Vulnerable and under-resourced communities, such as non-citizens, often have less access to robust human rights protections and fewer resources with which to defend those rights. Adopting these technologies irresponsibly may only serve to exacerbate these disparities.
What Is "Automated Decision-Making"?
The term “automated decision systems” refers to a particular class of technologies that either assist or replace the judgment of human decision makers. These systems draw from fields like statistics, linguistics and computer science, and use techniques such as regression, rules-based systems, predictive analytics, machine learning, deep learning and neural networks, often in combination with one another. Automated decision systems may be used for any number of diverse applications by either the government or the private sector, depending on the ultimate “decision” at issue.
The introduction of automated systems can impact both the processes and the outcomes associated with decisions that would otherwise be made by administrative tribunals, immigration officers, border agents, legal analysts and other officials responsible for the administration of Canada’s immigration and refugee system.
These systems could make seemingly innocuous decisions — like whether an application is complete — but they could also make more complex determinations about whether a marriage is “genuine,” for example, or whether people should be given protection on “humanitarian and compassionate” grounds.
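To make this distinction concrete, here is a minimal sketch in Python; the field names, weights and logic are entirely hypothetical and are not drawn from any actual government system. A completeness check can be written as a transparent rule, while a determination like “genuineness” would typically be produced by a model whose fitted weights are much harder to audit:

```python
import math

# Hypothetical field names; no actual government schema is implied.
REQUIRED_FIELDS = {"name", "date_of_birth", "passport_number", "photo"}

def application_is_complete(application: dict) -> bool:
    """Rules-based decision: a transparent, easily audited yes/no check."""
    return REQUIRED_FIELDS.issubset(application)

def genuineness_score(features: list[float], weights: list[float]) -> float:
    """Learned decision (a logistic-regression-style score): the weights are
    fitted to past decisions, so the basis for any one outcome is opaque."""
    z = sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))  # score between 0 and 1
```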
How Are Automated Systems Used in Canada’s Immigration Processes?
While the parameters of Canada’s roll-out of these technologies are not completely clear, we do know that their use is not merely speculative: the Canadian government has been experimenting with the adoption of automated systems in the immigration context since at least 2014. For example, the federal government has been developing a system of “predictive analytics” to automate certain activities currently conducted by immigration officials and to support the evaluation of some immigrant and visitor applications. The government has also sought input from the private sector via a tender notice (published on April 13 and amended May 23, 2018) titled “Artificial Intelligence Solution (B8607-180311/A),” about using AI in immigration decision making, including in humanitarian and compassionate applications and pre-removal risk assessments. These two application types are often a last resort for vulnerable people fleeing violence and war who are seeking to remain in Canada.
The ramifications of using automated decision making in the immigration and refugee space are far-reaching, and the sheer scale of the potential impact is extraordinary. Hundreds of thousands of people enter Canada every year through a variety of applications for temporary and permanent status, many of them from war-torn countries and seeking protection from violence and persecution. In 2017, for example, Immigration, Refugees and Citizenship Canada and the Canada Border Services Agency processed over 50,000 refugee claims. Canada is projecting the admission of 310,000 new permanent residents in 2018 and up to 340,000 annually by 2020.
Immigration Law: Discretionary and Opaque Decision-Making
Decisions on immigration and refugee matters fall under the rubric of administrative law, the area of law concerned with the actions and operations of government agencies and administrative decision makers. This area of law directly affects people’s everyday lives, governing decisions on issues ranging from social assistance to immigration. In particular, it provides the framework for courts to review decisions made by government ministers, administrative boards, commissions, tribunals and agencies. Unfortunately, such decisions are notoriously opaque, discretionary and difficult to challenge.
Given the difficulty of determining why a human officer made a particular decision, it’s worth asking how decisions will be questioned, investigated or challenged when they are made by an automated system.
We already know that algorithms make mistakes. In the United Kingdom, for example, 7,000 students were wrongfully deported because an algorithm falsely accused them of cheating on a language test. Countries such as Australia and New Zealand are experimenting with biometrics and facial recognition technology to identify so-called future “troublemakers,” which civil society organizations are challenging on grounds of discrimination and racial profiling. In June 2018, US Immigration and Customs Enforcement (ICE) was heavily criticized for its use of an algorithm that was set to recommend the detention of migrants in every case, in order to comply with the Trump administration’s hardline policies on the US-Mexico border.
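The ICE example illustrates how a small configuration change can hard-wire a policy outcome while preserving the appearance of individualized assessment. As a purely illustrative sketch (the thresholds and names below are invented, not a reconstruction of ICE’s actual tool), consider what happens when the “release” branch of a risk tool is switched off:

```python
# Invented thresholds and names; this illustrates the mechanism only,
# not ICE's actual risk assessment tool.

def recommend(risk_score: float, allow_release: bool = True) -> str:
    if allow_release and risk_score < 0.3:
        return "release"
    return "detain"

# With the release branch disabled, every case yields "detain": the tool
# still looks like a case-by-case assessment, but the outcome is fixed.
for score in (0.1, 0.5, 0.9):
    print(score, recommend(score, allow_release=False))
```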
Current uses of algorithms have a deeply problematic track record with regard to discrimination. Something as innocuous as a simple Google search may yield discriminatory ads targeted on the basis of racially associated personal names, or systematically display lower-paying job opportunities to women. Machines are also learning to perpetuate stereotypes based on appearance: photo recognition software reproduces gender stereotypes (for example, by associating “woman” with “kitchen”), and other software purports to discern sexual orientation from photos.
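The mechanism behind the “woman”/“kitchen” example is worth spelling out. When a model is trained on data in which two concepts frequently co-occur, its learned representations place them close together, and every downstream system that consumes those representations inherits the association. A toy sketch (the vectors below are fabricated for illustration; real embeddings are learned from large text or image corpora):

```python
import math

def cosine(u: list[float], v: list[float]) -> float:
    """Cosine similarity: 1.0 means the vectors point the same way."""
    dot = sum(a * b for a, b in zip(u, v))
    norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norms

# Fabricated vectors standing in for learned embeddings. If the training
# data pairs "woman" with "kitchen" far more often than "man", the learned
# vectors end up closer together, and the bias propagates downstream.
embeddings = {
    "woman":   [0.8, 0.3, 0.1],
    "man":     [0.1, 0.9, 0.2],
    "kitchen": [0.7, 0.2, 0.3],
    "office":  [0.2, 0.8, 0.4],
}

print(cosine(embeddings["woman"], embeddings["kitchen"]))  # ~0.96, high
print(cosine(embeddings["man"], embeddings["kitchen"]))    # ~0.42, lower
```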
The Real-World Impacts of Using AI in Immigration Decision-Making
Refugee and immigration claims are filled with nuance and complexity, qualities that may be lost on automated technologies, and their mishandling could lead to serious breaches of internationally and domestically protected human rights, in the form of bias, discrimination, privacy violations, and due process and procedural fairness issues, among others. For example, it is not yet clear how the right to a fair and impartial decision maker and the right to appeal a decision will be upheld when automated decision-making systems are used. These rights are protected internationally by instruments that Canada has ratified, such as the United Nations Convention Relating to the Status of Refugees and the International Covenant on Economic, Social and Cultural Rights, and domestically by the Canadian Charter of Rights and Freedoms. In the case of refugee and immigration claims, automated decision making could have life-and-death ramifications for ordinary people, many of whom are fleeing for their lives.
The Private Sector’s Role in Automated Decision-Making
The private sector also has a crucial role to play in the development and roll-out of these new technologies and must think carefully about its responsibilities to uphold and protect human rights.
Private sector products designed to support individuals interfacing with the immigration and refugee system may create new privacy risks. For example, Visabot is a Facebook Messenger-based AI application designed to help users apply for visas and green cards and to schedule appointments with US Citizenship and Immigration Services. Visabot has also launched a service specifically to assist young immigrants who qualify for the DACA (Deferred Action for Childhood Arrivals) program. Although such tools are designed to help at-risk migrants and potential immigrants, they come with a significant privacy and security trade-off: Facebook and companies like it operate business models that rely primarily on the aggregation, analysis and resale of their users’ private information to third parties (such as advertisers).
Unfortunately, government surveillance, policing, immigration enforcement and border security programs can incentivize and reward industry for developing rights-infringing technologies. Among them is Amazon’s “Rekognition” surveillance and facial recognition system, which is marketed explicitly for use by law enforcement. Using deep learning techniques, Rekognition can identify, track and analyze individuals in real time, recognize up to 100 people in a single image and compare collected information against massive databases of faces. This “person tracking” service allows governments to identify, investigate and monitor “people of interest,” including in crowded group photos and in public places such as airports.
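To give a sense of how little is involved in deploying such a capability, here is a minimal sketch using Amazon’s boto3 SDK. The search_faces_by_image operation is part of Rekognition’s public API; the collection name, file name and threshold below are hypothetical:

```python
# Minimal sketch of a face search against a pre-indexed face collection.
# The API call is a real Rekognition operation; the collection name,
# image file and threshold are hypothetical.
import boto3

client = boto3.client("rekognition")

with open("crowd_photo.jpg", "rb") as f:
    image_bytes = f.read()

# Searches the collection for faces matching the largest face in the image.
response = client.search_faces_by_image(
    CollectionId="persons_of_interest",  # hypothetical pre-built collection
    Image={"Bytes": image_bytes},
    FaceMatchThreshold=80,               # minimum similarity, in percent
    MaxFaces=5,
)

for match in response["FaceMatches"]:
    print(match["Face"]["FaceId"], match["Similarity"])
```

Wired to a camera feed and a sufficiently large face collection, a loop around a call like this is, in essence, the “person tracking” capability described above.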
The technology has come under fire from the American Civil Liberties Union, which has demanded that Amazon stop allowing governments to use the technology, citing “profound civil liberties and civil rights concerns.” A number of Amazon shareholders have also criticized the company’s sale of the technology, citing long-standing issues of bias in facial recognition software, the threat of false positives and the risk that markets for the technology would expand to include authoritarian regimes abroad, all of which may impact the company’s stock valuation and increase financial risk. Amazon’s own workforce has led this call, demanding that the company also cut its ties with the controversial data analytics firm Palantir Technologies. Palantir provides the technology that supports the detention and deportation programs run by ICE and the Department of Homeland Security, which Amazon workers have decried as an “immoral U.S. policy” and part of the United States’ increasingly hardline treatment of refugees and immigrants.
Private sector businesses have an independent responsibility to ensure that the technologies they develop do not violate international human rights. They also have clear legal obligations to comply with Canadian law, including privacy and human rights legislation, in the development of their products and services. Technologists, developers and engineers responsible for building this technology also have special ethical obligations to ensure that their work does not facilitate human rights violations. Many major private sector companies are also beginning to develop codes of conduct and technical standards, and to participate in industry consortia and coalitions, in order to better navigate the challenges raised by these technologies.
What Is the Way Forward?
Bots at the Gate makes seven detailed recommendations. Among them: Canada should establish an independent, arm’s-length body to engage in all aspects of oversight and review for all automated decision-making systems used by the federal government, making all current and future uses of AI public. Canada should also create a task force that brings together key government stakeholders, academia and civil society to better understand the current and prospective impacts of automated decision system technologies on human rights and the public interest more broadly. This is a unique opportunity for Canada to show leadership in the ethical development of AI and the protection of human rights.
Immigration and refugee law is a useful lens through which to examine state practices, particularly in a time of heightened border security and screening measures, complex systems of global migration management, the increasingly widespread criminalization of migration and rising xenophobia. Immigration law operates at the nexus of domestic and international law and draws upon global norms of international human rights and the rule of law. Canada has clear domestic and international legal obligations to respect and protect human rights in its use of these technologies, and it is incumbent upon policy makers, government officials, technologists, engineers, lawyers, civil society and academia to take a broad and critical look at the very real impacts these technologies have on human lives.