Friday, September 28, 2018

In a commentary in the Globe and Mail, International Human Rights Program researcher Petra Molnar (JD 2016) and Ronald Deibert, Director of the Citizen Lab at the University of Toronto, warn about the implications of the federal government's use of artificial intelligence in refugee cases ("Ottawa’s use of AI in immigration system has profound implications for human rights," September 26, 2018).

The commentary is based on the report published by the IHRP and the Citizen Lab, “Bots at the Gate: A Human Rights Analysis of Automated Decision-Making in Canada’s Immigration and Refugee System.”

Read the full commentary on the Globe and Mail website, or below.


Ottawa’s use of AI in immigration system has profound implications for human rights

By Petra Molnar and Ronald Deibert

September 26, 2018

How would you feel if an algorithm made a decision about your application for a Canadian work permit, or determined how much money you could bring in as an investor? What if it decided whether your marriage is “genuine”? Or if it trawled through your tweets or Facebook posts to determine whether you are “suspicious” and therefore a “risk,” without ever revealing the categories it used to make this decision?

While seemingly futuristic, these questions will soon confront everyone who interacts with Canada’s immigration system.


A report released Wednesday by the University of Toronto’s International Human Rights Program (IHRP) and the Citizen Lab at the Munk School of Global Affairs and Public Policy finds that algorithms and artificial intelligence are augmenting and replacing human decision makers in Canada’s immigration and refugee system, with profound implications for fundamental human rights.

We know that Canada has been experimenting with automated decision-making in its immigration determination process since at least 2014. These automated techniques support the evaluation of immigrant and visitor applications, such as Express Entry for permanent residence. Recent announcements signal an expansion of these technologies to a broader range of immigration applications and decisions in the coming years.


Exploring new technologies and innovations is exciting and necessary, particularly in an immigration system plagued by lengthy delays, protracted family separation and uncertain outcomes. However, without proper oversight mechanisms and accountability measures, the use of AI threatens to create a laboratory for high-risk experiments.

The system is already opaque. The ramifications of using AI in immigration and refugee decisions are far-reaching. Vulnerable and under-resourced communities such as those without citizenship often have access to less-robust human rights protections and fewer resources with which to defend those rights. Adopting these technologies in an irresponsible manner may serve only to exacerbate these disparities and can result in severe rights violations, such as discrimination and threats to life and liberty.

Without proper oversight, automated decisions can rely on discriminatory and stereotypical markers, such as appearance, religion or travel patterns, and thus entrench bias in the technology. The nuanced and complex nature of many refugee and immigration claims may be lost on these technologies. This could lead to serious breaches of internationally and domestically protected human rights: bias, discrimination, privacy breaches, and due process and procedural fairness issues, such as the right to a fair and impartial decision maker and the ability to appeal a decision. These rights are protected by international instruments that Canada has ratified, such as the United Nations Convention Relating to the Status of Refugees and the International Covenant on Economic, Social and Cultural Rights, among others. They are also protected by the Canadian Charter of Rights and Freedoms and by provincial human rights legislation.

At this point, there are more questions than answers.

If an algorithm makes a decision about your fate, can it be considered fair and impartial if it relies on biased data that is not made public? What happens to your data during the course of these decisions, and can it be shared with other departments, or even with the government of your country, potentially putting you at risk? The use of AI has already been criticized in the predictive policing context, where algorithms have linked race with the likelihood of reoffending, associated women with lower-paying jobs, or purported to discern sexual orientation from photos.

Given the already limited safeguards and procedural justice protections in immigration and refugee decisions, the use of discriminatory and biased algorithms has profound ramifications for a person’s safety, life, liberty, security and mobility. Before exploring how these technologies will be used, we need to create a framework for transparency and accountability that addresses bias and error in automated decision-making.

Our report recommends that Ottawa establish an independent, arm’s-length body with the power to oversee and review all automated decision-making systems used by the federal government, and that the government publish all current and future uses of AI. We also advocate for the creation of a task force that brings together key government stakeholders, alongside academia and civil society, to better understand the current and prospective impacts of automated decision systems on human rights and the public interest more broadly.

Without these frameworks and mechanisms, we risk creating a system that – while innovative and efficient – could ultimately result in human rights violations. Canada is exploring the use of this technology in high-risk contexts within an accountability vacuum. Human decision-making is also riddled with bias and error, and AI may in fact have positive impacts in terms of fairness and efficiency. We need a new framework of accountability that builds on the safeguards and review processes we have in place for the frailties in human decision-making. AI is not inherently objective or immune to bias and must be implemented only after a broad and critical look at the very real impacts these technologies will have on human lives.