Emerging Voices: Immigration, Iris-Scanning and iBorderCTRL–The Human Rights Impacts of Technological Experiments in Migration

[Petra Molnar is a Lawyer and Research Associate at the International Human Rights Program, University of Toronto Faculty of Law. This post is based on the author’s research at the University of Cambridge.]

Detention of migrants at the US-Mexico border; wrongful deportation of 7,000 foreign students accused of cheating on a language test; racist or sexist discrimination based on social media profiles – what do these examples have in common? In every case, an algorithm made a decision with serious consequences for people’s lives.

Nearly 70 million people are on the move due to conflict, instability, environmental factors, and economic pressures. As a result, many states and international organizations involved in migration management are exploring technological experiments to strengthen border enforcement and improve decision-making. These experiments range from Big Data predictions about population movements in the Mediterranean, to Canada’s use of automated decision-making in immigration, to Artificial Intelligence (AI) lie detectors at European borders. However, these experiments often fail to account for their profound human rights ramifications and real impacts on human lives.

New technologies are presented as promising greater fairness and efficiency. However, they also expose the fissures of imbalanced power relations in society. International human rights law is useful for codifying these potential harms, because technology and its development are inherently global and transnational. Currently, new technologies in migration are largely unregulated. More global oversight and accountability mechanisms are needed to safeguard fundamental rights such as freedom from discrimination and privacy, as well as procedural justice safeguards such as the right to a fair decision-maker and the right of appeal.

Technologies of Migration Management

Data-Driven Humanitarianism, Biometrics, and Informed Consent

Automated decision-making technologies require vast amounts of data on which to learn. Various projects use Big Data, or extremely large data sets, to predict population movements during conflicts and to make the delivery of aid more efficient. However, data collection is not an apolitical exercise, particularly when powerful actors collect information on vulnerable populations with few regulated methods of oversight and accountability. In an increasingly anti-immigrant global landscape, migration data has also been misrepresented for political ends: to affect the distribution of aid dollars and resources and to support hardline anti-immigration policies.

Also concerning is the growing role of the private sector in the collection, use, and storage of this data. The World Food Programme recently signed a US$45 million deal with Palantir Technologies, the same company that has been criticized for providing technology that supports the detention and deportation programs run by US Immigration and Customs Enforcement (ICE). What will happen to the data of 92 million aid recipients shared with Palantir? It is not yet clear what data accountability mechanisms will be in place during this partnership, or whether data subjects can refuse to have their data shared.

The use of new technologies also raises issues of informed consent, particularly given the increasing reliance on biometric data. In Jordan, refugees now have their irises scanned in lieu of identification documents to receive their food rations. But are they able to meaningfully opt out of having their data collected? Most refugees reported being uncomfortable with this collection, yet felt they could not refuse the scan if they wanted to eat that week. Consent is not free if it is given under coercion, even when the coercive circumstances masquerade as efficiency and better service delivery.

Fortifying the Border

Autonomous technologies are increasingly used to secure border spaces. Frontex, the European Border and Coast Guard Agency, has been testing unpiloted military-grade drones in the Mediterranean for the surveillance and interdiction of vessels carrying migrants hoping to reach European shores to file asylum applications. These technologies can have drastic consequences. While ‘smart border’ technologies have been called a more ‘humane’ alternative to the Trump Administration’s physical wall, the use of new surveillance technologies along the US-Mexico border has more than tripled migrant deaths and pushed migration routes towards more dangerous terrain, such as the Arizona desert. This echoes the rising deaths in the Mediterranean as more migrant boats are intercepted before reaching the shores of Europe.

Automating Migration Decisions

States receiving large numbers of migrants have been experimenting with automated decision-making. A 2018 report (co-written by the author) explored the human rights risks of using AI to replace or augment immigration decisions in Canada. In other jurisdictions, these experiments are already in full force. Following the Trump administration’s executive orders cracking down on migration, ICE used an algorithm at the US-Mexico border to justify the detention of migrants in every single case.

Instances of bias in automated decision-making are widely documented, and these biases have far-reaching results when embedded in emerging migration technologies. At airports in Hungary, Latvia, and Greece, a pilot project called iBorderCtrl has introduced AI-powered lie detectors at border checkpoints. Passengers’ faces are monitored for signs of lying, and if the system becomes more ‘skeptical’ over a series of increasingly complicated questions, the person is selected for further screening by a human officer. But what happens when a refugee claimant interacts with these systems? Can they account for trauma and its effects on memory, or for cultural differences in communication? This use of AI again raises concerns about information sharing without people’s consent, as well as about bias in identification, since facial recognition technologies struggle when analyzing the faces of women and people with darker skin tones.

What happens when an algorithm like this makes a mistake? An algorithm has already led to the wrongful deportation of over 7,000 students from the UK after accusing them of cheating on a language acquisition test. Where does liability lie in a decision like this – with the designer, the coder, the immigration officer, or the algorithm itself? Should algorithms have legal personality? Much of immigration and refugee decision-making already sits at an uncomfortable legal nexus: the impact on individuals’ rights is very significant, yet procedural safeguards are weak. It is unclear how a whole new system of decision-making will affect mechanisms of redress, or how courts will interpret algorithmic decision-making and relevant administrative law principles such as procedural fairness and the right to an impartial decision-maker.

International Human Rights Law and Migration Management Technologies

These discussions are not merely speculative. A number of internationally protected rights are already engaged by the increasingly widespread use of new technologies.

Life and Liberty

The starkest example is the denial of liberty when migrants are placed in administrative detention at the US-Mexico border as a result of an algorithm’s decision to detain. Immigration detention is an opaque and discretionary phenomenon, and the justification of increased incarceration on the basis of algorithms adjusted for particular political ends shows just how far states are willing to go in infringing basic human rights under the guise of national security and border enforcement. Errors, miscalibrations, and deficiencies in training data can result in rights-infringing outcomes, such as unjust deportation back to persecution or torture.

Equality Rights and Freedom from Discrimination

Algorithms are vulnerable to the same decision-making concerns that plague human decision-makers: transparency, accountability, discrimination, bias, and error. The opaque nature of immigration and refugee decision-making creates an environment ripe for algorithmic discrimination. Decisions in this system – from whether a refugee’s life story is ‘truthful’ to whether a prospective immigrant’s marriage is ‘genuine’ – are highly discretionary and often hinge on assessments of a person’s credibility. In the experimental use of AI lie detectors at EU airports, what will constitute truthfulness, and how will differences in cross-cultural communication be handled to ensure that problematic inferences are not encoded into and reinforced by the system? The complexity of migration – and of the human experience – is not easily reducible to an algorithm.

Privacy Rights

Privacy is not only a consumer or property interest: it is a human right, rooted in foundational democratic principles of dignity and autonomy. We must consider the differential impacts of privacy infringements on migrants. If collected information is shared with the repressive governments from which refugees are fleeing, the ramifications can be life-threatening. Likewise, if automated systems designed to predict a person’s sexual orientation are exploited by states targeting the LGBTQ community, discrimination and threats to life and liberty are likely to follow. A facial recognition algorithm developed at Stanford University has already attempted to discern a person’s sexual orientation from photos. This use of technology has particular ramifications in the refugee and immigration context, where asylum applications on sexual orientation grounds often require claimants to prove their persecution against outdated tropes about non-heteronormative behaviour.

Recommendations: Accountability and Oversight Mechanisms

Technology replicates power in society, and its benefits do not accrue equally. Yet no global regulatory framework exists to oversee the use of new technologies in the management of migration. Much technological development occurs in so-called ‘black boxes,’ where intellectual property laws and proprietary considerations shield the public from fully understanding how the technology operates. Affected communities must also be involved in technological development. While conversations around the ethics of AI are taking place, ethics do not go far enough: we need a sharper focus on oversight mechanisms grounded in fundamental human rights.
