Interview
“What Are We Optimizing For?”

Artificial intelligence already pervades our daily lives, suggesting what content we see or which people we should follow online. The detrimental effects that algorithmic systems can have on societies and individuals are also coming to light: from fueling extremist views and hate speech online to very real negative consequences for individuals applying for credit or a job. Our Senior Migration Expert, Jessica Bither, describes the risks and challenges of applying AI-based technologies in the field of migration.

Robert Bosch Stiftung | January 2022
A visualization of data, with a woman and two men in the background
©Friends Stock - stock.adobe.com

There has been a lot of hype surrounding the use and potential promise of AI in policy fields, from screening for diseases to addressing the big challenges of climate change or fighting poverty. What role does artificial intelligence play in a politically sensitive area like migration policy?

Jessica Bither: First off, the term “AI” has been used to describe very different things over the past years, so it’s important to be clear on what we mean by it. In my recent work, I’ve looked at what we call “automated decision-making” (ADM) systems in migration policy. These systems are already being tested in certain areas of migration management. Some governments have started looking at using them to process visa applications. There are also a couple of programs, such as in Switzerland or the United States, where algorithms are used to match either accepted asylum seekers or resettled refugees to different geographic regions or municipalities. And there are other areas as well, for example in the humanitarian sector or in crisis prevention, where different machine-learning-based models are being developed related to forced displacement or human mobility in certain regions.
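At their core, the matching programs mentioned here can be read as assignment problems: place each person or family where a model predicts the best integration outcome. The sketch below illustrates only that core idea; all scores and names are hypothetical, and the actual Swiss and U.S. programs involve far richer predictions and constraints.

```python
# Minimal sketch of the assignment problem behind algorithmic
# refugee-to-location matching. All numbers are hypothetical;
# real programs use model-predicted integration outcomes and
# add many constraints (capacity, family size, etc.).
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical predicted "success" scores (e.g., employment probability)
# for each arriving family (rows) in each candidate municipality (columns).
scores = np.array([
    [0.42, 0.61, 0.35],
    [0.55, 0.30, 0.48],
    [0.20, 0.45, 0.70],
])

# Maximize the total predicted score by minimizing its negation.
rows, cols = linear_sum_assignment(-scores)
for family, place in zip(rows, cols):
    print(f"family {family} -> municipality {place} "
          f"(predicted score {scores[family, place]:.2f})")
```

In practice, such systems presumably layer capacity limits and other policy constraints on top of this basic optimization; the point of the sketch is simply that a model's predictions, not a human's judgment, drive the placement.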

Since you mentioned visa policy: How can we ensure that, for example, people from a certain country are not disadvantaged by the use of ADM systems?

Looking at visa policy, we first really have to get granular to understand what is being automated. For example, the government of Canada has piloted a program with certain applications for temporary resident visas from India and China where the system basically triages the applications. The automated decision part is only used on those in category “one”, meaning those that would most likely have been approved anyway. That's an important distinction: It makes a difference whether the ADM system makes positive decisions on very straightforward applications, or whether an algorithm makes a sort of prediction or assigns a “risk” score to an individual. Currently, Canada is not using ADM systems to deny visas.
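To make that distinction concrete, here is a minimal sketch of such a triage step, in which automation can only issue positive decisions on clearly straightforward cases and everything else is routed to a human officer. The fields and threshold are illustrative assumptions, not Canada's actual rules.

```python
# Illustrative triage logic: the automated step only issues positive
# decisions on clearly straightforward applications; everything else
# goes to a human officer, and the system never denies on its own.
# All fields and thresholds are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Application:
    id: str
    model_approval_score: float       # e.g., from a model trained on past decisions
    flags: list = field(default_factory=list)  # e.g., missing documents

def triage(app: Application, auto_approve_threshold: float = 0.95) -> str:
    if not app.flags and app.model_approval_score >= auto_approve_threshold:
        return "auto-approve"   # category "one": would likely be approved anyway
    return "human review"       # never an automated denial

print(triage(Application("A-1", 0.98)))            # auto-approve
print(triage(Application("A-2", 0.98, ["docs"])))  # human review
print(triage(Application("A-3", 0.40)))            # human review
```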

The second point is that it of course depends on what kind of data you are basing your algorithm on. If, for example, you are basing your model on past visa decisions, there's a real danger of encoding the personal biases of individual case officers. So, it's always important when developing these systems to be very aware of the potential for discrimination and bias, and to understand the data you are using, in order to include a meaningful impact assessment as part of your model.
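One small, concrete piece of such an impact assessment might look like the check below: comparing historical approval rates across applicant groups in the training data before any model is fit. The data and the four-fifths threshold are illustrative assumptions borrowed from employment-discrimination practice, not a standard that visa authorities are known to apply.

```python
# A minimal bias check one might run before training on past visa
# decisions: compare historical approval rates across applicant
# groups. The data and the 0.8 "four-fifths" rule of thumb are
# illustrative assumptions only.
import pandas as pd

past_decisions = pd.DataFrame({
    "nationality": ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved":    [1,   1,   0,   1,   0,   0,   0,   1],
})

rates = past_decisions.groupby("nationality")["approved"].mean()
print(rates)

# Disparate-impact ratio: lowest group rate relative to highest.
ratio = rates.min() / rates.max()
if ratio < 0.8:
    print(f"warning: approval-rate ratio {ratio:.2f} suggests "
          "the training data may encode historical bias")
```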

Jessica Bither
Private

About the person

Jessica Bither is a Senior Expert for Migration. She has many years of experience working with practitioners in international migration policy. Among other things, she leads the foundation's work on technology & migration.

Agencies such as the European Asylum Support Office have developed an early-warning and forecasting system that aims to better predict migration movements. How do these models work?

There are different government and humanitarian agencies that in recent years have used machine-learning or artificial-intelligence technologies in so-called forecasting systems. This means that new technological developments either make it possible to analyze existing data sets in new ways, or that new data sources can be used to make predictions or assessments of a situation that weren't possible before. So, you can combine different data sources, from satellite imagery to statistics on displacement, conflict, or environmental factors, to see whether there are warning signs or indicators that such a machine-learning system can reveal or incorporate as part of the model.
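Schematically, such a forecasting pipeline merges heterogeneous indicators into one feature table per region and time period, then fits a model to predict displacement in the next period. The sketch below uses synthetic data and a generic regressor purely to show the shape of the setup; it does not reproduce any agency's actual model.

```python
# Schematic forecasting setup: heterogeneous indicators (conflict
# events, rainfall anomaly, food prices, prior displacement) merged
# into one feature table per region-month, with a model fit to
# predict next-period displacement. All data here is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 200  # synthetic region-month observations

conflict_events  = rng.poisson(3.0, n)
rainfall_anomaly = rng.normal(0.0, 1.0, n)
food_price_index = rng.normal(100.0, 10.0, n)
prior_displaced  = rng.gamma(2.0, 500.0, n)

X = np.column_stack([conflict_events, rainfall_anomaly,
                     food_price_index, prior_displaced])
# Synthetic target: next-month displacement driven by the indicators.
y = (prior_displaced * 0.8 + conflict_events * 120
     - rainfall_anomaly * 50 + rng.normal(0, 100, n))

model = GradientBoostingRegressor().fit(X, y)
print("in-sample R^2:", round(model.score(X, y), 2))
```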

The Danish Refugee Council, for example, developed software called “Foresight” that combines a whole range of different factors to assess the level of displacement in certain regions, in order to better allocate resources, to support decisions on the ground by individual humanitarian case workers, and to design a policy strategy. The key point here is really to look at the motivation behind such a system: You could use the same type of AI-based model to send humanitarian assistance and open up more reception centers, as well as to send border guards or close off borders. The mere possibility of developing a system based on these new technologies doesn't tell you whether we should use it or not, or whether it is ethical to do so.

The mere possibility of developing a system based on these new technologies doesn't tell you whether we should use it or not.

How should we use these systems ideally?

In order to evaluate automated decision-making systems as well as their consequences, any migration stakeholder needs to look at the nuances of each case and ask questions like: Which data sources were used? Was the data checked for bias? Is the model really as accurate as people say? Is it fully automated - which is usually never the case - or is it just one small additional piece of information used in making a decision? Even more important is the context in which these technologies are being employed. Is it producing the outcome we actually want, or is it perpetuating systemic discrimination and biases that we already find in this world? Finally, the underlying question behind any ADM model must always be: What are we optimizing for? That's fundamentally a question about values.

So, migration policy stakeholders and anyone working in the field need to think through the implications for individuals, communities, and migration policy more generally, and to build in real safeguards and procedures such as mandatory impact assessments. To get there, we will need more cooperation and places for real discussion between people working on the migration side and data scientists and technologists. This is where we see a real need at the intersection of tech and migration in the near future, and where the Robert Bosch Stiftung is supporting partners to create these spaces and the relevant knowledge.