Policy brief
16.09.2025

Algorithmic Risk in EU Migration and Asylum Governance

Postdoctoral researcher Mirko Đuković explores how two landmark European instruments, the EU Artificial Intelligence Act and the Council of Europe AI Convention, approach the regulation of automated systems in migration and asylum governance.

This policy brief by Mirko Đuković, postdoctoral researcher in the Algorithmic Fairness for Asylum Seekers and Refugees (AFAR) project, examines how automated decision-making (ADM) is increasingly embedded in Europe’s migration and asylum systems, from ETIAS triage at the border to mobile-phone data extraction in asylum procedures. These technologies routinely shape life-changing outcomes, yet they operate with limited transparency and uneven safeguards for fairness and fundamental rights. The EU AI Act adopts a product-safety logic, categorising AI systems into four risk tiers and generally classifying migration-related systems as high-risk. By contrast, the Council of Europe’s AI Convention introduces a contextual, rights-centred methodology (HUDERIA), which weighs the scale, scope, probability, and reversibility of risks. Both frameworks converge on the need for ex-ante risk assessment but diverge on how such assessments should be carried out. Combining the AI Act’s regulatory certainty with the Convention’s richer human-rights analysis could yield stronger protections for migrants and asylum seekers while offering greater clarity to authorities.

The four-year Algorithmic Fairness for Asylum Seekers and Refugees (AFAR) project is a collaborative research initiative hosted at the Centre from 2021 to 2025. Funded by the Volkswagen Foundation through its “Challenges for Europe” funding programme, the project brings together five other institutions across Europe to explore the fairness of automation in the highly contested field of migration and asylum governance.