Predictive analytics in law describes AI systems that predict the outcomes of legal cases through computational legal reasoning. In refugee status determination (RSD), predictive analytics has so far been employed cautiously, but active research into model development holds significant potential for scalable application. Predictive analytics, traditionally built on inductive decision-making processes such as supervised machine learning and decision trees, risks compromising the abductive reasoning on which RSD relies. Even if models are built to navigate this hurdle effectively, problems with data remain. Insufficient data is an intractable feature of forced displacement, introducing inaccuracy and uncertainty. The prospective nature of a well-founded fear undermines the way algorithms are traditionally trained on historical data. The inability of predictive analytics to measure subjective fear may push RSD credibility assessments towards ‘pseudo-scientific’ tools such as lie detectors and emotion recognition technology. Because subjectivity cannot be removed, the training data will capture the subjective fear of previous claimants, which will then inform the outcomes of other cases. The case characteristics that describe an individual case within a dataset, and on which an algorithm is trained, must be chosen carefully in consultation with legal experts; without attention to how these characteristics reflect legal standards, they risk subverting hard-fought and hard-won international legal protections. Finally, commonly acknowledged problems of algorithmic bias and harmful feedback loops must not be forgotten. This conversation is crucial to ensuring that the likely implementation of predictive analytics in RSD upholds fairness and established legal standards and, crucially, does not lose sight of the human at its heart.