Abstract

Advances in mobile phone technology and social media have created a world where the volume of information generated and shared outpaces humans' ability to review and use it. Machine learning (ML) models and big data analytical tools can ease that burden by making sense of this information and providing insights that might not otherwise exist. In the context of international criminal and human rights law, ML is being used for a variety of purposes, including to uncover mass graves in Mexico, find evidence of homes and schools destroyed in Darfur, detect fake videos and doctored evidence, predict the outcomes of cases before the European Court of Human Rights and gather evidence of war crimes in Syria. States are also increasingly incorporating ML models into weapon systems, both to help targeting systems distinguish between civilians, allied soldiers and enemy combatants and to inform decision-making for military attacks. The same technology, however, comes with significant risks. ML models and big data analytics are highly susceptible to common human biases. Those biases can reinforce and even accelerate existing racial, political or gender inequalities, and they can paint a misleading and distorted picture of the facts on the ground. This article canvasses how common human biases can affect ML models and big data analytics, and examines the legal implications of these biases under international criminal law and international humanitarian law.
