In the debate on how to improve efficiency in the humanitarian sector and better meet people’s needs, the argument for the use of artificial intelligence (AI) and automated decision-making (ADM) systems has gained significant traction and ignited controversy over its ethical and human rights implications.

Setting aside the implications of introducing unmanned and automated systems in warfare, we focus instead on the impact of adopting AI-based ADM systems in humanitarian response. In order to maintain the status and protection conferred by the humanitarian mandate, aid organizations are called to abide by a broad set of rules condensed in the humanitarian principles, notably the principles of humanity, neutrality, impartiality, and independence. But how do these principles operate when decision-making is automated?

This article opens with an overview of AI and ADM systems in the humanitarian sector, with special attention to the concept of algorithmic opacity. It then explores the transformative potential of these systems on the complex power dynamics between humanitarians, principled assistance, and affected communities during acute crises. Our research confirms that the flaws already present in accountability and epistemic processes can also be found in the mathematical and statistical formulas and in the algorithms used for automation, artificial intelligence, predictive analytics, and other efficiency-gaining processes.

In doing so, our analysis highlights the potential harm to people resulting from algorithmic opacity, whether through the removal or the obfuscation of the causal connection between triggering events and humanitarian services via the so-called black box effect: algorithms are often described as black boxes because their complexity and technical opacity hide and obfuscate their inner workings (Diakopoulos, Tow Center for Digital Journalism, 2017). Recognizing the need for a humanitarian ethics dimension in the analysis of automation, AI, and ADM systems used in humanitarian action, we endorse the concept of “explicability” as developed within the ethical frameworks of machine learning and human-computer interaction, together with a set of proxy metrics.

Finally, we stress the need for auditable standards, as well as transparent guidelines and frameworks, to rein in the risks of what has been defined as humanitarian experimentation (Sandvik, Jacobsen, and McDonald, Int. Rev. Red Cross 99(904), 319–344, 2017). This article concludes that accountability mechanisms for the AI-based and ADM systems used to respond to the needs of populations in situations of vulnerability should be an essential feature by default, in order to preserve respect for the do no harm principle even in the digital dimension of aid.

In conclusion, while we confirm existing concerns related to the adoption of AI-based and ADM systems in humanitarian action, we also advocate for a roadmap towards humanitarian AI for the sector and introduce a tentative ethics framework as a basis for future research.