Abstract

The United Nations and the North Atlantic Treaty Organization (NATO) have put in place governance systems for the integration and deployment of Artificial Intelligence (AI) in their Peace Support Operations (PSOs). Policy proposals and discourses have emphasized putting humans at the centre of these processes, yet they often fail to link important ethical and normative considerations to the technical aspects of AI and the practical realities of peacekeeping operations. This article maps that disconnection along three dimensions: the absence of mechanisms to manage knowledge and mastery, insufficient attention to processes of trust, and very little concern for explainability, all elements that could strengthen accountability in the two organizations. Weighing the promises and perils of AI in these operations at the strategic, tactical, and operational levels, the article argues that the governance systems in place for integrating and deploying AI do not fully take humans into consideration. This argument prioritizes the needs and concerns of the civilians these operations aim to protect, as well as the civilian, military, and police personnel of the organizations that carry them out.
