Abstract

Developments in machine learning prompt questions about algorithmic decision-support systems (DSS) in warfare. This article explores how the use of these technologies impacts practices of legal reasoning in military targeting. International Humanitarian Law (IHL) requires an assessment of the proportionality of attacks, namely whether the expected incidental harm to civilians and civilian objects is excessive in relation to the anticipated military advantage. Situating human agency in this practice of legal reasoning, the article considers whether the interaction between commanders (and the teams that support them) and algorithmic DSS for proportionality assessments alters this practice and displaces the exercise of human agency. Because DSS that purport to provide recommendations on proportionality generate output in a manner substantively different from proportionality assessments, these systems are not fit for purpose. Moreover, legal reasoning may be shaped by DSS that provide intelligence information, owing to the limits of reliability, the biases, and the opacity characteristic of machine learning.
