Abstract

In exploring the implications of algorithmic decision-making for international law, Garcia highlights a growing process of dehumanization in the military domain, in which pattern-recognizing technologies reduce humans to mere data. 'Immoral codes' containing instructions to target and kill humans raise the likelihood of unpredictable and unintended violence. Compounding this challenge are the absence of international law restraining the pervasive use of algorithms in society and the ongoing military AI race. Garcia argues that the existing mechanisms of international humanitarian law, developed to regulate 'hardware', are insufficient to address the 'software' challenges posed by algorithmic-based weaponry. Instead, algorithmic decision-making erodes the human-centricity of international law, while great-power rivalry triggers further violence and instability. International rules must therefore be updated to prohibit killing that takes place outside human oversight.
