Abstract

The use of autonomous weapons systems (AWS) to select and attack targets without human intervention poses a genuine legal dilemma. The urgency of the issue is underscored by unofficial reports that AWS have already entered the battlefield in recent armed conflicts. Previous literature has been inconclusive on the legitimacy of AWS, which prompted this research and calls for deeper investigation to help reach an international consensus within the framework of international humanitarian law (IHL). The article combines doctrinal and non-doctrinal methodologies to provide a more comprehensive understanding of the issue. The methodology analyzes AWS through the lens of IHL principles, since IHL is the body of law most relevant to assessing the legitimacy of AWS. Secondary data were collected and analyzed quantitatively to shed light on the contradiction between public sentiment and the actual trajectory of AWS development. The results show that military necessity and humanity are two concepts inherent in the true principles of IHL that admit neither measurement nor compromise. The article concludes that although artificial intelligence (AI) has not yet reached a threshold that allows reliable deployment of AWS, the acceleration of its development indicates that AWS will be able to comply with true IHL principles in the near future.
