Abstract
Autonomous weapon systems (AWS) are considered particularly dangerous because they could make war partly independent of humans. Should that happen, some argue, there would be neither rules nor empathy or compassion on the battlefield, and we would no longer know who was responsible for the crimes committed. For these reasons, a broad movement committed to banning such weapons has emerged. Sharing the goal of greater compliance with international humanitarian law while embracing a realist conception of relations between groups and states, we argue in this article that the technological development of AWS, in particular the possibility of observing and directing any warfare action remotely, opens new opportunities for the regulation of military actions. We also claim that a ban on such weapons is likely to be not only ineffective but also unrealistic in practice; we therefore propose encouraging research on AWS with the goal of embedding strict constraints of use in these systems. The idea is that this process would lead to more humane conflicts, conducted in compliance with international law. To this end, we suggest the establishment of ‘war juries’ composed of representative citizens, who would monitor all military actions of their country and decide how and when to limit them.