Abstract
Autonomy in weapon systems is already a genuine concern. States formulate their own definitions of these systems and go to great lengths to impose their understanding on other states. For a fairly large number of states, a total ban on such weapons would be the ideal solution; states anxious about the increase in autonomy in war-making capabilities, however, adopt a second-best scenario to contain the risks created by the deployment of such systems. To this end, placing these systems under meaningful human control emerges as an important political and legal objective. The author argues that placing autonomous weapons under human supervision, despite its initial promise, will yield negative results, because humans tend to be too willing to follow the solutions generated by autonomous systems. First observed in civilian industries such as aviation and healthcare, automation bias has the potential to negate most, if not all, of the supervision measures expected to ensure the proper implementation of international humanitarian law.