Abstract

Rapid advances in the autonomy of contemporary and emerging weapons technologies are changing the landscape of war significantly, in ways that are not yet fully clear. This chapter engages with the complexities of assigning and taking responsibility in the use of lethal autonomous weapons systems. At stake in the debates about so-called Lethal Autonomous Weapons Systems (LAWS) is whether humans can exert adequate levels of meaningful human control over weapons systems that are capable of selecting and engaging targets autonomously. The advent of new complex and distributed technologies of autonomy, especially those that employ advanced modes of machine learning and deep neural networks, challenges conceptions of the human as a knowledgeable and free moral agent, acting with intent in the conduct of warfare. This challenge to human agency and control has consequences not only for legal responsibility and accountability in war but also changes the parameters for taking moral responsibility for lethal acts in warfare. This chapter argues that the characteristics of the technology itself pose a considerable challenge to our conventional understanding of lines of responsibility for actions in the context of armed conflict.
