Abstract

Lethal autonomous weapon systems represent a prominent yet controversial military innovation. While previous studies have indicated that the deployment of “killer robots” would face considerable public opposition, our understanding of how elastic these attitudes are, contingent on different factors, remains limited. In this article, we explore the sensitivity of public attitudes to three specific factors: concerns about the accident-prone nature of the technology, concerns about responsibility attribution for adverse outcomes, and concerns about the inherently undignified nature of automated killing. Our survey experiment with a large sample of Americans reveals that public attitudes toward autonomous weapons are significantly contingent on beliefs about their error-proneness relative to human-operated systems. Additionally, we find limited evidence that individuals concerned about human dignity violations are more likely to oppose “killer robots.” These findings carry significance for current policy debates about the international regulation of autonomous weapons.
