Abstract

Lethal autonomous weapon systems represent a prominent yet controversial military innovation. While previous studies have indicated that the deployment of “killer robots” would face considerable public opposition, our understanding of how elastic these attitudes are, and which factors they hinge on, remains limited. In this article, we explore the sensitivity of public attitudes to three specific factors: concerns about the accident-prone nature of the technology, concerns about the attribution of responsibility for adverse outcomes, and concerns about the inherently undignified nature of automated killing. Our survey experiment with a large sample of Americans reveals that public attitudes toward autonomous weapons are significantly contingent on beliefs about their error-proneness relative to human-operated systems. Additionally, we find limited evidence that individuals concerned about violations of human dignity are more likely to oppose “killer robots.” These findings bear on current policy debates about the international regulation of autonomous weapons.
