Abstract

Countries around the world are increasingly investing in artificial intelligence (AI) to automate military tasks that traditionally required human involvement. Despite growing interest in AI-enabled systems, relatively little research explores whether and how AI affects military decision-making. Yet national security practitioners may perceive the judgments and actions of algorithms differently than those of humans, and this variation may in turn affect decisions on the use of force. Using two original survey experiments fielded on a sample of US national security experts, we find that AI use by both friendly and rival forces affects decision-making during interstate crises. National security experts are less likely to take military action when AI is used to analyze intelligence than when humans conduct the analysis. Experts also view an accident involving a rival's AI-enabled weapon that kills American troops as more deserving of retaliation than an accident involving only human operators, suggesting that national security practitioners are less forgiving of errant AI systems than of similarly erring humans. Our findings suggest that emerging technologies such as AI can affect decision-makers' perceptions in ways that shape political outcomes. Even in a world of algorithms, human decisions will still have important consequences for international security.
