Abstract

The use of AI to automate defense and intelligence tasks is increasing, yet little is known about how algorithmic analyses, data capture, and decisions will be perceived by elite decision-makers. This article presents the results of two experiments that explore how AI systems manifest in the cyber conflict decision-making loop. Findings suggest that technical expertise positively affects respondents' ability to gauge the potential utility and credibility of an input, indicating that training can, in fact, overcome bias. However, the perception of human agency in the loop, even in the presence of AI inputs, mitigates this effect and makes decision-makers more willing to act on less information. This finding is worrying given the extensive challenges involved in building effective human oversight and opportunities for intervention into any employment of AI for national security purposes. The article considers these obstacles and potential solutions in the context of the data gathered.
