Abstract

The use of AI to automate defense and intelligence tasks is increasing, yet little is known about how algorithmic analyses, data capture, and decisions will be perceived by elite decision-makers. This article presents the results of two experiments that explore manifestations of AI systems in the cyber conflict decision-making loop. Findings suggest that technical expertise positively affects respondents’ ability to gauge the potential utility and credibility of an input, indicating that training can in fact overcome bias. However, the perception of human agency in the loop, even in the presence of AI inputs, mitigates this effect and makes decision-makers more willing to act on less information. This finding is worrying given the extensive challenges involved in building effective human oversight and opportunities for intervention into any employment of AI for national security purposes. The article considers these obstacles and potential solutions in the context of the data gathered.
