Abstract

Hybrid military teams, composed of human warfighters and autonomous artificial agents, represent the technological future of Defence operations. Both the potential and the inherent limitations of current technology are well known, but the cognitive-behavioral and motivational aspects of human–robot interaction on the battlefield have yet to be systematically investigated. To lay the theoretical and methodological foundation for this scientific investigation, our position paper critically examines how military personnel's spontaneous tendency to anthropomorphize artificial autonomous agents can affect the operations of hybrid military teams in multiple ways. We argue that the psychological impact of anthropomorphism on military personnel is neither easily avoidable nor necessarily detrimental. Correctly identifying the multi-level cognitive mechanisms that underpin implicit and explicit forms of anthropomorphism allows us to increase the efficacy of human–agent interaction. We argue that, within hybrid teams, the capability to communicate with teammates, allies, civilians, and adversaries relies on embodied social cognition processes that are inherently geared toward anthropomorphism and leverage its effects. Updating both the design of autonomous artificial agents and the training of human troops to account for these processes can augment their reciprocal coordination.
