Abstract

Continuous advances in artificial intelligence have enabled higher levels of autonomy in military systems. As the role of machine intelligence expands, effective cooperation between humans and autonomous systems will become an increasingly important aspect of future military operations. Successful human-autonomy teaming (HAT) requires establishing appropriate levels of trust in machine intelligence, and these levels can vary according to the context in which HAT occurs. The expansive body of literature on trust and automation, combined with newer contributions focused on autonomy in military systems, forms the basis of this study. Aspects of trust are examined within three general categories of machine-intelligence applications: data integration and analysis, autonomous systems across all domains, and decision-support applications. The issues involved in appropriately calibrating trust vary within each category, as do the consequences of poorly aligned trust and the potential mitigation measures.
