Abstract

A critical challenge in human-autonomy teaming is enabling human team members to comprehend their nonhuman teammates (agents). Transparency in an agent's behavior is key to such comprehension and can be obtained by embedding a self-explanation ability into the agent so that it can explain its own behavior. Previous studies have generated explanations for goal-following, logic-based agents by searching over the executed functions and logic rules. As the number of functions and rules grows, such component- and process-based methods become impractical. This article proposes a new method that exploits the agent's artificial situation awareness states to generate explanations, combining a Bayesian network, fuzzy theory, and Hamming distance. The method is evaluated in a collaborative driving context, where a significant number of accidents have recently occurred around the globe because drivers did not understand the autopilot agent. Using the autonomous driving simulator CARLA, two typical collaborative driving scenarios are considered: a traffic-light situation and an overtaking situation. The findings show that the new method potentially reduces the search space for generating explanations and exhibits better computational performance and a lower cognitive workload. This work is important for calibrating human trust and enhancing comprehension of the agent.
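To make the final matching step more concrete, the sketch below illustrates (only as an assumption of how such a step might look, not the authors' implementation) how a discretized situation awareness (SA) state could be matched to the nearest explanation template by Hamming distance. The Bayesian-network and fuzzy-inference stages that would produce the SA state are omitted, and all feature names and templates are hypothetical.

```python
# Minimal sketch (hypothetical, not from the paper): pick the explanation
# whose expected SA pattern is closest, in Hamming distance, to the agent's
# current discretized situation-awareness state.

from dataclasses import dataclass


@dataclass
class ExplanationTemplate:
    sa_pattern: tuple   # expected binary SA features for this behavior
    text: str           # natural-language explanation shown to the human


def hamming_distance(a: tuple, b: tuple) -> int:
    """Number of positions where two equal-length binary vectors differ."""
    return sum(x != y for x, y in zip(a, b))


def explain(sa_state: tuple, templates: list) -> str:
    """Return the explanation whose SA pattern is closest to the current state."""
    best = min(templates, key=lambda t: hamming_distance(sa_state, t.sa_pattern))
    return best.text


if __name__ == "__main__":
    # SA features (hypothetical ordering): [light_is_red, lead_vehicle_slow, overtake_lane_clear]
    templates = [
        ExplanationTemplate((1, 0, 0), "Stopping because the traffic light ahead is red."),
        ExplanationTemplate((0, 1, 1), "Overtaking because the lead vehicle is slow and the adjacent lane is clear."),
        ExplanationTemplate((0, 1, 0), "Following the slow lead vehicle because the adjacent lane is occupied."),
    ]
    current_state = (0, 1, 1)   # e.g. produced upstream by fuzzifying sensor readings
    print(explain(current_state, templates))
```

Matching against templates rather than searching the agent's executed functions and rules keeps the explanation lookup proportional to the number of templates, which is one plausible way the claimed reduction in search space could be realized.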
