Abstract

In human-autonomy teaming (HAT), human operators and intelligent agents cooperate and coordinate to achieve shared goals. This study aimed to enhance autonomy transparency by proposing the automatic generation of confidence scores and visual explanations (bounding boxes alone, or bounding boxes with keypoints) and by investigating their impacts in HAT. A total of 36 participants performed a simulated surveillance task, assisted by intelligent agents we designed using Keypoint Faster R-CNN. We found that visual explanations using bounding boxes and keypoints improved detection task performance only when confidence was not visualized. Moreover, participants reported higher trust in and preference for autonomy when visual explanations were provided, whereas visualizing confidence did not influence their trust or preference. These findings have implications for the design of autonomy and can facilitate human-machine interaction in HAT.
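To illustrate how such cues can be generated automatically, the sketch below uses torchvision's pretrained Keypoint R-CNN to obtain the three pieces of information the abstract describes: bounding boxes, keypoints, and a per-detection confidence score. This is a minimal illustration, not the authors' exact pipeline; the input frame is a dummy tensor and the 0.7 confidence threshold is an assumed value.

```python
# Minimal sketch (assumptions noted above): extracting bounding boxes,
# keypoints, and confidence scores with torchvision's Keypoint R-CNN.
import torch
from torchvision.models.detection import keypointrcnn_resnet50_fpn

model = keypointrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# Dummy RGB frame standing in for a surveillance image (values in [0, 1]).
frame = torch.rand(3, 480, 640)

with torch.no_grad():
    output = model([frame])[0]  # one result dict per input image

# Keep detections above an assumed confidence threshold of 0.7.
keep = output["scores"] > 0.7
boxes = output["boxes"][keep]          # (N, 4) xyxy bounding boxes
keypoints = output["keypoints"][keep]  # (N, 17, 3) COCO person keypoints
confidence = output["scores"][keep]    # (N,) per-detection confidence

for box, conf in zip(boxes, confidence):
    print(f"person at {box.tolist()} with confidence {conf:.2f}")
```

In a display like the one studied here, the boxes (and optionally the keypoints) would be overlaid on the video feed as the visual explanation, with the confidence value shown alongside when that condition is enabled.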

