In recent years, researchers have extensively used non-verbal gestures, such as head and arm movements, to express a robot's intentions and capabilities to humans. Inspired by this work, we investigated how different explanation modalities aid human understanding and perception when robots communicate failures and provide explanations during block pick-and-place tasks. In an in-person, within-subjects experiment with 24 participants, we studied four explanation modalities across four failure types. Some combinations were chosen to mirror those in prior work, allowing us to both extend and replicate earlier findings from the community. We found that speech explanations were preferred over non-verbal and visual cues in terms of perceived similarity to humans. Additionally, projected images were comparable to the other non-verbal modalities in their effect on explanation. Our findings were also consistent with those of a prior online study.