Abstract
In recent years, researchers have extensively used non-verbal gestures, such as head and arm movements, to express a robot's intentions and capabilities to humans. Inspired by this work, we investigated how different explanation modalities aid human understanding and perception when robots communicate failures and provide explanations during block pick-and-place tasks. Through an in-person, within-subjects experiment with 24 participants, we studied four explanation modalities across four failure types, some chosen to mirror combinations from prior work in order to both replicate and extend the community's earlier findings. We found that speech explanations were preferred over non-verbal and visual cues in terms of perceived similarity to humans. Additionally, projected images were comparable in explanatory effect to the other non-verbal modalities. Our results were also consistent with those of a prior online study.