Abstract

Transparency is an important aspect of human-robot interaction (HRI), as it can improve system trust and usability, leading to improved communication and performance. However, most transparency models focus only on the amount of information given to users. In this paper, we propose a bidirectional transparency model, termed the transparency-based action (TBA) model, which allows the robot to take actions based on transparency information received from the human (robot-of-human and human-to-robot), in addition to providing transparency information to the human (robot-to-human). To examine the impact of a three-level (High, Medium, and Low) TBA model on acceptance and HRI, we first implemented the model on a robotic system trainer in two pilot studies (with students as participants). Based on the results of these studies, the Medium TBA level was not included in the subsequent main experiment, which was conducted with older adults (aged 75–85). In that experiment, two TBA levels were compared: Low (basic information including only robot-to-human transparency) and High (including additional information relating to predicted outcomes with robot-of-human and human-to-robot transparency). The results revealed a statistically significant difference between the two TBA levels in terms of perceived usefulness, ease of use, and attitude. The High TBA level was preferred by users and yielded improved user acceptance.
