Abstract
Trust is a significant predictor of humans' willingness to engage with robots. What increases – and decreases – human-robot trust? In contrast with research that has focused on robots' physical features and gestures, the present study examined psychological features. We operationalized trust as the willingness to make oneself vulnerable to potential exploitation. Participants (N = 811) played two rounds of an online Repeated Prisoner's Dilemma game against a robotic or human counterpart. The counterpart was randomly varied to display high versus low levels of four theoretically derived dimensions of humanness: Values, Autonomy, Social Connection, and Self-Aware Emotions. Varying the robotic counterpart's expressed commitment to Values from low to high increased participants' likelihood of choosing the cooperative option. In contrast, varying the robot's Self-Aware Emotions from low to high increased participants' likelihood of choosing the competitive option. These data suggest that imbuing a robot with a commitment to moral principles fosters higher trust that the robot will not choose the exploitative option, whereas imbuing a robot with a high level of emotional self-awareness hinders this type of trust. This work represents a starting point for the development of a more comprehensive model of the psychology of human-robot trust.