Abstract

Trust is a significant predictor of humans' willingness to engage with robots. What increases – and decreases – human-robot trust? In contrast with research that has focused on robots' physical features and gestures, the present study examined psychological features. We operationalized trust as the willingness to make oneself vulnerable to potential exploitation. Participants (N = 811) played two rounds of an online Repeated Prisoner's Dilemma game against a robotic or human counterpart. The counterpart was randomly varied to display high versus low levels of four theoretically derived dimensions of humanness: Values, Autonomy, Social Connection, and Self-Aware Emotions. Varying the robotic counterpart's expressed commitment to Values from low to high increased participants' likelihood of choosing the cooperative option. In contrast, varying the robot's Self-Aware Emotions from low to high increased participants' likelihood of choosing the competitive option. These data suggest that imbuing a robot with a commitment to moral principles fosters higher trust that the robot will not choose the exploitative option, whereas imbuing a robot with a high level of emotional self-awareness hinders this type of trust. This work represents a starting point for the development of a more comprehensive model of the psychology of human-robot trust.
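For readers unfamiliar with the paradigm, the minimal sketch below illustrates why choosing the cooperative option in a Prisoner's Dilemma operationalizes trust as vulnerability to exploitation. The payoff values (T = 5, R = 3, P = 1, S = 0) are standard textbook defaults assumed for illustration; the abstract does not report the point values used in the study.

```python
# Illustrative Prisoner's Dilemma payoff structure (assumed textbook values,
# not the study's actual stakes): T=5, R=3, P=1, S=0, with T > R > P > S.

PAYOFFS = {
    # (participant_move, counterpart_move): (participant_payoff, counterpart_payoff)
    ("cooperate", "cooperate"): (3, 3),  # mutual cooperation (R, R)
    ("cooperate", "compete"):   (0, 5),  # cooperator is exploited (S, T)
    ("compete",   "cooperate"): (5, 0),  # exploiting a cooperative counterpart (T, S)
    ("compete",   "compete"):   (1, 1),  # mutual defection (P, P)
}

def round_outcome(participant: str, counterpart: str) -> tuple[int, int]:
    """Return (participant, counterpart) payoffs for one round."""
    return PAYOFFS[(participant, counterpart)]

# Cooperating exposes the participant to the worst payoff (S = 0) if the
# counterpart defects; accepting that vulnerability is what indexes trust.
print(round_outcome("cooperate", "compete"))  # -> (0, 5)
```

Under this structure, cooperation is rational only if the participant trusts the counterpart not to take the exploitative (T = 5) option, which is why cooperation rates serve as the trust measure.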
