Abstract

Social robots are being groomed for human influence, including the implicit and explicit persuasion of humans. Humanlike characteristics are understood to enhance robots’ persuasive impact; however, little is known about how perceptions of two key human capacities—mind and morality—function in robots’ persuasive potential. This experiment tests the possibility that perceived robot mind and morality correspond with greater persuasive impact, moderated by relational trustworthiness for a moral appeal and by capacity trustworthiness for a logical appeal. In an online survey, a humanoid robot asks participants to help it learn to overcome CAPTCHA puzzles so it can access important online spaces—on the grounds that doing so is either logical or moral. Based on three performance indicators and one self-report indicator of compliance, analysis indicates that (a) seeing the robot as able to perceive and act on the world selectively improves compliance, and (b) perceiving agentic capacity diminishes compliance, though capacity trustworthiness can moderate that reduction. For logical appeals, social-moral mental capacities promote compliance, moderated by capacity trustworthiness. Findings suggest that, in this compliance scenario, the accessibility of schemas and scripts for engaging robots as social-moral actors may be central to whether and how perceived mind, morality, and trust function in machine persuasion.
