Understanding people’s perceptions of and inferences about social robots, and thus their responses toward them, constitutes one of the most pervasive research themes in the field of human–robot interaction today. We extend this line of work by investigating, for the first time, the proposition that one’s implicit self-theory orientation (underlying beliefs about the malleability of self-attributes, such as one’s intelligence) can influence one’s perceptions of emerging social robots developed for everyday use. We show that those who view self-attributes as fixed (entity theorists) express greater robot anxiety than those who view self-attributes as malleable (incremental theorists). This result holds even when controlling for well-known covariates such as prior robot experience, media exposure to science fiction, technology commitment, and certain demographic factors. However, only marginal effects were obtained for attitudinal and intentional robot acceptance. In addition, we show that incremental theorists respond more favorably to social robots than entity theorists do. Furthermore, we find evidence that entity theorists respond more favorably to a social robot positioned as a servant. We conclude with a discussion of our findings.