Abstract

The “Computers are social actors” (CASA) assumption (Nass and Moon in J Soc Issues 56:81–103, 2000. https://doi.org/10.1111/0022-4537.00153) states that humans apply social norms and expectations to technical devices. One such norm is to distort one’s own responses in a socially desirable direction during interviews. However, findings on such an effect in the literature are mixed. Therefore, a new study on the effect of social desirability bias in human–robot evaluation was conducted, aiming for a conceptual replication of previous findings. In a between-subjects laboratory experiment, N = 107 participants evaluated a robot and the quality of the interaction after a short conversation. Depending on the group, the evaluation was administered by (1) the same robot as in the preceding interaction, (2) a different robot, or (3) a tablet computer. According to the CASA assumption, ratings of likability and interaction quality were expected to be higher when the same robot conducted the evaluation than when a different robot or a tablet computer did, because robots are treated as social actors and respondents therefore distort their ratings in a socially desirable direction. Based on previous findings, we expected robots to evoke stronger anthropomorphism and feelings of social presence than the tablet computer, as a potential explanation. However, the data did not support the hypotheses. Low sample size, low statistical power, lack of measurement validation, and other problems that could lead to an overestimation of effect sizes, both in this study and in the literature more generally, are discussed in light of the replicability crisis.

Highlights

  • For product improvement and development, user evaluation is key

  • Social desirability bias is an intensively studied psychological phenomenon and “refers to the tendency by respondents, under some conditions and modes of administration, to answer questions in a more socially desirable direction than they would under other conditions or modes of administration” [55, p. 755]

  • The aim of this article is two-fold: first, to contribute to questions about the robustness of social desirability effects by adding further insights from a conceptual replication of a study by Nass et al. [48] and other replications in different contexts

Introduction

For product improvement and development, user evaluation is key. Certain technological products, such as video games or robots, can directly inquire about the user’s subjective experience without involving a third party such as a human interviewer. However, responses may still be distorted in a socially desirable direction. The level of this distortion (response bias) can be obtained from the mean difference in scores on socially sensitive questions. These include questions in which the answer is distorted toward a prevailing social norm or value (normative response bias), and questions in which the answer is distorted toward what the respondent expects the interviewer to prefer, in accordance with social rules.
