Enabled by technological advances, robot teachers have entered educational service frontlines. Scholars and policymakers suggest that during Human-Robot Interaction (HRI), human teachers should remain “in-the-loop” (i.e., oversee interactions between students and robots). Drawing on impression management theory, we challenge this belief and argue that robot teacher confidentiality (i.e., robot teachers not sharing student interactions with the human teacher) leads students to make more use of the technology. To examine this effect and provide deeper insights into its mechanisms and boundary conditions, we conduct six field, laboratory, and online experiments using virtual and physical robot teachers (total N = 2,012). We first show that students indeed make more use of a confidential (vs. nonconfidential) robot teacher, both physical and virtual. In a qualitative study (Study 2), we use structural topic modeling to inductively identify relevant mediators and moderators. Studies 3 through 5 provide support for these mechanisms and boundary conditions, showing two key mediators (i.e., social judgment concern and interaction anxiety) and two moderators (i.e., student prevention focus and teacher benevolence) of the effect of robot teacher confidentiality. Collectively, the present research introduces the concept of service robot confidentiality, illustrating why and how not sharing HRI with a third actor critically impacts educational service encounters.