Abstract

An appropriate level of human-robot trust is widely acknowledged as a crucial factor for successful human-robot interaction (HRI). Previous research has addressed anthropomorphic design but has mostly neglected linguistic framing as a means to influence trust by altering the perceived human-likeness of a robot. Similarly, time-dependent patterns of trust and distrust during an HRI have rarely been investigated, hindering the development of a coherent theoretical framework on framing effects, their formation conditions, and their relation to trust and distrust evolution in the HRI context. A previous online study suggested that linguistic framing modulates inexperienced factory workers’ trust in a collaborative robot at the workplace. This article presents a follow-up study with a sample of students and a realistic HRI setting. Despite using similar framing stimuli, the study failed to replicate significant framing effects on trust, suggesting their context-sensitivity and dependence on individual characteristics of the recipients of the framing. Additionally, a significant increase in trust and decrease in distrust during the HRI highlight the need for time-dependent considerations. The results call for a more fine-grained investigation of linguistic framing effects as well as of trust and distrust evolution in order to develop a comprehensive theoretical framework applicable to the HRI context.
