Abstract

Little is known about public opinion of autonomous robots. Trust in these robots is a pertinent topic, as this construct relates to one's willingness to be vulnerable to such systems. The current research examined gender-based effects on trust in the context of an autonomous security robot. Participants (N = 200; 63% male), recruited through Amazon's Mechanical Turk, viewed a video depicting an autonomous guard robot interacting with humans. The robot was equipped with a non-lethal device to deter unauthorized visitors, and the video depicted the robot using this device on one of the three humans in the video. However, the scenario was designed to create uncertainty regarding who was at fault: the robot or the human. Following the video, participants rated their trust in the robot, the perceived trustworthiness of the robot, and their desire to utilize similar autonomous robots in several contexts ranging from military to commercial to home use. The results demonstrated that females reported higher trust in, and perceived trustworthiness of, the robot relative to males. Implications for the role of individual differences in trust of robots are discussed.

Highlights

  • Robots are becoming omnipresent, and their use across an increasingly broad portion of society is growing (Breazeal, 2002)

  • The current study examined one of the fundamental individual differences, gender, and its effect on trust in an autonomous security robot

  • There was a statistically significant difference in levels of trust between males (M = 3.39, SD = 1.48) and females (M = 3.89, SD = 1.40); U = 3752, z = −2.30, p < 0.05, r = 0.16. These results suggest that females were more trusting of the robot
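
For readers who want to verify the highlighted statistics, the reported effect size can be recovered from the U statistic via the normal approximation to the Mann-Whitney distribution. The Python sketch below is illustrative only: the group sizes are inferred from N = 200 and 63% male, and this is not the paper's own analysis code.

import math

# Group sizes implied by N = 200 with 63% male (an inference, not reported directly)
n_male, n_female = 126, 74
N = n_male + n_female
U = 3752  # Mann-Whitney U statistic as reported in the highlights

# Normal approximation to the null distribution of U:
mu_U = n_male * n_female / 2                           # mean of U under H0
sigma_U = math.sqrt(n_male * n_female * (N + 1) / 12)  # SD of U under H0
z = (U - mu_U) / sigma_U                               # standardized test statistic
r = abs(z) / math.sqrt(N)                              # effect size r = |z| / sqrt(N)

print(f"z = {z:.2f}")  # prints z = -2.30, matching the reported value
print(f"r = {r:.2f}")  # prints r = 0.16, matching the reported effect size

Both values match those reported above, which is consistent with the effect size having been computed as r = |z| / sqrt(N).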

Introduction

Robots are becoming omnipresent, and their use across an increasingly broad portion of society is growing (Breazeal, 2002). Robots like the Knightscope use many sensors to relay information about suspicious activities and people to their clients. These sensors may feed complex machine learning algorithms that analyze massive amounts of data at speeds faster than humans can match, yet such algorithms are essentially opaque to humans, lacking understandability and reducing trust (Christensen and Lyons, 2017). That complexity, coupled with the physical size of the system, creates an inherent human vulnerability to the robot and a need to understand the public's trust in autonomous security robots. Vulnerability in this sense can be derived through a human having to interact


