Abstract

Robots that are capable of outperforming human beings on mental and physical tasks provoke perceptions of threat. In this article we propose that implicit self-theory (core beliefs about the malleability of self-attributes, such as intelligence) determines the degree to which a person experiences such threat. We test this possibility in a novel experiment in which participants watched a video of an apparently autonomous intelligent robot defeating human quiz players in a general knowledge game. After the video, participants received social comparison feedback, improvement-oriented feedback, or no feedback, and were then given the opportunity to play against the robot. We show that those who hold a malleable self-theory (incremental theorists) are more likely to play against the robot after imagining losing to it, and that they exhibit more favorable responses and less identity threat than entity theorists (those who hold a fixed self-theory). Moreover, entity theorists perceive autonomous intelligent robots as significantly more threatening than incremental theorists do, in terms of both realistic and identity threats. These findings offer novel theoretical and practical implications and enrich the HRI literature by demonstrating that implicit self-theory is an influential variable underpinning perceived threat.
