Abstract

People tend to anthropomorphize agents that look and/or act human, and further, they tend to evaluate such agents more positively. This, in turn, has motivated the development of robotic agents that are humanlike in appearance and/or behavior. Yet, some agents -- often those with highly humanlike appearances -- have been found to elicit the opposite effect, wherein they are evaluated more negatively than their less humanlike counterparts. These trends are captured by Masahiro Mori's uncanny valley hypothesis, which describes an (uncanny) valley in emotional responding -- a switch from affinity to dislike -- elicited by agents that are ``too humanlike''. However, while the valley phenomenon has been repeatedly observed via subjective measures, it remains unknown whether such evaluations reflect an impact on a person's behavior (i.e., aversion). We attempt to address this gap in the literature via a novel experimental paradigm employing both traditional subjective ratings and measures of people's behavioral and physiological responses. The results show that not only do people rate highly humanlike robots as uncanny, but moreover, they exhibit greater avoidance of encounters with such robots than of encounters with less humanlike robots and human agents. Thus, the findings not only support Mori's hypothesis, but further, they indicate that the valley should be taken as a serious consideration for people's interactions with humanlike agents.

