Abstract
Research on human-robot interaction indicates possible differences in trust toward robots that do not exist in human-human interactions. Research on these differences has traditionally focused on performance degradations. The current study explored differences between human-robot and human-human trust interactions using performance, consideration, and morality trustworthiness manipulations, based respectively on the ability/performance, benevolence/purpose, and integrity/process manipulations from previous research. We used a mixed factorial design, analyzed with hierarchical linear models, to explore the effects of the trustworthiness manipulations on trustworthiness perceptions, trust intentions, and trust behaviors in a trust game. We found partner (human versus robot) differences across all three trustworthiness perceptions, indicating that biases toward robots may be more expansive than previously thought. Additionally, partner differences had marginal effects on trust intentions. Interestingly, there were no differences between partners on trust behaviors. Results indicate that human biases toward robots may be more complex than the literature has considered.