Abstract

Personal Intelligent Agents (PIAs) such as Siri and Alexa are becoming increasingly popular among users. In this paper, we investigate how the humour and voice of personal intelligent agents shape users' perceptions of intelligence and anthropomorphism, and how these perceptions relate to cognitive- and emotion-based trust. The results of an online experiment show that humour and voice significantly and positively influence users' perceptions of anthropomorphism. These perceptions positively affect users' emotion-based trust, which in turn increases their intention to use the PIA. We also find that perceptions of intelligence shape users' cognitive-based trust in the PIA. Our model is novel in that it examines two key design characteristics of PIAs and articulates their effects on user perceptions. The effect of human-like characteristics, specifically humour and voice, on perceptions of intelligence and anthropomorphism, and the potential impact on users' cognitive- and emotion-based trust in PIAs, has not previously been explored in an IS context.

