ABSTRACT Personal Intelligent Agents (PIAs) such as Siri and Alexa are becoming increasingly popular among users. In this paper, we investigate how two design characteristics of PIAs, humour and voice, shape users’ perceptions of intelligence and anthropomorphism, and how these perceptions relate to cognitive- and emotion-based trust. The results of an online experiment show that humour and voice significantly and positively influence users’ perceptions of anthropomorphism. These perceptions, in turn, strengthen users’ emotion-based trust, which increases their intention to use the PIA. We also find that perceptions of intelligence shape users’ cognitive-based trust in the PIA. Our model is novel in that it examines two key design characteristics of PIAs and articulates their effects on user perceptions. The effect of human-like characteristics, specifically humour and voice, on perceptions of intelligence and anthropomorphism, and their potential impact on users’ cognitive- and emotion-based trust in PIAs, has not previously been explored in an IS context.