Abstract

Despite repeated reports of socially inappropriate and dangerous chatbot behaviour, chatbots are increasingly used as mental health services providing support for young people. In such sensitive settings, the notion of perceived moral agency (PMA) is crucial, given its critical role in human-human interactions. In this paper, we investigate the role of PMA in human-chatbot interactions. Specifically, we seek to understand how PMA influences perceptions of trust, likeability, and safety of chatbots for mental health across two distinct age groups. We conduct an online experiment (N = 279) to evaluate chatbots with low and high PMA targeted towards teenagers and adults. Our results indicate increased trust, likeability, and perceived safety in mental health chatbots displaying high PMA. A qualitative analysis revealed four themes capturing participants' expectations of mental health chatbots in general, as well as those targeted towards teenagers: Anthropomorphism, Warmth, Sensitivity, and Appearance manifestation. We show that PMA plays a crucial role in shaping perceptions of chatbots and provide recommendations for designing socially appropriate mental health chatbots.
