Abstract

Although virtual healthcare assistants have been widely adopted in the healthcare industry, users still question their reliability. Drawing on the Computers Are Social Actors (CASA) paradigm, the current study conducted an experiment to examine how different chatbot design cues (Chatbot vs. Layperson vs. Doctor) affect users’ trust in virtual healthcare assistants. Results indicate that doctor-like and bot-like design cues (vs. layperson-like design cues) elicited significantly greater perceived chatbot expertise, which in turn enhanced users’ trust in health information. The study further found significant moderating effects of users’ perceived threat on chatbot expertise and privacy concerns. The positive effects of the doctor-like and bot-like design cues on perceived expertise were significant only for users whose perceived threat was high. Interestingly, the doctor-like design cues led to greater privacy concerns when the perceived threat level was low, but to fewer privacy concerns when it was high. The findings offer important theoretical and practical implications for human–computer interaction researchers and chatbot UX/UI designers.

