ABSTRACT Although virtual healthcare assistants have been widely adopted in the healthcare industry, users still question their reliability. Drawing on the Computers Are Social Actors (CASA) paradigm, the current study conducted an experiment to examine how different chatbot design cues (Chatbot vs. Layperson vs. Doctor) affect users’ trust in virtual healthcare assistants. Results indicate that doctor-like and bot-like design cues (vs. layperson-like design cues) elicited significantly greater perceived chatbot expertise, which in turn enhanced users’ trust in health information. The study further found that users’ perceived threat significantly moderated the effects of design cues on perceived chatbot expertise and privacy concerns. The positive effects of the doctor-like and bot-like design cues on perceived expertise were significant only for users whose perceived threat was high. Interestingly, the doctor-like design cues led to greater (lower) privacy concerns when the perceived threat level was low (high). The findings offer important theoretical and practical implications for human–computer interaction researchers and chatbot UX/UI designers.