Abstract
The commercialization of artificial intelligence (AI) in healthcare is accelerating, yet academic research on its users remains scarce. To what extent are patients willing to disclose personal health information to AI doctors compared with traditional human doctors, and what factors shape these decisions? The lack of user research has left these questions unanswered. Drawing on privacy calculus theory, this article reports a between-subjects online experiment (N = 582) with a 2 (medical provider: AI vs. human) × 2 (emotional support: low vs. high) × 2 (information sensitivity: low vs. high) design. The results indicated that AI doctors led participants to perceive both lower health benefits and lower privacy risks. Emotional support was not always beneficial: high emotional support provided patients with greater health benefits, but it also posed higher levels of privacy risk. Additionally, high emotional support responses from AI doctors enhanced patients' perceived health benefits, trust, and willingness to disclose health information, whereas the opposite pattern was observed for human doctors.
Published in: International Journal of Human–Computer Interaction