Abstract

With the rapid development of information technology, interactions between artificial intelligence (AI) and humans are becoming increasingly frequent. In this context, a phenomenon called “medical AI aversion” has emerged, in which the same behaviors elicit different responses depending on whether they come from medical AI or from human physicians. Medical AI aversion can be understood in terms of how people attribute mental capacities to different targets. It has been demonstrated that when medical professionals dehumanize patients (attributing fewer mental capacities to them and, to some extent, not perceiving and treating them as fully human), they are more likely to choose treatment options that are painful but effective. From the patient’s perspective, will a painful treatment option become unacceptable when the doctor is perceived as human yet disregards the patient’s own mental capacities? Might a painful treatment plan be accepted precisely because the doctor is an artificial intelligence? The current study investigated these questions and the phenomenon of medical AI aversion in a medical context. Three experiments showed that: (1) when faced with the same treatment plan, patients were more accepting when it came from a human doctor; (2) the treatment provider and the nature of the treatment plan interacted to affect acceptance of the plan; and (3) experience capacities mediated the relationship between treatment provider (AI vs. human) and treatment plan acceptance. Overall, this study attempted to explain medical AI aversion through the lens of mind perception theory, and the findings have implications at the applied level for guiding the more rational use of AI and for persuading patients.

Highlights

  • The results showed a significant main effect of treatment provider, with higher acceptance in the human physician group than in the artificial intelligence (AI) group (M = 4.14, SD = 1.14 for human physicians; M = 3.77, SD = 1.32 for AI; F(1, 227) = 4.524, p = 0.035, d = 0.30), while the main effect of regimen content was not significant (F(1, 227) = 1.423, p = 0.234).

  • We computed experience and agency scores by averaging the abilities belonging to each of the two dimensions of mental capacity, and then examined the mediating effects of experience and agency separately (a hypothetical analysis sketch follows this list).

  • The main objective of the current study is to verify the phenomenon of patients’ aversion to medical AI.
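
The score construction and mediation analysis described in the highlights can be illustrated with a minimal Python sketch. This is a hypothetical illustration, not the authors’ analysis code: the column names, the items assigned to each dimension, the simulated data, and the sample size of 231 (inferred from the reported degrees of freedom) are all assumptions.

```python
# Hypothetical sketch: forming dimension scores as item means and testing a
# simple provider -> experience -> acceptance mediation with a bootstrap.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 231  # assumed total N, inferred from F(1, 227) in a 2 x 2 design

# Simulated stand-in data; the real study's items and coding are not given here.
df = pd.DataFrame({
    "provider": rng.integers(0, 2, n),     # 0 = AI, 1 = human physician (assumed coding)
    "exp_item1": rng.normal(4, 1, n),      # e.g., perceived capacity to feel pain
    "exp_item2": rng.normal(4, 1, n),      # e.g., perceived capacity to feel fear
    "agy_item1": rng.normal(4, 1, n),      # e.g., perceived capacity for self-control
    "agy_item2": rng.normal(4, 1, n),      # e.g., perceived capacity for planning
    "acceptance": rng.normal(4, 1.2, n),   # treatment plan acceptance rating
})

# Dimension scores = mean of the items belonging to each dimension
df["experience"] = df[["exp_item1", "exp_item2"]].mean(axis=1)
df["agency"] = df[["agy_item1", "agy_item2"]].mean(axis=1)

def indirect_effect(data):
    """Product-of-paths indirect effect of provider on acceptance via experience."""
    a = smf.ols("experience ~ provider", data=data).fit().params["provider"]
    b = smf.ols("acceptance ~ experience + provider", data=data).fit().params["experience"]
    return a * b

# Percentile bootstrap confidence interval for the indirect effect
boot = [indirect_effect(df.sample(frac=1, replace=True)) for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(df):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

In this sketch the indirect effect is the product of the provider → experience path and the experience → acceptance path, with a percentile bootstrap interval; the original study may have used a different mediation procedure.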

Introduction

Since AlphaGo’s comprehensive victory over human players in Go, artificial intelligence (AI) has received widespread public attention. AI has developed so rapidly that it can rival, or even surpass, humans in some tasks, and medicine may serve as one of the best examples [1]. In 2018, Stanford University developed a convolutional neural network algorithm called CheXNet. After being trained on a chest X-ray dataset, CheXNet could outperform professional physicians in identifying diseases such as pneumonia. In 2021, Montfort developed a regimen for the treatment of mental illness
