The recently ended COVID-19 pandemic and looming global population aging remind us to prepare for a shortage of doctors, which may become an urgent medical crisis. Medical AI could relieve this shortage and sometimes performs better than human doctors; however, people are reluctant to trust medical AI because of algorithm aversion. Although several factors that can reduce algorithm aversion have been identified, they are not effective enough to make medical AI people’s first choice. Therefore, inspired by the direct and indirect information model of trust and the media equation hypothesis, this research explored a new way to minimize aversion to medical AI by highlighting its social attributes. In three between-subjects studies, a medical AI system’s direct information (i.e., the transparency and quantitation of its decision-making process, DMP) and indirect information (i.e., social proof) were manipulated. Studies 1 (N = 193) and 2 (N = 429) showed that transparency of the DMP and social proof increased trust in the AI but did not affect trust in human doctors; social proof interacted with a non-quantitative DMP, but not a quantitative DMP, in shaping trust in the AI. Study 3 (N = 184) further revealed that the combination of a transparent, non-quantitative DMP and near-perfect social proof minimized algorithm aversion. These results extended the direct-indirect information model of interpersonal trust to human-AI trust, revealed a conditional media equation effect in human-AI trust, and offered practical implications for medical AI interface design.