Medical artificial intelligence (AI) is expected to deliver worldwide access to healthcare. Through three experimental studies with Chinese and American participants, we tested how the design of medical AI differs when it targets in-groups versus out-groups. Participants adopted the role of a medical AI designer and decided how to develop medical AI for in-group or out-group patients, depending on their experimental condition. Study 1 (pre-registered; N = 191) revealed that Chinese participants were less likely to incorporate human doctors' assistance into a medical AI system when targeting patients from the US (i.e., the out-group) than patients from China (i.e., the in-group). Study 2 (N = 190) revealed that US participants were less likely to incorporate human doctors' assistance into a medical AI system when targeting patients from China (i.e., the out-group) than patients from the US (i.e., the in-group). Study 3 revealed that Chinese medical students (N = 160) selected a smaller training database for an AI diagnosing diabetic retinopathy when it targeted US patients (i.e., the out-group) than Chinese patients (i.e., the in-group), and this effect was stronger among medical students from higher (vs. lower) socioeconomic backgrounds. This inequity in AI design was mediated by individuals' underestimation of out-group heterogeneity. Overall, our evidence suggests that out-group stereotypes shape the design of medical AI, unwittingly undermining healthcare quality. These findings underline the need for more robust data on medical AI development and for intervention research addressing healthcare inequity.