Medical artificial intelligence (MAI) creates an opportunity to radically expand access to healthcare across the globe by allowing us to overcome the persistent labor shortages that limit healthcare access. This democratization of healthcare is the greatest moral promise of MAI. Whatever comes of the enthusiastic discourse about the ability of MAI to improve the state-of-the-art in high-income countries (HICs), it will be far less impactful than improving the desperate state-of-the-actual in low- and middle-income countries (LMICs). However, the almost exclusive development of MAI in HICs risks this promise being thwarted by contextual bias, an algorithmic bias that arises when the context of the training data is significantly dissimilar from potential contexts of application, which makes the unreflective application of HIC-based MAI in LMIC contexts dangerous. The use of MAI in LMICs demands careful attention to context. In this paper, I aim to provide that attention. First, I illustrate the dire state of healthcare in LMICs and the hope that MAI may help us to improve this state. Next, I show that the radical differences between the health contexts of HICs and those of LMICs create an extraordinary risk of contextual bias. Then, I explore ethical challenges raised by this risk, and propose policies that will help to overcome those challenges. Finally, I sketch a wide range of related issues that need to be addressed to ensure that MAI has a positive impact on LMICs, and is able to improve, rather than worsen, global health equity.