Abstract

Machine learning models are increasingly being deployed in real-world clinical environments. However, these models often exhibit disparate performance across population groups, potentially leading to inequitable and discriminatory predictions. In this case study, we use several distinct notions of algorithmic fairness to analyze a deep learning model that predicts, from a chest X-ray, whether a patient has a disease. After observing disparities in the false positive rate and false negative rate between groups from several protected classes, we apply algorithmic fairness methods to remove these disparities. However, we find that such algorithmic interventions can have serious unintended consequences. Finally, we question what the appropriate definition of fairness is in the clinical context, and advocate for investigating bias in the data whenever possible, rather than blindly applying algorithmic interventions.
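To make the kind of audit described above concrete, the minimal sketch below (not from the paper) shows one way to compute group-stratified false positive and false negative rates for a binary classifier. The column names `y_true`, `y_pred`, and the grouping attribute are illustrative assumptions, not the paper's actual data schema.

```python
import numpy as np
import pandas as pd

def group_error_rates(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Compute per-group FPR and FNR.

    Assumes binary columns ``y_true`` (ground-truth disease label) and
    ``y_pred`` (thresholded model prediction). These names are
    hypothetical placeholders for whatever the audited dataset uses.
    """
    rows = []
    for group, sub in df.groupby(group_col):
        negatives = sub[sub["y_true"] == 0]
        positives = sub[sub["y_true"] == 1]
        # FPR: fraction of true negatives the model flags as diseased.
        fpr = (negatives["y_pred"] == 1).mean() if len(negatives) else np.nan
        # FNR: fraction of true positives the model misses.
        fnr = (positives["y_pred"] == 0).mean() if len(positives) else np.nan
        rows.append({group_col: group, "FPR": fpr, "FNR": fnr, "n": len(sub)})
    return pd.DataFrame(rows)

# Example usage: audit disparities across a protected attribute such as sex.
df = pd.DataFrame({
    "sex":    ["F", "F", "M", "M", "F", "M"],
    "y_true": [0, 1, 0, 1, 1, 0],
    "y_pred": [1, 1, 0, 0, 0, 0],
})
print(group_error_rates(df, "sex"))
```

Comparing the per-group FPR and FNR columns is what reveals the kind of disparity the case study investigates before deciding whether an algorithmic intervention, or a closer look at the data itself, is warranted.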
