Abstract

Machine learning (ML) models are increasingly being used to aid decision-making in high-risk applications. However, these models can perpetuate biases present in their training data or the systems in which they are integrated. When unaddressed, these biases can lead to harmful outcomes, such as misdiagnoses in healthcare [11], wrongful denials of loan applications [9], and over-policing of minority communities [2, 4]. Consequently, the fair ML community is dedicated to developing algorithms that minimize the influence of data and model bias.
