Abstract

The impact of automated decision-making systems on human lives is growing, emphasizing the need for these systems to be not only accurate but also fair. The field of algorithmic fairness has expanded significantly in the past decade, with most approaches assuming that training and testing data are drawn independently and identically from the same distribution. In practice, however, the training and deployment environments differ, compromising both the performance and the fairness of decision-making algorithms in real-world scenarios. A new area of research has emerged to address how fairness guarantees can be maintained in classification tasks when the data generation processes differ between the source (training) and target (testing) domains. The objective of this survey is to offer a comprehensive examination of fair classification under distribution shift by presenting a taxonomy of current approaches. The taxonomy is organized according to the information available from the target domain, distinguishing between adaptive methods, which adapt to the target environment using that information, and robust methods, which make minimal assumptions about the target environment. Additionally, this study highlights alternative benchmarking methods, investigates the interconnection with related research fields, and identifies potential avenues for future research.
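
As a minimal illustrative sketch (not taken from the survey itself), the toy example below shows the core problem the abstract describes: a classifier whose demographic-parity gap is small on the source domain can become noticeably unfair after a subpopulation shift in the target domain. All function names, distributions, and numbers are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n, mean_a0, mean_a1):
    """Toy data: sensitive attribute a, one feature x with a group-dependent mean, label y."""
    a = rng.binomial(1, 0.5, n)                                   # sensitive attribute
    x = rng.normal(np.where(a == 1, mean_a1, mean_a0), 1.0).reshape(-1, 1)
    y = (x[:, 0] + rng.normal(0.0, 0.5, n) > 0.0).astype(int)     # label driven by x
    return x, a, y

def dp_gap(model, x, a):
    """Demographic-parity gap: |P(yhat = 1 | a = 1) - P(yhat = 1 | a = 0)|."""
    yhat = model.predict(x)
    return abs(yhat[a == 1].mean() - yhat[a == 0].mean())

# Source (training) domain: the two groups have nearly identical feature distributions.
x_src, a_src, y_src = sample(10_000, mean_a0=0.0, mean_a1=0.1)
clf = LogisticRegression().fit(x_src, y_src)

# Target (deployment) domain: group a = 0's feature distribution has shifted downward.
x_tgt, a_tgt, _ = sample(10_000, mean_a0=-0.8, mean_a1=0.1)

print(f"DP gap on source: {dp_gap(clf, x_src, a_src):.3f}")  # small
print(f"DP gap on target: {dp_gap(clf, x_tgt, a_tgt):.3f}")  # substantially larger
```

In this sketch, an adaptive method would use whatever is known about the target distribution (e.g., unlabeled target samples) to correct the model, while a robust method would instead constrain training so the fairness gap stays bounded over a family of plausible shifts.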
