Abstract
As machine learning algorithms are increasingly deployed for high-impact automated decision-making, bias (in datasets or tasks) has become one of the most critical challenges in machine learning applications. Such challenges range from racial bias in face recognition to gender bias in hiring systems, where race and gender are referred to as sensitive attributes. In recent years, much progress has been made in ensuring fairness and reducing bias in standard machine learning settings. Among these efforts, learning representations that are fair with respect to the sensitive attributes has attracted increasing attention because of its flexibility in learning rich representations, building on advances in deep learning. In this article, we propose graph-fair, an algorithmic approach to learning fair representations under graph Laplacian regularization, which reduces both the separation between groups and the clustering within each group by encoding the sensitive-attribute information into the graph. We theoretically establish the connection between graph regularization and distance correlation, showing that the latter can be regarded as a standardized version of the former, with the additional advantage of being scale-invariant. We therefore adopt distance correlation as a fairness constraint that decreases the dependence between sensitive attributes and latent representations; we call this variant dist-fair. In contrast to existing approaches that rely on dependence measures or adversarial generators, both graph-fair and dist-fair provide simple fairness constraints that eliminate the need for parameter tuning (e.g., kernel selection) or adversarial networks. Experiments on real-world corpora indicate that the proposed fairness constraints, when applied to representation learning, achieve better fairness-utility tradeoffs than existing approaches.
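To make the two constraints named above concrete, the sketch below is a minimal NumPy illustration, not the paper's exact formulation: it computes (i) a graph Laplacian penalty tr(Z^T L Z) on representations Z, where the adjacency is assumed here to connect samples with different sensitive attributes so that minimizing the penalty pulls cross-group representations together, and (ii) the sample distance correlation between Z and the sensitive attribute, which could serve as a scale-invariant dependence penalty. The function names and the specific graph construction are illustrative assumptions.

```python
import numpy as np

def pairwise_dist(x):
    """Euclidean distance matrix for the rows of x (n x d)."""
    sq = np.sum(x ** 2, axis=1, keepdims=True)
    d2 = np.maximum(sq + sq.T - 2.0 * x @ x.T, 0.0)
    return np.sqrt(d2)

def distance_correlation(z, s):
    """Sample distance correlation between representations z (n x d)
    and sensitive attribute s (length n); 0 means no detected dependence."""
    a = pairwise_dist(z)
    b = pairwise_dist(np.asarray(s, dtype=float).reshape(len(s), -1))
    # double-center each distance matrix
    A = a - a.mean(0, keepdims=True) - a.mean(1, keepdims=True) + a.mean()
    B = b - b.mean(0, keepdims=True) - b.mean(1, keepdims=True) + b.mean()
    dcov2 = (A * B).mean()                      # squared distance covariance
    dvar_z2, dvar_s2 = (A * A).mean(), (B * B).mean()
    denom = (dvar_z2 * dvar_s2) ** 0.25
    return float(np.sqrt(max(dcov2, 0.0)) / denom) if denom > 0 else 0.0

def laplacian_fairness_penalty(z, s):
    """Graph penalty tr(Z^T L Z) with an assumed cross-group adjacency:
    W[i, j] = 1 when samples i and j have different sensitive attributes,
    so minimizing the penalty shrinks the gap between groups."""
    s = np.asarray(s)
    W = (s[:, None] != s[None, :]).astype(float)   # hypothetical graph construction
    L = np.diag(W.sum(axis=1)) - W                 # graph Laplacian L = D - W
    return float(np.trace(z.T @ L @ z)) / len(s)

# Toy usage: representations that leak the binary sensitive attribute.
rng = np.random.default_rng(0)
s = rng.integers(0, 2, size=200)                   # binary sensitive attribute
z = rng.normal(size=(200, 16)) + 2.0 * s[:, None]  # representations leaking s
print("distance correlation:", distance_correlation(z, s))
print("Laplacian penalty   :", laplacian_fairness_penalty(z, s))
```

In a training loop, either quantity could be added to the task loss as a weighted regularizer; unlike kernel-based dependence measures, neither requires choosing a kernel, and unlike adversarial approaches, neither introduces an auxiliary network.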