Abstract
Published research has documented the bias of automated face-based gender classification algorithms across gender-racial groups: these algorithms achieve unequal accuracy rates for women and dark-skinned people. To mitigate bias in gender classification and other facial-analysis algorithms in general, the vision community has proposed several techniques. However, most existing bias mitigation techniques lack generalizability, require a demographically annotated training set, are application specific, and often trade fairness against classification accuracy, so that fairness is obtained at the cost of reduced classification accuracy for the best-performing demographic sub-group. In this paper, we propose a novel bias mitigation technique that leverages semantic-preserving augmentations at the image and feature level in a self-consistency setting for the downstream gender classification task. Thorough experimental validation on gender-annotated facial image datasets confirms that our technique improves overall gender classification accuracy and reduces bias across all gender-racial groups compared with state-of-the-art bias mitigation techniques. Specifically, our technique reduces bias by an average of 30% over existing bias mitigation techniques and improves overall classification accuracy by about 5% over the baseline gender classifier, yielding state-of-the-art generalization performance in both intra- and cross-dataset evaluations. Additionally, unlike most existing bias mitigation techniques, our technique requires no demographic labels and is application agnostic.
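To make the self-consistency setting concrete, the following is a minimal sketch of one possible training step combining a supervised gender-classification loss with a consistency term between two semantically equivalent views; the specific augmentations, model components, and loss weighting are illustrative assumptions, not the paper's exact implementation.

```python
# Hypothetical sketch: self-consistency training with semantic-preserving
# augmentations at the image and feature level. All names (encoder,
# classifier, augmentation choices, lambda_cons) are illustrative assumptions.
import torch
import torch.nn.functional as F
import torchvision.transforms as T

# Image-level semantic-preserving augmentations (exact set is an assumption).
image_aug = T.Compose([
    T.RandomHorizontalFlip(),
    T.ColorJitter(brightness=0.2, contrast=0.2),
])

def feature_aug(feats, noise_std=0.05):
    """Feature-level perturbation intended to preserve semantics (illustrative)."""
    return feats + noise_std * torch.randn_like(feats)

def training_step(encoder, classifier, images, labels, lambda_cons=1.0):
    # Two semantically equivalent views of each face image.
    view_a, view_b = image_aug(images), image_aug(images)

    # Feature-level augmentation applied to the encoded representations.
    feats_a = feature_aug(encoder(view_a))
    feats_b = feature_aug(encoder(view_b))

    logits_a = classifier(feats_a)
    logits_b = classifier(feats_b)

    # Supervised gender-classification loss; no demographic labels are used.
    ce = F.cross_entropy(logits_a, labels)

    # Self-consistency: predictions for the two views should agree.
    cons = F.kl_div(
        F.log_softmax(logits_a, dim=1),
        F.softmax(logits_b, dim=1),
        reduction="batchmean",
    )
    return ce + lambda_cons * cons
```

The key design point is that the consistency term encourages predictions to be invariant to semantic-preserving perturbations, which requires only the gender labels already present in the training set and no demographic annotations.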