Abstract

Deep learning algorithms have achieved tremendous success over the past few years. However, the biased behavior of deep models, where a model favors or disfavors certain demographic subgroups, remains a major concern in the deep learning community, and several adverse consequences of biased predictions have been observed. One way to alleviate the problem is to train deep models explicitly for fair outcomes. In this research, we therefore propose a novel loss function, termed Uniform Misclassification Loss (UML), for training deep models toward unbiased outcomes. The proposed UML function penalizes the model for its worst-performing subgroup, mitigating bias while enhancing overall model performance, and it remains effective when training with imbalanced data. Further, a metric, the Joint Performance Disparity measure (JPD), is introduced to jointly measure overall model performance and bias in model predictions. Multiple experiments have been performed on four publicly available datasets for facial attribute prediction, with comparisons against existing bias mitigation algorithms, and results are reported using both performance and bias evaluation metrics. The proposed loss function outperforms existing bias mitigation algorithms, showcasing its effectiveness in obtaining unbiased outcomes and improved performance.
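
The abstract describes UML only at a high level, as a loss that penalizes the worst-performing subgroup. The snippet below is a minimal illustrative sketch of that idea in PyTorch, not the paper's exact formulation: the function name `worst_group_penalty_loss`, the `penalty_weight` hyperparameter, and the use of per-subgroup mean cross-entropy are all assumptions made for illustration.

```python
import torch
import torch.nn.functional as F


def worst_group_penalty_loss(logits, labels, group_ids, penalty_weight=1.0):
    """Illustrative sketch of a worst-subgroup penalty loss (not the paper's exact UML).

    Adds to the standard average cross-entropy an extra term proportional to the
    loss of the subgroup that is currently performing worst, so optimization is
    pushed to close the gap between demographic subgroups.
    """
    # Per-sample cross-entropy, kept unreduced so it can be split by subgroup.
    per_sample = F.cross_entropy(logits, labels, reduction="none")

    # Mean loss of each subgroup present in the batch.
    group_losses = torch.stack(
        [per_sample[group_ids == g].mean() for g in torch.unique(group_ids)]
    )

    # Overall loss plus a penalty on the worst-performing subgroup.
    return per_sample.mean() + penalty_weight * group_losses.max()
```

Under these assumptions, the loss is a drop-in replacement for plain cross-entropy in a training loop, provided each batch carries a tensor of subgroup labels (`group_ids`) alongside the class labels.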
