Abstract

With learning models widely deployed in daily life, researchers are discovering that many of them generate discriminatory predictions with respect to sensitive attributes such as gender or race. A common way of tackling this problem is to learn fair features by removing sensitive information from the representation. Such methods require extra heads for attribute prediction and eliminate the information through adversarial training. Although these procedures can impose fairness, they reduce the accuracy of the models relative to the originals. In this research, we generate continuous domains containing information from different subgroups with mixup operations, and we then learn domain-invariant features under similarity constraints. Unlike previous methods, the proposed method, CIFair, can learn fair features without feature-removal operations or task-irrelevant learning objectives. Finally, we evaluated our approach on the CelebA dataset with different sensitive attributes under multiple settings. All experimental results demonstrate that CIFair enforces fairer predictions than previous methods while maintaining model accuracy.
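The abstract outlines two ingredients: mixing inputs from different sensitive subgroups to form continuous intermediate domains, and a similarity constraint that keeps the learned features invariant across those domains. The sketch below illustrates one plausible reading of that recipe in PyTorch; it is not the authors' released code, and the function name, the Beta-sampled mixing coefficient, the cosine-similarity form of the constraint, and the loss weight lam_sim are all assumptions made for illustration.

import torch
import torch.nn.functional as F

def cifair_step(encoder, classifier, x_a, x_b, y_a, y_b, lam_sim=1.0):
    """One illustrative training step on a batch split by sensitive attribute.

    x_a, y_a : inputs/labels from subgroup A (e.g. one gender group)
    x_b, y_b : inputs/labels from subgroup B
    """
    # Sample a mixing coefficient; a Beta distribution is the usual mixup choice.
    alpha = torch.distributions.Beta(1.0, 1.0).sample().item()

    # Mixup across subgroups produces a point on a continuous path between them,
    # i.e. an intermediate "domain" containing information from both groups.
    x_mix = alpha * x_a + (1.0 - alpha) * x_b

    z_a, z_b, z_mix = encoder(x_a), encoder(x_b), encoder(x_mix)

    # Task loss: standard mixup objective on the classifier outputs.
    logits = classifier(z_mix)
    task_loss = alpha * F.cross_entropy(logits, y_a) + \
                (1.0 - alpha) * F.cross_entropy(logits, y_b)

    # Similarity constraint: features of the mixed input should stay close to the
    # matching interpolation of the subgroup features, discouraging the encoder
    # from encoding which subgroup an input came from.
    z_target = (alpha * z_a + (1.0 - alpha) * z_b).detach()
    sim_loss = 1.0 - F.cosine_similarity(z_mix, z_target, dim=-1).mean()

    return task_loss + lam_sim * sim_loss

In this reading, no adversarial attribute head or feature-removal step is needed: the fairness pressure comes entirely from the invariance term added to the ordinary task loss, which is consistent with the abstract's claim that accuracy is preserved.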
