Abstract

Fairness-aware learning is an active research topic in machine learning. Discrimination prevention aims to eliminate the influence of an unfair training set on a classifier before it is used for prediction. To ensure both the fairness and the accuracy of classification, this paper presents a method that generates fair data sets by identifying and correcting discriminatory samples in the original data. It is a margin-based weighted method for handling discrimination in binary classification tasks while attaining demographic parity and equalized odds. To preserve classification accuracy, a target set is first selected by projecting samples according to the margin principle. For each sample in the target set, a weighted distance measure is then used to identify discriminatory samples, which are subsequently corrected. Experimental results on three real data sets demonstrate that the proposed method achieves better classification fairness and accuracy than existing methods, and the conclusions are not tied to specific fairness criteria or classifiers.
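
The abstract only sketches the pipeline (margin-based selection of a target set, then weighted-distance detection and correction of discriminatory labels), so the following is a minimal illustrative sketch of that kind of pre-processing, not the authors' actual algorithm. All names and parameters here (margin_target_set, weighted_knn_flip, margin_eps, k, the use of logistic regression as the provisional margin model) are assumptions introduced for illustration.

```python
# Hypothetical sketch of a margin-based relabeling pre-processor:
# 1) select samples near a provisional decision boundary (small |margin|),
# 2) compare each selected sample's label with a distance-weighted
#    neighbour vote, and flip labels that look discriminatory.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def margin_target_set(X, y, margin_eps=0.5):
    """Return indices of samples whose provisional margin is small,
    i.e. the region where mislabeled/discriminatory samples are most
    likely to affect the learned boundary (illustrative choice)."""
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    margins = clf.decision_function(X)          # signed distance proxy
    return np.where(np.abs(margins) < margin_eps)[0]

def weighted_knn_flip(X, y, s, target_idx, k=5):
    """For each target sample, take an inverse-distance-weighted vote of
    its k nearest neighbours; correct labels that disagree with the vote
    in a direction consistent with group-based discrimination."""
    y_fixed = y.copy()
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    dist, idx = nn.kneighbors(X[target_idx])
    for row, i in enumerate(target_idx):
        neigh, d = idx[row][1:], dist[row][1:]   # drop the point itself
        w = 1.0 / (d + 1e-8)                     # inverse-distance weights
        vote = np.average(y[neigh], weights=w)   # weighted neighbour label
        pred = int(vote >= 0.5)
        # flip only when the neighbours disagree and the sensitive group s
        # suggests a discriminatory assignment (e.g. protected group s == 1
        # denied a positive outcome its neighbours received)
        if pred != y[i] and ((s[i] == 1 and y[i] == 0) or (s[i] == 0 and y[i] == 1)):
            y_fixed[i] = pred
    return y_fixed

# Usage (synthetic data): relabel, then train any downstream classifier
# on (X, y_fair) instead of (X, y).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 4))
    s = rng.integers(0, 2, size=500)             # sensitive attribute
    y = (X[:, 0] + 0.5 * s + rng.normal(scale=0.5, size=500) > 0).astype(int)
    y_fair = weighted_knn_flip(X, y, s, margin_target_set(X, y))
```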
