Abstract

Although Sentiment Analysis (SA) is widely applied in many domains, existing research has revealed that unfairness in SA systems can harm the welfare of less privileged people. Several works propose pre-processing and in-processing methods to eliminate bias in SA systems, but little attention has been paid to post-processing methods that heal bias. Post-processing methods are particularly important for systems that use third-party SA services: such systems have no access to the SA engine or its training data and thus cannot apply pre-processing or in-processing methods. This paper therefore proposes a black-box post-processing method that enables an SA system to heal bias and construct fair results when bias is detected. We propose and investigate six self-healing strategies. Our evaluation on two datasets shows that the best strategy constructs fair results and improves accuracy on the two datasets by 2.76% and 2.85%, respectively. To the best of our knowledge, our work is the first self-healing method that can be deployed to ensure SA fairness without requiring access to the SA engine or its training data.
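The abstract does not detail the six strategies, but the general shape of a black-box post-processing step can be sketched. The example below is purely illustrative and not the paper's method: it assumes a hypothetical `sa_engine` callable (standing in for a third-party SA service returning a sentiment score), detects bias by comparing scores on identity-swapped variants of the input, and heals by averaging the scores. The pronoun list, tolerance, and averaging strategy are all assumptions chosen for the sketch.

```python
# Illustrative sketch of black-box post-processing for SA fairness.
# NOT the paper's method: the engine, the swap rule, and the healing
# strategy (score averaging) are hypothetical stand-ins.

SWAPS = {"he": "she", "she": "he", "him": "her", "her": "him"}

def swap_identity(text):
    """Replace gendered pronouns with their counterparts (toy rule)."""
    return " ".join(SWAPS.get(word, word) for word in text.split())

def heal(sa_engine, text, tol=0.1):
    """Post-process a black-box SA engine without touching its internals.

    Queries the engine on the original text and an identity-swapped
    variant; if the scores diverge beyond `tol` (bias detected), returns
    a neutralized score instead of the raw output.
    """
    original = sa_engine(text)
    swapped = sa_engine(swap_identity(text))
    if abs(original - swapped) > tol:      # bias detected
        return (original + swapped) / 2.0  # one possible healing strategy
    return original
```

Because the engine is only ever called as a function, this pattern works even when the SA service is a remote API whose model and training data are inaccessible, which is exactly the deployment setting the paper targets.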


