Abstract

Natural language processing (NLP) is increasingly applied to a broad range of sensitive tasks, such as human resources, biomedicine, and healthcare. Accordingly, a growing body of research is investigating sex and gender bias in NLP models and in the data on which they are trained. As NLP systems become more pervasive in society, their vulnerability to sex and gender bias risks perpetuating prejudice and discriminatory decisions. Addressing this challenge requires widespread awareness of bias within the NLP community, along with more robust learning algorithms and fair solutions for the development and evaluation of NLP methods. In this chapter, we survey state-of-the-art NLP models and some popular applications in biomedicine and health, with special emphasis on chatbots for mental health. We also discuss the sources and implications of bias in this area and present examples of notable debiasing methods.

