Abstract

Recent research demonstrates that word embeddings trained on human-generated corpora carry strong gender biases in their embedding spaces, and these biases can lead to discriminatory results in various downstream tasks. Whereas previous methods project word embeddings onto a linear subspace for debiasing, we introduce a Latent Disentanglement method based on a siamese auto-encoder with an adapted gradient reversal layer. Our structure separates the semantic latent information and the gender latent information of a given word into disjoint latent dimensions. We then introduce Counterfactual Generation to convert the gender information of words, so that the original and the modified embeddings can produce a gender-neutralized word embedding after geometric alignment regularization, without loss of semantic information. Across various quantitative and qualitative debiasing experiments, our method outperforms existing debiasing methods for word embeddings. In addition, our method preserves semantic information during debiasing, minimizing semantic losses on extrinsic NLP downstream tasks.
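
The abstract describes the pipeline only at a high level. The sketch below is a minimal PyTorch illustration of the general idea: an auto-encoder whose latent code is split into a semantic block and a gender block, an adversarial gender probe placed behind a gradient reversal layer, and counterfactual generation by swapping the gender latents of a gender pair. All layer sizes, names, loss components, and the counterfactual helper are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of latent disentanglement with a
# gradient reversal layer and counterfactual generation by latent swapping.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)


class DisentanglingAutoEncoder(nn.Module):
    """Encodes a word embedding into disjoint semantic / gender latent blocks.
    Dimensions below are assumptions for illustration only."""
    def __init__(self, emb_dim=300, sem_dim=256, gen_dim=44):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(emb_dim, emb_dim), nn.ReLU(),
                                     nn.Linear(emb_dim, sem_dim + gen_dim))
        self.decoder = nn.Sequential(nn.Linear(sem_dim + gen_dim, emb_dim), nn.ReLU(),
                                     nn.Linear(emb_dim, emb_dim))
        # Adversarial probe: tries to predict gender from the *semantic* latent;
        # the gradient reversal layer pushes the encoder to remove that signal.
        self.gender_probe = nn.Linear(sem_dim, 2)
        self.sem_dim = sem_dim

    def forward(self, emb):
        z = self.encoder(emb)
        z_sem, z_gen = z[:, :self.sem_dim], z[:, self.sem_dim:]
        recon = self.decoder(torch.cat([z_sem, z_gen], dim=-1))
        gender_logits = self.gender_probe(grad_reverse(z_sem))
        return recon, z_sem, z_gen, gender_logits


def counterfactual(model, emb_a, emb_b):
    """Swap the gender latents of a gender pair (e.g. embeddings of 'she'/'he')
    to generate counterfactual embeddings. Averaging an embedding with its
    counterfactual is one simple way to neutralize gender under this sketch."""
    _, sem_a, gen_a, _ = model(emb_a)
    _, sem_b, gen_b, _ = model(emb_b)
    cf_a = model.decoder(torch.cat([sem_a, gen_b], dim=-1))  # 'a' with b's gender latent
    cf_b = model.decoder(torch.cat([sem_b, gen_a], dim=-1))  # 'b' with a's gender latent
    return cf_a, cf_b
```

The siamese aspect in this sketch amounts to running the same encoder and decoder on both members of a gender pair and comparing or swapping their latent blocks; the actual training losses and alignment regularization are described in the paper itself.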

Highlights

  • Recent research has revealed that word embeddings contain unexpected bias in their geometry in the embedding space (Bolukbasi et al., 2016; Zhao et al., 2019)

  • Illustration: feminine and masculine word embeddings with a gender-pair relationship; gender-biased, gender-counterfactual, and neutralized word embeddings. An example of the biased analogies is the relatively closer distance of she to nurse and of he to doctor

  • If the gender direction vector includes a component of semantic information, that semantic information will be lost through the post-processing projections (see the sketch after this list)
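
To make the last highlight concrete, the sketch below shows the standard linear-subspace projection step in plain NumPy: if the estimated gender direction is contaminated with a semantic component, projecting the direction out also shrinks that semantic component. The vectors and dimensions are toy values chosen for illustration, not from the paper.

```python
import numpy as np


def project_out(v, g):
    """Remove the component of v along the (unit-normalized) direction g,
    as linear-subspace debiasing methods do."""
    g = g / np.linalg.norm(g)
    return v - np.dot(v, g) * g


# Toy 3-d example: pretend axis 0 is gender and axis 1 carries semantics,
# but the estimated "gender direction" is slightly contaminated by axis 1.
gender_dir = np.array([1.0, 0.3, 0.0])   # imperfect direction: mixes in semantics
word = np.array([0.2, 0.8, 0.5])         # a word whose meaning lives on axes 1-2

debiased = project_out(word, gender_dir)
print(debiased)  # the axis-1 (semantic) component shrinks as well: information is lost
```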



Introduction

Recent research has revealed that word embeddings contain unexpected bias in their geometry in the embedding space (Bolukbasi et al., 2016; Zhao et al., 2019). Bolukbasi et al. (2016) showed that automatically generated analogies of (she, he) in Word2Vec (Mikolov et al., 2013b) exhibit gender bias at a significant level. Garg et al. (2018) demonstrated that embeddings, from Word2Vec (Mikolov et al., 2013a) to GloVe (Pennington et al., 2014), have strong associations between value-neutral words and population-segment words, e.g., a strong association between housekeeper and Hispanic. This unwanted bias can cause biased results in downstream tasks (Caliskan et al., 2017a; Kiritchenko and Mohammad, 2018; Bhaskaran and Bhallamudi, 2019) and gender discrimination in NLP systems.
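
These analogy and association tests are straightforward to reproduce with gensim and a set of pretrained vectors. The snippet below is one possible way to do so; the embedding file path is a placeholder, and the exact neighbours and similarity scores depend on the vectors you load.

```python
# Reproducing analogy- and association-based bias checks with gensim.
from gensim.models import KeyedVectors

# Placeholder path: any word2vec-format embedding file can be substituted here.
kv = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)

# she : X :: he : doctor -> the top completions may include stereotyped occupations
print(kv.most_similar(positive=["she", "doctor"], negative=["he"], topn=5))

# Direct association check: she tends to sit closer to 'nurse', he to 'doctor'
print(kv.similarity("she", "nurse"), kv.similarity("he", "nurse"))
print(kv.similarity("she", "doctor"), kv.similarity("he", "doctor"))
```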

