Abstract
Concerns about gender bias in word embedding models have captured substantial attention in the algorithmic bias research literature. Other types of bias, however, have received far less scrutiny. This work describes a large-scale analysis of sentiment associations in popular word embedding models along the dimensions of gender and ethnicity, but also along the less frequently studied dimensions of socioeconomic status, age, physical appearance, sexual orientation, religious sentiment and political leanings. Consistent with previous scholarly literature, this work finds systemic bias against given names popular among African-Americans in most of the embedding models examined. Gender bias in embedding models, however, appears to be multifaceted and often reversed in polarity relative to what has been regularly reported. Interestingly, using the common operationalization of the term bias in the fairness literature, previously unreported types of bias in word embedding models have also been identified. Specifically, the popular embedding models analyzed here display negative biases against middle- and working-class socioeconomic status, male children, senior citizens, plain physical appearance, and intellectual phenomena such as Islamic religious faith, non-religiosity and conservative political orientation. The reasons for the paradoxical underreporting of these bias types in the relevant literature are probably manifold, but widely held blind spots when searching for algorithmic bias, together with the lack of an established technical vocabulary to unambiguously describe the variety of algorithmic associations, could conceivably play a role. The causal origins of the multiplicity of loaded associations attached to distinct demographic groups within embedding models are often unclear, but the heterogeneity of those associations and their potentially multifactorial roots cast doubt on the validity of grouping them all under the umbrella term bias. Richer and more fine-grained terminology, as well as a more comprehensive exploration of the bias landscape, could help the fairness epistemic community characterize and neutralize algorithmic discrimination more efficiently.
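As a rough illustration of the operationalization alluded to above (a WEAT-style relative-similarity score), the following sketch computes a sentiment-association score for a target term. The word lists and the random stand-in vectors are illustrative assumptions, not the paper's actual stimuli or models; in practice `vec` would be a pretrained embedding lookup.

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def sentiment_association(target, pleasant, unpleasant, vec):
    # Mean similarity of `target` to pleasant words minus its mean
    # similarity to unpleasant words; a negative score means the target
    # sits closer to negative-sentiment vocabulary.
    pos = np.mean([cosine(vec[target], vec[w]) for w in pleasant])
    neg = np.mean([cosine(vec[target], vec[w]) for w in unpleasant])
    return pos - neg

# Illustrative word lists and stand-in random vectors (hypothetical).
pleasant = ["joy", "love", "peace", "wonderful"]
unpleasant = ["agony", "terrible", "horrible", "evil"]
words = pleasant + unpleasant + ["teacher"]
vec = {w: rng.standard_normal(50) for w in words}

print(sentiment_association("teacher", pleasant, unpleasant, vec))
```

Computing such scores for terms denoting each demographic group, and testing whether the group-level differences are significant, is the general shape of the screening described in the abstract.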
Highlights
The term algorithmic bias is often used to imply systematic offsets in algorithmic output that produce unfair outcomes, such as privileging or discriminating against an arbitrary group of people
The only embedding model displaying an apparent lack of overall gender bias is the GloVe model trained on the Twitter corpus
The most noteworthy result of this work is the finding, in most word embedding models, of significant associations between negative sentiment words and terms used to denote conservative individuals or ideas; people of middle- and working-class socioeconomic status; underage and adult males; senior citizens; Muslims; lack of religious faith; and given names popular among African-Americans
Summary
The term algorithmic bias is often used to imply systematic offsets in algorithmic output that produce unfair outcomes, such as privileging or discriminating against an arbitrary group of people. The topic of algorithmic bias has recently elicited widespread attention in the artificial intelligence research community. Popular machine learning artifacts have been used to illustrate how purported societal biases and prejudices creep into models used for computer vision [1], recidivism prediction [2] or language modeling [3]. Word embedding models are dense vector representations of words learned from a corpus of natural language [4]. Word embeddings have revolutionized natural language processing due to their ability to model semantic similarity and relatedness among pairs of words, as well as linear regularities between words that roughly capture meaningful cultural constructs such as gender or social class [5]. The use of word embeddings upstream in natural language processing pipelines has often improved the accuracy of downstream systems [6].
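As a concrete illustration of the similarity and linear-regularity properties described above, the following sketch queries a pretrained GloVe model through gensim's downloader (assuming gensim and network access are available; the model name is one of gensim's stock pretrained downloads, not one of the paper's specific models):

```python
import gensim.downloader as api

# Load pretrained 100-dimensional GloVe vectors (Wikipedia + Gigaword).
glove = api.load("glove-wiki-gigaword-100")

# Semantic similarity between a pair of words (cosine similarity).
print(glove.similarity("doctor", "nurse"))

# A linear regularity of the king - man + woman ~ queen variety,
# the kind of vector arithmetic that captures constructs like gender.
print(glove.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
```

The same vector arithmetic that recovers analogies like this is what allows loaded sentiment associations with demographic terms to surface in downstream systems.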