Abstract

Natural Language Processing (NLP) systems have a pervasive impact on everyday life, yet they harbour gender bias, both overt and latent. The automation of decision-making in NLP models can further exacerbate unfair treatment. In recent years, researchers have begun to notice this issue and have proposed methods to detect and mitigate these biases, yet no consensus on the approaches exists. This paper addresses the interdisciplinary field spanning linguistics and computer science by presenting the most common categories of gender bias and analysing them from both ethical and artificial-intelligence perspectives. Specific methods for detecting and reducing bias are presented, organised around biases arising in raw data, annotation, models, and the linguistic gender system. The paper also offers an overview of current hotspots and future directions for this research topic. Limitations of several detection methods are pinpointed, providing novel insights for future research.
