Abstract

In recent years, organizations have significantly intensified their digitization efforts. This trend is also evident in recruitment, where applicant tracking systems (ATS) have been developed to allow job advertisements to be published online. However, recent studies have shown that recruiting in most organizations is not inclusive, being subject to human biases and prejudices. Most discriminatory activity appears early but subtly in the hiring process; for instance, exclusive phrasing in a job advertisement discourages qualified applicants from minority groups from applying. Existing work is limited to analyzing, categorizing, and highlighting the occurrence of bias in the recruitment process. In this paper, we go beyond this and develop machine learning models for identifying and classifying biased and discriminatory language in job descriptions. We develop and evaluate a machine learning system for identifying five major categories of biased and discriminatory language in job advertisements: masculine-coded, feminine-coded, exclusive, LGBTQ-coded, and demographic/racial language. We combine linguistic features with recent state-of-the-art word embedding representations as input features for various machine learning classifiers. Our results show that the classifiers were able to identify all five categories of biased and discriminatory language with reasonable accuracy. The Random Forest classifier with FastText word embeddings achieved the best performance under ten-fold cross-validation. Our system directly addresses bias in the attraction phase of hiring by identifying and classifying biased and discriminatory language, thereby encouraging recruiters to write more inclusive job advertisements.
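To make the pipeline described above concrete, the sketch below shows one plausible way to pair FastText embeddings with a Random Forest classifier and cross-validation. The abstract does not publish code, so the toy corpus, the hypothetical labels (1 = masculine-coded, 0 = neutral), and all hyperparameters are assumptions for illustration; the paper likely used a much larger labeled corpus, additional linguistic features, and pretrained embeddings.

```python
# Minimal sketch, assuming averaged FastText word vectors as document
# features; this is not the authors' implementation.
import numpy as np
from gensim.models import FastText
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical job-ad snippets standing in for the paper's corpus.
texts = [
    "we need a dominant, competitive ninja to crush targets",
    "we welcome collaborative team members who support each other",
    "seeking an aggressive self-starter who thrives under pressure",
    "we value an inclusive, supportive workplace for everyone",
]
labels = [1, 0, 1, 0]  # assumed labels: 1 = masculine-coded, 0 = neutral

tokens = [t.split() for t in texts]

# Train a small FastText model on the toy corpus; in practice one would
# load pretrained subword vectors instead of training from scratch.
ft = FastText(sentences=tokens, vector_size=50, min_count=1, epochs=20)

# Represent each advertisement as the mean of its word vectors.
X = np.array([np.mean([ft.wv[w] for w in doc], axis=0) for doc in tokens])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
# The paper reports ten-fold cross-validation; cv=2 is used here only
# because the four-document toy corpus cannot support ten folds.
scores = cross_val_score(clf, X, labels, cv=2)
print("mean accuracy:", scores.mean())
```

A multi-class variant of the same setup, with one label per bias category, would follow the five-category scheme the abstract describes.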

