Abstract

Automated employment decision tools use machine learning, artificial intelligence, predictive analytics, and other data-driven approaches to enhance candidate experiences and streamline employment-related decision-making, allowing human resources to be concentrated where they are needed most. However, the use of these tools without appropriate safeguards has resulted in a number of high-profile scandals in recent years, particularly in regard to bias. Accordingly, lawmakers have started to propose laws that require bias audits of automated employment decision tools to examine their outputs for subgroup differences. The first of its kind was New York City Local Law 144, and other US states have since followed suit. In this paper, we examine concerns about the effectiveness of this and similar laws, including the suitability of the metrics, the scope of the law, and low levels of compliance. We conclude that although the law is a good first step towards greater transparency around automated employment decision tools and reducing bias, examining outcomes alone is not sufficient to prevent bias elsewhere in the tool. Moreover, effective bias prevention will require a multidisciplinary approach that combines expertise in IO psychology, law, and computer science to develop appropriate metrics and maximize the enforceability of such laws.
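
The subgroup-difference checks such bias audits call for are commonly expressed as impact ratios: each group's selection rate divided by the selection rate of the most-selected group. The sketch below illustrates that calculation only in broad strokes; the group labels, sample data, function names, and output format are illustrative assumptions, not the audit procedure mandated by Local Law 144 or any other statute discussed in the paper.

```python
# Minimal sketch of a subgroup-difference check of the kind a bias audit might report.
# All data and names below are hypothetical and for illustration only.

from collections import defaultdict


def selection_rates(records):
    """Compute the selection rate (selected / total) for each demographic group."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in records:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}


def impact_ratios(records):
    """Divide each group's selection rate by the highest group's selection rate."""
    rates = selection_rates(records)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}


if __name__ == "__main__":
    # Hypothetical audit log: (group label, whether the tool advanced the candidate).
    log = [("A", True), ("A", True), ("A", False), ("A", True),
           ("B", True), ("B", False), ("B", False), ("B", False)]
    for group, ratio in impact_ratios(log).items():
        print(f"group {group}: impact ratio = {ratio:.2f}")
```

A low impact ratio for a group flags a subgroup difference in outcomes, which is the kind of output-level signal these laws require auditors to report; as the abstract notes, such output checks alone cannot catch bias introduced elsewhere in the tool.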
