Abstract

The rise of AI-based assessments in hiring contexts has led to significant media speculation regarding their role in exacerbating or mitigating employment inequities. In this study, we examined 46,214 ratings from 4,947 interviews to ascertain whether gender differences in ratings were related to interactions among content (stereotype-relevant competencies), context (occupational gender composition), and rater type (human vs. algorithm). Contrary to the hypothesis that gender differences would be smaller in algorithmic scoring than in human ratings, we found that both human and algorithmic raters scored men higher than women on agentic competencies. Also unexpectedly, algorithmic scoring showed larger gender differences in communal ratings than human raters did (with women rated higher than men) and gender differences of similar magnitude in non-stereotypic competency ratings that ran in opposite directions (humans rated men higher than women, while algorithms rated women higher than men). In more female-dominated occupations, humans tended to rate applicants as generally less competent overall relative to the algorithms, but algorithms rated men more highly in these occupations. Implications for auditing for group differences in selection contexts are discussed.
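As a rough illustration of the kind of group-difference audit the abstract alludes to (not the authors' actual analysis, which likely involved interaction models over the full design), the sketch below computes a standardized gender difference (Cohen's d) within each rater type by competency type cell. The column names (rater_type, competency_type, applicant_gender, rating) are hypothetical placeholders for whatever schema a real ratings dataset uses.

```python
# Minimal sketch of a subgroup-difference audit, assuming a ratings table with
# hypothetical columns: rater_type ("human"/"algorithm"),
# competency_type ("agentic"/"communal"/"non_stereotypic"),
# applicant_gender ("man"/"woman"), and a numeric rating column.
import numpy as np
import pandas as pd


def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Standardized mean difference (a minus b) using a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)


def audit_gender_gaps(ratings: pd.DataFrame) -> pd.DataFrame:
    """Cohen's d (men minus women) for every rater type x competency type cell."""
    rows = []
    for (rater, competency), cell in ratings.groupby(["rater_type", "competency_type"]):
        men = cell.loc[cell["applicant_gender"] == "man", "rating"].to_numpy()
        women = cell.loc[cell["applicant_gender"] == "woman", "rating"].to_numpy()
        rows.append({
            "rater_type": rater,
            "competency_type": competency,
            "d_men_minus_women": cohens_d(men, women),
        })
    return pd.DataFrame(rows)
```

A positive d in a cell would indicate men were rated higher than women by that rater type on that competency; comparing the human and algorithm rows for the same competency shows whether the gap shrinks, grows, or reverses under algorithmic scoring.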
