Abstract

The use of algorithms and artificial intelligence (AI) in recruitment is now commonplace, yet research indicates candidates are wary of their use. This paper takes an institution-based trust perspective to investigate whether assurance mechanisms can enhance trust in and acceptance of AI use in job applicant screening. Two experiments provide preliminary evidence for this proposition. In Study 1 (N = 267), we found that assurance in the form of governance by an ethical review board (compared to no information about review) enhanced candidate trust and acceptance of an AI applicant screening tool. Study 1 findings were broadly replicated in Study 2 (N = 653), which further showed that a combination of ethical review and direct human oversight of AI output enhanced trust and acceptance, but direct human oversight alone did not. However, low mean values of trust and acceptance suggest that even with assurance mechanisms in place, candidates were still sceptical. Qualitative findings indicate that human oversight, and the use of AI tools to support rather than replace human decision-making, is important for people to trust AI use in recruitment and selection. Taken together, the findings suggest that assurance mechanisms are necessary but not sufficient to garner trust and acceptance of AI use in recruitment.
