This literature review investigates the complex issue of trust in the context of Artificial Intelligence (AI) applications in Human Resource Management (HRM). Following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines, we systematically reviewed 43 research articles published between 2013 and 2023 to examine attitudes toward AI as a disruptive technology in HRM practices, the ethical and legal challenges that influence trust, and strategies for building a trustworthy future. The findings reveal a mixed picture of trust: growing adoption of AI across HRM practices such as recruitment, selection, performance management, and employee development is accompanied by significant concerns about bias, lack of transparency, privacy, and job displacement. The review identifies key factors that affect trust in AI, including perceived usefulness, ease of use, and fairness, as well as the ethical and legal challenges that erode it, such as data privacy, algorithmic bias, and limited explainability. Furthermore, it discusses the implications for HRM professionals and proposes strategies for enhancing trust, including ensuring human oversight, promoting transparency, and developing AI literacy skills. This review contributes to research on AI in HRM by providing a nuanced understanding of the trust dynamics, challenges, and opportunities associated with this disruptive technology, offering valuable insights for both researchers and practitioners.