Abstract
Although algorithm-based systems are increasingly used as decision support for managers, research on the effects of algorithm use, and specifically of potential algorithmic bias, on decision-makers is still lacking. To investigate how potential social bias in a recommendation outcome influences trust, fairness perceptions, and moral judgement, we used a moral dilemma scenario. Participants (N = 215) imagined being human resource managers responsible for personnel selection and receiving decision support from either human colleagues or an algorithm-based system. They received an applicant preselection that was either gender-balanced or predominantly male. Although participants perceived algorithm-based support as less biased, they also perceived it as generally less fair and trusted it less. This could be related to the finding that participants perceived algorithm-based systems as more consistent but also as less likely to uphold moral standards. Moreover, participants tended to reject algorithm-based preselections more often than human-based ones and were more likely to apply utilitarian judgements when accepting them, which may indicate different underlying moral judgement processes.