Abstract

Algorithms increasingly make decisions in organizations that carry moral consequences, decisions that are ordinarily made by leaders. An important consideration for organizations is therefore whether adopting algorithms in this domain will be accepted by employees and whether the practice will harm the organization's reputation. In light of this emergent phenomenon, we examine employees' perceptions of (a) algorithmic decision-making systems that occupy leadership roles and make moral decisions in organizations, and (b) the reputation of organizations that employ such systems. Furthermore, we examine whether it is sufficient for the decision agent to be recognized as "merely" human, or whether more information about the agent's moral values (in this case, whether the human leader is known to be humble) is needed for that agent to be preferred over an algorithm. Our results reveal that participants in the algorithmic leader condition, relative to those in the human leader and humble human leader conditions, perceive the decision made to be less fair, trustworthy, and legitimate; this in turn produces lower acceptance of the decision and more negative perceptions of the organization's reputation. The human leader and humble human leader conditions do not differ significantly across any main or indirect effects. This latter finding suggests that people prefer human (vs. algorithmic) leadership primarily because the leader is human, not necessarily because the leader possesses particular moral values. Implications for theory and practice, and directions for future research, are discussed.
