Abstract

Voice recognition is increasingly replacing the keyboard as a tool for everyday communication. This extraordinary technology, however, comes with significant risks and moral responsibilities. Over the past decade, the computer science literature has made great strides in advancing the accessibility and capability of speech technology, but it lacks an understanding of the ethical implications of such tools for users and organizations. We argue that management scholars should consider how speech technology affects fairness and ethical behavior in today's organizations. In this paper, we provide an ethical analysis of potential risks to fairness posed by speech technology in the workplace. By analyzing these risks through the lenses of behavioral ethics and three dimensions of organizational fairness – distributive, procedural, and interactional – we identify several concerns relevant to the use of speech technology. We discuss these concerns and offer practical recommendations for promoting fairness in this context.

Keywords: speech technology; distributive fairness; procedural fairness; interactional fairness; biometrics; deep fakes
