Abstract

Emotion recognition technologies, while critiqued for bias, questionable validity, and privacy invasion, continue to be developed and applied in a range of domains, including high-stakes settings such as the workplace. We set out to examine emotion recognition technologies proposed for use in the workplace, describing their input data and training, their outputs, and the actions these systems take or prompt. We use these design features to reflect on the technologies' implications through the lens of ethical speculation. We analyzed patent applications describing emotion recognition technologies intended for workplace use (N=86). We found that these technologies scope data collection broadly; claim to reveal not only targets' emotional expressions but also their internal states; and take or prompt a wide range of actions, many of which affect workers' employment and livelihoods. The technologies described in these patent applications frequently violated existing guidelines for ethical automated emotion recognition. We demonstrate the utility of patent applications as a source for ethical speculation. In doing so, we suggest that 1) increasing the visibility of claimed emotional states has the potential to create additional emotional labor for workers (a burden disproportionately borne by low-power and marginalized workers), to contribute to a larger pattern of blurring the boundaries between workplace expectations and workers' autonomy, and, more broadly, to reinforce the regime of data colonialism; and 2) emotion recognition technologies' failures can be invisible, may inappropriately influence high-stakes workplace decisions, and can exacerbate inequity. We discuss the implications of making emotions and emotional data visible in the workplace, and we offer considerations for designers of emotion recognition technologies, the employers who deploy them, and policymakers.
