Abstract

Emotion artificial intelligence (AI) systems vary systematically in their ability to identify emotions accurately, and this variation creates potential biases. In this paper, we conduct an experiment involving three commercially available emotion AI systems and a group of human labelers tasked with identifying emotions from two image data sets. The study focuses on the alignment between facial expressions and the emotion labels assigned by both the AI and the humans. Importantly, the human labelers are given the AI’s scores and informed about its algorithmic fairness measures. The paper presents several key findings. First, the labelers’ scores are influenced by the emotion AI’s scores, consistent with the anchoring effect. Second, information transparency about the AI’s fairness does not uniformly affect human labeling across different emotions; moreover, it can even increase inconsistencies in human labeling. Third, significant inconsistencies in scoring among the different emotion AI models cast doubt on their reliability. Overall, the study highlights the limitations of individual decision making and of information transparency about algorithmic fairness measures as remedies for algorithmic fairness concerns. These findings underscore the complexity of integrating emotion AI into practice and emphasize the need for careful policies governing its use.
