Abstract

Affective AI, or emotion-recognition artificial intelligence, is increasingly adopted to heighten organizational capabilities. Like other machine learning systems, affective AI may be susceptible to algorithmic failures that lead to unfair and/or biased outcomes. The objective of this paper is to explore the effectiveness of information transparency and human augmentation in offsetting these types of algorithmic failures, particularly in light of human cognitive biases such as the anchoring effect. This study scored two datasets for emotions with three commercially available affective AI tools. Labelers then scored emotions and facial expressions in images with varying access to the affective AI models’ outputs and to average demographic parity. The study yielded several interesting findings. First, human augmentation was effective at counterbalancing some inference inconsistencies, e.g., when an affective AI tool identified a particular facial expression but did not infer the concomitant emotion. Second, facial expression uncertainty, i.e., disagreement among affective AI models about an image’s facial expression, was associated with demographic-based differences in the emotions recognized by humans and by affective AI models. Third, information transparency, i.e., reporting average demographic parity, affected human emotion scores but often led to spillovers across all images, not just images from a particular population. This paper contributes to our understanding of affective AI, information transparency, and human augmentation for algorithmic failures, especially for AI whose fairness, like that of affective AI, is difficult to quantify.
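The transparency treatment described above reports average demographic parity to labelers. As a minimal illustrative sketch (not the paper's actual measurement code, and with hypothetical inputs), demographic parity for a single binary emotion inference can be expressed as the gap in positive-inference rates across demographic groups:

```python
def demographic_parity_gap(preds, groups):
    """Largest absolute difference in positive-prediction rates across groups.

    preds  -- list of 0/1 model inferences (e.g., 1 = "happy" detected)
    groups -- list of demographic group labels, aligned with preds
    """
    rates = {}
    for g in set(groups):
        # Collect predictions belonging to this group and compute its rate
        members = [p for p, grp in zip(preds, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    vals = sorted(rates.values())
    # A gap of 0 means all groups receive the positive label at equal rates
    return vals[-1] - vals[0]


# Hypothetical example: group A is labeled "happy" far more often than group B
preds = [1, 1, 0, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(preds, groups))  # 0.75 - 0.25 = 0.5
```

Averaging such gaps over the emotions scored would yield an "average demographic parity" figure of the kind the study reports to labelers.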
