Abstract
Inspired by the emerging documentation paradigm emphasizing data and model transparency, this study explores whether displaying racial diversity cues in training data and labelers’ backgrounds enhances users’ expectations of algorithmic fairness and trust in AI systems, even to the point of making them overlook racially biased performance. It also explores how their trust is affected when the system invites their feedback. We conducted a factorial experiment (N=597) to test hypotheses derived from a model of Human-AI Interaction based on the Theory of Interactive Media Effects (HAII-TIME). We found that racial diversity cues in either training data or labelers’ backgrounds trigger the representativeness heuristic, which is associated with higher algorithmic fairness expectations and increased trust. Inviting feedback enhances users’ sense of agency and is positively related to behavioral trust, but it reduces usability for Whites when the AI shows unbiased performance. Implications for designing socially responsible AI interfaces are discussed, considering both users’ cognitive limitations and usability.