Abstract

When evaluating automated systems, some users apply the "positive machine heuristic" (i.e., machines are more accurate and precise than humans), whereas others apply the "negative machine heuristic" (i.e., machines lack the ability to make nuanced subjective judgments). However, little is known about the characteristics that predict which heuristic a given user will apply. We conducted a study in the context of content moderation and found that individual differences in trust in humans, fear of artificial intelligence (AI), power usage, and political ideology predict whether a user will invoke the positive or negative machine heuristic. For example, users who distrust other humans tend to be more positive toward machines. Our findings advance theoretical understanding of user responses to AI systems for content moderation and hold practical implications for designing interfaces that appeal to users who are differentially predisposed toward trusting machines over humans.
