Abstract

Recommender systems are designed to help us navigate through an abundance of online content. Collaborative filtering (CF) approaches are commonly used to leverage the behaviors of others with a similar taste to make predictions for the target user. However, CF is prone to introducing or amplifying popularity bias, in which popular (often consumed or highly ranked) items are prioritized over less popular items. Many computational metrics of popularity bias, and of the resulting algorithmic (un)fairness, have been presented. However, it is largely unclear whether these metrics reflect human perception of bias and fairness. We conducted a user study with 170 participants to explore how users perceive recommendation lists created by algorithms with different degrees of popularity bias. Our results show, surprisingly, that popularity biases in recommendation lists are barely noticed by users, even when the corresponding bias/fairness metrics clearly indicate them.
