Abstract

Latent factor models have been the state of the art in collaborative filtering (CF) for over a decade and are widely studied. Most models learn a representation of each user/item and compute the inner product between them to produce recommendations. Meanwhile, the inherently long-tailed interaction data make model training difficult and can induce popularity bias. Mitigating popularity bias has become a central research theme in recent years. However, although the problems caused by popularity bias (e.g., the Matthew effect) should not be ignored, an item's popularity often reflects its quality or current trends. Popularity bias mitigation is in fact undesirable for platforms and users who prefer popular items. In this study, we first focus on the inner product model and investigate a desirable property of the inner product for long-tailed data. The inner product is also employed in state-of-the-art CF models, and its effectiveness has been demonstrated empirically. We find that the inner product can model long-tailed user–item interactions through the item vector magnitudes while keeping the representations identifiable. Building on this property, we propose DirectMag, a method that allows a platform, and even individual users, to flexibly manipulate popularity bias. DirectMag sets the vector magnitudes directly after training and controls the degree of popularity bias by adjusting only one parameter. In our experiments, we perform a detailed analysis that goes beyond average recommendation accuracy. We show that conventional methods inevitably suffer from trade-offs regarding item popularity, whereas our method is flexible enough to meet the diverse needs of platforms and users.
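To make the core idea concrete, the following is a minimal sketch of post-training magnitude control for an inner-product model. The specific rescaling rule (raising each item vector's norm to a single exponent `beta`) and all names here are illustrative assumptions, not the paper's actual DirectMag formulation; it only shows how one parameter can dial item-vector magnitudes, and hence popularity bias, up or down after training.

```python
import numpy as np

def rescale_item_magnitudes(item_vecs: np.ndarray, beta: float) -> np.ndarray:
    """Rescale each item vector's magnitude by a single exponent beta.

    NOTE: this power-law rule is a hypothetical illustration, not the
    paper's published method.
    beta = 1.0 keeps the trained magnitudes (original popularity bias);
    beta = 0.0 normalizes all items to unit norm (bias removed);
    beta > 1.0 amplifies the advantage of large-norm (popular) items.
    """
    norms = np.linalg.norm(item_vecs, axis=1, keepdims=True)
    norms = np.maximum(norms, 1e-12)            # guard against zero vectors
    return (item_vecs / norms) * norms ** beta  # keep direction, reset magnitude

def recommend(user_vec: np.ndarray, item_vecs: np.ndarray, k: int, beta: float):
    """Score items by the inner product against the rescaled item vectors."""
    scores = rescale_item_magnitudes(item_vecs, beta) @ user_vec
    return np.argsort(-scores)[:k]

# Example: 1000 items with 64-d embeddings, top-10 list with bias dialed down.
rng = np.random.default_rng(0)
items = rng.normal(size=(1000, 64))
user = rng.normal(size=64)
print(recommend(user, items, k=10, beta=0.5))
```

Because the rescaling touches only magnitudes, the learned directions (and thus the identifiable part of the representations) are untouched, which is what makes a single post-hoc parameter sufficient under these assumptions.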
