Abstract

In recommender systems, top-N recommendation is an important task with implicit feedback data. Although the recent success of deep learning has largely pushed forward research on top-N recommendation, there are increasing concerns about the appropriate evaluation of recommendation algorithms. It is therefore important to study how recommendation algorithms can be reliably evaluated and thoroughly verified. This work presents a large-scale, systematic study of six important factors, spanning three aspects, for evaluating recommender systems. We carefully select 12 top-N recommendation algorithms and eight recommendation datasets. Our experiments are carefully designed and extensively conducted with these algorithms and datasets. In particular, all the experiments in our work are implemented on top of an open-source recommendation library, RecBole [139], which ensures the reproducibility and reliability of our results. Based on the large-scale experiments and detailed analysis, we derive several key findings on the experimental settings for evaluating recommender systems. Our findings show that some settings can lead to substantial or statistically significant differences in the performance ranking of the compared algorithms. In response to recent evaluation concerns, we also provide several suggested settings that are especially important for performance comparison.
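Because the experiments are built on RecBole, a minimal sketch of launching a single top-N evaluation run through its quick-start API is shown below. The model (BPR), dataset (ml-100k), and the split/metric settings are illustrative assumptions rather than the paper's exact configuration, and the config keys follow recent RecBole versions.

```python
# A minimal sketch of one top-N evaluation run with RecBole's
# quick-start entry point (pip install recbole). The model name,
# dataset, and config values are illustrative, not the paper's
# exact experimental settings.
from recbole.quick_start import run_recbole

config_dict = {
    "eval_args": {
        "split": {"RS": [0.8, 0.1, 0.1]},  # ratio split: train/valid/test
        "order": "RO",                     # random ordering of interactions
        "group_by": "user",                # split within each user's history
        "mode": "full",                    # rank against all items (full ranking)
    },
    "metrics": ["Recall", "NDCG"],         # top-N ranking metrics
    "topk": [10],                          # cutoff for Recall@10, NDCG@10
}

# Train and evaluate BPR on MovieLens-100k under the config above;
# RecBole handles data loading, training, and evaluation end to end.
run_recbole(model="BPR", dataset="ml-100k", config_dict=config_dict)
```

Varying the entries of `eval_args` (e.g., the split ratio, ordering, or ranking mode) is how factors such as data splitting and candidate sampling, of the kind studied in this work, can be toggled within a single reproducible pipeline.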
