Abstract

In this study, we compare Bayesian personalized ranking (BPR) algorithms with two recent state‐of‐the‐art algorithms, namely, noisy‐label robust Bayesian point‐wise optimization (NBPO) and Light Graph Convolution Network (LightGCN), to validate and generalize their performance using six publicly available datasets and one proprietary dataset containing web‐based data visualization usage records. We follow the guidelines described in the original studies to pre‐process the input data and evaluate the algorithms with various evaluation metrics. We also perform hyperparameter tuning for the recommendation algorithms to determine the configuration yielding the best recommendation quality, and we observe that the best hyperparameter configuration varies by algorithm and dataset. Our results show some similarities with those of the original studies while differing in certain respects. We find that the adaptive oversampling BPR (AOBPR) and LightGCN algorithms generate higher-quality recommendations than the other algorithms, although algorithm convergence rates vary significantly across datasets. We note that the AOBPR approach is particularly useful for the data visualization recommendation task and can contribute to improved recommendations in practice.
