Abstract

Bias in recommender systems is an important challenge. In this paper, we focus on mitigating bias via uniform data. Previous work has shown that even simple modeling with uniform data can alleviate bias and improve performance. However, uniform data are usually scarce and expensive to collect in a real product. To use the valuable uniform data more effectively, we propose a novel and general knowledge distillation framework for counterfactual recommendation, with four specific methods: label-based distillation, feature-based distillation, sample-based distillation, and model structure-based distillation. We also discuss the relation between the proposed framework and prior work. We then conduct extensive experiments on both public and product datasets to verify the effectiveness of the four proposed methods. In addition, we analyze how the performance of the proposed methods varies with several key factors, as well as the resulting changes in the distribution of the recommendation lists. Finally, we emphasize that counterfactual modeling with uniform data is a rich research area, and list several interesting and promising topics worthy of further exploration. The source code is available at <uri>https://github.com/dgliu/TKDE_KDCRec</uri>.
