Abstract

Knowledge distillation, a model compression technique, has recently drawn considerable attention in recommender systems (RS). Bidirectional distillation trains the teacher and the student models jointly so that the two models can improve each other collaboratively. However, this strategy does not fully exploit the representation capability of individual items and offers little interpretability of item importance, so designing an effective sampling scheme remains worth further study. In this paper, we propose an improved rank discrepancy-aware item sampling strategy to enhance bidirectional distillation learning. Specifically, using the distillation loss, we train the teacher and student models to reflect each user's preference over the unobserved items. We then introduce an improved rank discrepancy-aware sampling strategy based on a feedback learning mechanism, so that only information useful to the other model is transferred. The key part of the multiple-distillation training is to select valuable items that can be re-distilled in the network during training. The proposed technique effectively addresses the inherently high ambiguity of recommendation. Experimental results on several real-world recommender system datasets demonstrate that the improved bidirectional distillation strategy achieves better performance.
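The abstract only sketches the sampling idea at a high level. As one possible reading, rank discrepancy-aware sampling favors unobserved items that one model ranks much higher than the other, since those items carry the most useful information to transfer. The snippet below is a minimal, hedged sketch under that assumption; the function name `rank_discrepancy_sampling` and the parameters `teacher_ranks`, `student_ranks`, `num_samples`, and `temperature` are illustrative and do not come from the paper.

```python
# Minimal sketch of rank discrepancy-aware item sampling (illustrative only;
# not the paper's actual implementation).
import numpy as np

def rank_discrepancy_sampling(teacher_ranks, student_ranks, num_samples,
                              temperature=10.0, rng=None):
    """Sample items the teacher ranks much higher than the student.

    teacher_ranks, student_ranks: dicts mapping item id -> rank position
        (0 = top of the list) over the same set of unobserved items.
    Returns item ids to distill from teacher to student; swapping the
    two arguments gives the student-to-teacher direction.
    """
    rng = rng or np.random.default_rng()
    items = np.array(list(teacher_ranks.keys()))
    # Positive discrepancy: the student under-ranks the item relative to the teacher.
    discrepancy = np.array(
        [student_ranks[i] - teacher_ranks[i] for i in items], dtype=float)
    # Squash the discrepancy so very large rank gaps do not dominate.
    weights = np.tanh(np.maximum(discrepancy, 0.0) / temperature)
    if weights.sum() == 0.0:
        return []  # the two rankings already agree on these items
    probs = weights / weights.sum()
    size = min(num_samples, int(np.count_nonzero(weights)))
    chosen = rng.choice(items, size=size, replace=False, p=probs)
    return chosen.tolist()
```

In a bidirectional setup, this routine would be called twice per user each epoch, once in each direction, and the selected items would then contribute to the corresponding distillation loss term.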
