Abstract

While personalization increases the utility of item recommendation, it also suffers from popularity bias. Previous methods emphasize supervised learning models that relieve popularity bias in static recommendation, ignoring the dynamic shift of user preferences and the amplification effects of the feedback loop in the recommender system (RS). In this paper, we study this issue in the interactive recommendation setting. We argue that diversification and novelty are equally crucial for improving user satisfaction with an interactive recommender system (IRS) in this setting. To achieve this goal, we propose a Diversity-Novelty-aware Interactive Recommendation framework (DNaIR) that augments offline reinforcement learning (RL) to increase the exposure rate of high-quality long-tail items. Its main idea is to first aggregate item similarity, popularity, and quality into the reward model to guide the planning of the RL policy, and then to design a diversity-aware stochastic action generator, yielding an efficient and lightweight DNaIR algorithm. Extensive experiments are conducted on three real-world datasets and an authentic RL environment (Virtual-Taobao). The experiments show that our model makes better and fuller use of long-tail items, especially low-popularity items of high quality, to improve recommendation satisfaction, achieving state-of-the-art performance.
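As a rough illustration of the reward design summarized above, the sketch below combines item similarity, popularity, and quality signals into a single shaped reward. The abstract does not specify the exact aggregation, so the function name `dnair_reward`, the coefficients `alpha` and `beta`, and the cosine-similarity diversity term are all assumptions for illustration only.

```python
import numpy as np

def dnair_reward(base_reward: float,
                 item_emb: np.ndarray,
                 recent_embs: np.ndarray,
                 popularity: float,
                 quality: float,
                 alpha: float = 0.5,
                 beta: float = 0.5) -> float:
    """Hypothetical aggregation of similarity, popularity, and quality.

    base_reward : feedback signal (e.g., click/rating) from the environment
    item_emb    : embedding of the recommended item
    recent_embs : embeddings of recently recommended items, shape (n, d)
    popularity  : normalized item popularity in [0, 1]
    quality     : normalized item quality in [0, 1]
    """
    # Diversity term: reward items dissimilar (low cosine similarity)
    # to the recently recommended ones.
    sims = recent_embs @ item_emb / (
        np.linalg.norm(recent_embs, axis=1) * np.linalg.norm(item_emb) + 1e-8)
    diversity = 1.0 - float(sims.mean())
    # Novelty term: reward low-popularity (long-tail) items, gated by
    # quality so that only high-quality long-tail items are boosted.
    novelty = (1.0 - popularity) * quality
    return base_reward + alpha * diversity + beta * novelty
```

In the full framework, a reward of this kind would be fed to the offline RL agent, while the diversity-aware stochastic action generator restricts the candidate action set; neither component is implemented in this sketch.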
