Abstract

This paper presents a novel sequential recommendation method based on reinforcement learning and knowledge graphs that incorporates an empirical distribution function. Conventional sequential recommendation methods use a reward function that combines sequence-level and knowledge-level rewards, but they do not account for variations in the range of values each level's reward can take. The proposed method introduces an empirical distribution function into the reward function so that learning is robust to these variations. The proposed method provides more accurate sequential recommendations than baseline and state-of-the-art recommendation methods.
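
As a rough illustration of the core idea (not the authors' implementation, which is not given here), the sketch below maps each level's reward through an empirical cumulative distribution function before combining them, so that both rewards land on a common [0, 1] scale regardless of their original ranges. The reward histories, the weighting parameter `alpha`, and all function names are hypothetical.

```python
import numpy as np

def ecdf(samples):
    """Build an empirical distribution function from observed reward samples."""
    sorted_samples = np.sort(np.asarray(samples, dtype=float))
    def F(x):
        # Fraction of observed samples <= x; always lies in [0, 1].
        return np.searchsorted(sorted_samples, x, side="right") / len(sorted_samples)
    return F

# Hypothetical reward histories for the two levels; note the very
# different value ranges, which is the problem the ECDF addresses.
sequence_level_history = [0.2, 1.5, 0.9, 3.1, 0.4]
knowledge_level_history = [10.0, 42.0, 7.5, 88.0, 23.0]

F_seq = ecdf(sequence_level_history)
F_kg = ecdf(knowledge_level_history)

def combined_reward(r_seq, r_kg, alpha=0.5):
    # ECDF normalization makes the weighted combination insensitive
    # to the differing scales of the two reward signals.
    return alpha * F_seq(r_seq) + (1 - alpha) * F_kg(r_kg)

print(combined_reward(1.0, 30.0))  # both terms are comparable in [0, 1]
```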
