Abstract
Sequential recommendation has become a trending research topic for its capability to capture dynamic user intents from historical interaction sequences. To train a sequential recommendation model, it is common practice to optimize the next-item recommendation task with a pairwise ranking loss. In this paper, we revisit this typical training method from the perspective of contrastive learning and find that it can be viewed, both conceptually and mathematically, as a specialized contrastive learning task, which we name *context-target contrast*. Further, to leverage other self-supervised signals in user interaction sequences, we propose another contrastive learning task that encourages augmented sequences, as well as sequences with the same target item, to have similar representations, called *context-context contrast*. We design a general framework, ContraRec, to unify the two kinds of contrast signals, yielding a holistic joint-learning framework for sequential recommendation with different contrastive learning tasks. Moreover, various sequential recommendation methods (e.g., GRU4Rec, Caser, and BERT4Rec) can be easily integrated as the base sequence encoder in the ContraRec framework. Extensive experiments on three public datasets demonstrate that ContraRec achieves superior performance compared to state-of-the-art sequential recommendation methods.
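The claimed equivalence between the pairwise ranking loss and a contrastive objective can be illustrated with a minimal sketch. The function names and the toy scalar scores below are illustrative assumptions, not the paper's implementation: with a single sampled negative, a softmax (InfoNCE-style) context-target contrast reduces exactly to the BPR pairwise ranking loss.

```python
import math

def bpr_loss(pos_score, neg_score):
    # Bayesian Personalized Ranking loss: -log sigmoid(pos - neg).
    return -math.log(1.0 / (1.0 + math.exp(-(pos_score - neg_score))))

def context_target_contrast(pos_score, neg_scores, temperature=1.0):
    # Softmax-based contrast: the context representation should score
    # the true next item higher than all sampled negative items.
    logits = [pos_score / temperature] + [s / temperature for s in neg_scores]
    log_norm = math.log(sum(math.exp(l) for l in logits))
    return -(pos_score / temperature - log_norm)

# With exactly one negative and temperature 1, the two losses coincide:
# -log(e^p / (e^p + e^n)) = -log sigmoid(p - n).
print(abs(bpr_loss(2.0, 0.5) - context_target_contrast(2.0, [0.5])) < 1e-9)
```

With more than one negative (or a temperature below 1), the softmax form generalizes the pairwise loss, which is the sense in which next-item training is a specialized contrastive task.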