Abstract

Item representation learning is a fundamental task in Sequential Recommendation (SR). Effective representations are crucial because they enable recommender systems to learn relevant relationships between items. SR researchers rely on User Historical Interactions (UHI) to learn effective item representations. While it is well understood that UHI inherently suffer from data sparsity, which weakens item relation signals, it is seldom considered that the interaction between users and items is mediated by an underlying candidate generation process susceptible to bias, noise, and error. These limitations further distort item relationships and hinder the learning of superior item representations. In this work, we seek to amplify weak item relation signals in UHI by augmenting each input sequence with a set of permutations that preserve both local and global context. We employ a multi-layer bi-directional transformer encoder to learn superior contextualized item representations from the augmented data. Extensive experiments on benchmark datasets for next-item recommendation demonstrate that our proposed SR model can recover item relational dynamics distorted during the candidate generation process. In addition, our approach yields superior item representations for many state-of-the-art next-item models employing RNNs and self-attention networks.

Keywords: Context, Sequential recommendation, Recommender systems, Transformers, Data augmentation
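As a rough illustration of the augmentation idea described above, the following is a minimal sketch (not the authors' actual method) of generating permutations of an interaction sequence that perturb item order only within small windows, so that local neighbourhoods are shuffled while the global ordering of windows is kept. The function name, window scheme, and all parameters are hypothetical.

```python
import random

def augment_with_local_permutations(seq, window=3, n_aug=2, seed=0):
    """Hypothetical sketch: produce permuted variants of an item sequence.

    Items are shuffled only inside fixed-size windows, so the relative
    order of the windows (a proxy for global context) is preserved while
    local neighbourhoods (local context) are perturbed.
    """
    rng = random.Random(seed)  # seeded for reproducible augmentations
    variants = []
    for _ in range(n_aug):
        out = []
        for start in range(0, len(seq), window):
            chunk = list(seq[start:start + window])
            rng.shuffle(chunk)  # permute items within this window only
            out.extend(chunk)
        variants.append(out)
    return variants

# Each variant keeps the same items, window by window, in shuffled local order.
variants = augment_with_local_permutations([1, 2, 3, 4, 5, 6], window=3)
```

In a training pipeline, such variants would be fed alongside the original sequence to the encoder so that item co-occurrence signals are seen under multiple orderings.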
