Abstract
Sequential recommendation models aim to predict the items a user will be interested in based on their historical behaviors. To train sequential recommenders, implicit feedback data is widely adopted since it is easier to obtain than explicit feedback. In the implicit feedback setting, a user's historical behaviors can be characterized as a chronologically ordered sequence of interacted items. From a machine learning perspective, the historical interaction sequence and the recommended items can be regarded as the context and the label, respectively, both of which are usually given one-hot representations in recommendation models. However, due to their discrete nature, one-hot representations cannot sufficiently reflect the underlying user preference, and may also contain noise from implicit feedback that misleads model training. To address these issues, we propose a general optimization framework, Multi-View Smoothness (MVS), which enhances the smoothness of sequential recommendation models in both data representation and model learning. Specifically, with the help of a complementary model, we smooth and enrich the one-hot representations of contexts and labels to better depict the underlying user preference (i.e., context smoothness and label smoothness), and devise a model regularization strategy to enforce the neighborhood smoothness of the model itself (i.e., model smoothness). Based on these strategies, we design three regularizers that constrain and improve the training of sequential recommendation models. Extensive experiments on five datasets show that our approach consistently improves the performance of various base models and outperforms other regularization-based training methods.
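To make the label-smoothness idea concrete, the sketch below shows how a one-hot item label can be softened, either uniformly (classic label smoothing) or by redistributing probability mass according to scores from a complementary model. This is an illustrative sketch only; the function name, the `epsilon` parameter, and the softmax-based redistribution are assumptions for exposition, not the paper's actual MVS formulation.

```python
import numpy as np

def smooth_one_hot(label_index, num_items, epsilon=0.1, similarity=None):
    """Soften a one-hot label over the item vocabulary.

    With similarity=None this is uniform label smoothing; passing a
    score vector (e.g., from a complementary model, an assumption here)
    shifts the epsilon mass toward items that model considers plausible.
    """
    target = np.zeros(num_items)
    target[label_index] = 1.0 - epsilon  # keep most mass on the true item
    if similarity is None:
        # Spread epsilon uniformly over all items.
        target += epsilon / num_items
    else:
        # Spread epsilon proportionally to softmax of the scores.
        weights = np.exp(similarity - similarity.max())
        target += epsilon * weights / weights.sum()
    return target
```

For example, `smooth_one_hot(2, 5)` keeps 0.9 of the mass (plus a uniform share) on item 2 and distributes the remaining 0.1 across all five items, so the result still sums to 1 and the true item remains the mode.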