Abstract

Sequential recommendation aims to predict future user interactions by analyzing dynamic patterns in users' historical behavior sequences. Deep neural networks have recently become popular for learning representations of these sequences in the time domain. However, representing user intentions in the time domain faces challenges such as noisy interactions and data sparsity. Contrastive learning and representation learning in the frequency domain can mitigate these issues from different perspectives. In this paper, to fully integrate time-domain sequence representations, frequency-domain sequence representations, and contrastive learning built on both, we propose Time–Frequency Consistency based contrastive learning for Sequential Recommendation (TFCSRec). TFCSRec uses a time-domain encoder composed of a fully connected network and a filter network to extract high-order features and capture pure sequential patterns. A learnable frequency-domain encoder with a recurrent neural network is then designed to capture sequential characteristics in the frequency-domain space. Finally, TFCSRec combines a recommendation task with two contrastive learning tasks to optimize the two user-representation encoders. Its contrastive learning minimizes a contrastive regularization loss and a time–frequency consistency loss, the latter constructed, for the first time, directly on the time-domain and frequency-domain sequence representations. Experiments on five benchmark datasets show that TFCSRec outperforms other deep-neural-network-based sequential recommendation models.
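
To make the time–frequency consistency idea concrete, the following is a minimal PyTorch-style sketch, not the authors' implementation: it aligns a time-domain user representation with a frequency-domain one using an InfoNCE-style objective. The module names, projection heads, temperature, and the simple FFT-based frequency view are illustrative assumptions; the paper's learnable frequency-domain encoder with a recurrent network would replace the latter.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TimeFrequencyConsistency(nn.Module):
    """Sketch of a time-frequency consistency loss (illustrative, not the paper's code)."""

    def __init__(self, hidden_dim: int = 64, temperature: float = 0.1):
        super().__init__()
        self.temperature = temperature
        # Hypothetical projection heads mapping both views into a shared space.
        self.time_proj = nn.Linear(hidden_dim, hidden_dim)
        self.freq_proj = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, h_time: torch.Tensor, h_freq: torch.Tensor) -> torch.Tensor:
        # h_time, h_freq: (batch, hidden_dim) user representations produced by
        # the time-domain and frequency-domain encoders, respectively.
        z_t = F.normalize(self.time_proj(h_time), dim=-1)
        z_f = F.normalize(self.freq_proj(h_freq), dim=-1)
        # InfoNCE-style objective: each user's time-domain view should be
        # closest to its own frequency-domain view within the batch.
        logits = z_t @ z_f.t() / self.temperature          # (batch, batch)
        targets = torch.arange(z_t.size(0), device=z_t.device)
        return 0.5 * (F.cross_entropy(logits, targets) +
                      F.cross_entropy(logits.t(), targets))


def frequency_view(item_embeddings: torch.Tensor) -> torch.Tensor:
    """Toy frequency-domain view: real FFT over the sequence axis, magnitude-pooled."""
    # item_embeddings: (batch, seq_len, hidden_dim)
    spectrum = torch.fft.rfft(item_embeddings, dim=1)      # complex spectrum
    return spectrum.abs().mean(dim=1)                      # (batch, hidden_dim)

In training, this loss would be added to the recommendation loss and the contrastive regularization loss described above, with the two encoders optimized jointly.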
