Abstract

Recent research has shown that self-attention mechanisms improve Sequential Recommender Systems (SRS) by capturing sequential associations among interactions. Nevertheless, existing work still faces two critical limitations. First, users' behavior in the original sequences carries various preference signals that are implicit and noisy, and therefore fails to fully reflect users' intentions; modeling all interactions indiscriminately thus degrades the representation of their true intentions. Second, most models operate on single-scale interaction sequences and ignore the multi-scale feature relationships within them. To address these limitations, this paper proposes MFTSRec (Multi-scale Filter Enhanced Transformer Sequential Recommender), which weakens interactions irrelevant to users' intentions in their implicit feedback and adaptively focuses on users' multi-scale intentions. Extensive experiments on four benchmark datasets demonstrate the effectiveness and robustness of MFTSRec compared to state-of-the-art models.
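The abstract does not spell out the architecture, but the name suggests a learnable filter applied to interaction sequences at multiple scales. As a rough illustrative sketch only (all function names, shapes, and the pooling scheme are assumptions, not the authors' implementation), a learnable frequency-domain filter can attenuate noisy interactions, and applying it to pooled versions of the sequence captures multiple scales:

```python
import numpy as np

def filter_layer(x, w_real, w_imag):
    """Learnable frequency-domain filter over a (seq_len, dim) sequence.

    Transform to the frequency domain, reweight each frequency with a
    learnable complex filter, and transform back; this can suppress
    frequencies carrying noisy, intention-irrelevant interactions.
    """
    X = np.fft.rfft(x, axis=0)               # real FFT along the sequence axis
    X = X * (w_real + 1j * w_imag)           # elementwise learnable filter
    return np.fft.irfft(X, n=x.shape[0], axis=0)

def multi_scale_filter(x, filters, scales=(1, 2, 4)):
    """Filter the sequence at several temporal scales and merge the results.

    For each scale s, average-pool the sequence by a factor of s, apply the
    scale's filter, upsample back by repetition, and average the branches.
    (A hypothetical merging scheme chosen for brevity.)
    """
    seq_len, dim = x.shape
    out = np.zeros_like(x)
    for s, (w_real, w_imag) in zip(scales, filters):
        usable = seq_len - seq_len % s
        pooled = x[:usable].reshape(-1, s, dim).mean(axis=1)
        filtered = filter_layer(pooled, w_real, w_imag)
        out[:usable] += np.repeat(filtered, s, axis=0)
    return out / len(scales)
```

With identity filters (real part 1, imaginary part 0) each branch reproduces its pooled input, so the sketch can be sanity-checked before the filter weights are learned end-to-end with the Transformer layers.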

