Abstract

Self-attention (SA) mechanisms have been widely used in developing sequential recommendation (SR) methods and have demonstrated state-of-the-art performance. However, in this article, we show that self-attentive SR methods suffer substantially from the over-smoothing issue, in which item embeddings within a sequence become increasingly similar across attention blocks. As widely demonstrated in the literature, this issue can lead to a loss of information in individual items and significantly degrade models’ scalability and performance. To address the over-smoothing issue, we view the items within a sequence as constituting a star graph and develop a method, denoted as MSSG, for SR. Unlike existing self-attentive methods, MSSG introduces an additional internal node that specifically captures the global information within the sequence, and it does not require information propagation among items. This design fundamentally addresses the over-smoothing issue and gives MSSG linear time complexity with respect to the sequence length. We compare MSSG with eleven state-of-the-art baseline methods on six public benchmark datasets. Our experimental results demonstrate that MSSG significantly outperforms the baseline methods, with an improvement of as much as 10.10%. Our analysis shows the superior scalability of MSSG over the state-of-the-art self-attentive methods. Our complexity analysis and runtime performance comparison together show that MSSG is both theoretically and practically more efficient than self-attentive methods. Our analysis of the attention weights learned in SA-based methods indicates that, on sparse recommendation data, modeling dependencies among all item pairs using the SA mechanism yields limited information gain and thus might not benefit recommendation performance. Our source code and data are publicly accessible through GitHub.
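
To make the complexity claim concrete, the sketch below illustrates one possible star-graph layer in PyTorch. It is not the authors' MSSG architecture; the layer name, dimensions, and the specific aggregation and update rules are illustrative assumptions. It only demonstrates the structural idea stated in the abstract: items exchange information exclusively with a single internal (center) node, so a layer costs O(n·d) in the sequence length n rather than the O(n²·d) of pairwise self-attention, and item embeddings are never propagated to one another directly.

```python
# Hypothetical sketch of a star-graph update over a sequence of item embeddings.
# Not the authors' MSSG implementation; names and update rules are assumptions.
import torch
import torch.nn as nn


class StarGraphLayer(nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        self.to_center = nn.Linear(d_model, d_model)      # item -> center messages
        self.to_items = nn.Linear(d_model, d_model)       # center -> item message
        self.item_update = nn.Linear(2 * d_model, d_model)

    def forward(self, items: torch.Tensor, center: torch.Tensor):
        # items: (batch, seq_len, d_model); center: (batch, d_model)
        # 1) The internal node aggregates global sequence information: O(n) over items.
        center = center + self.to_center(items).mean(dim=1)
        # 2) Each item is updated from its own state and the center only;
        #    there is no item-to-item propagation, so items are not smoothed together.
        msg = self.to_items(center).unsqueeze(1).expand_as(items)
        items = torch.relu(self.item_update(torch.cat([items, msg], dim=-1)))
        return items, center


# Usage: a batch of 2 sequences, 5 items each, with 16-dimensional embeddings.
layer = StarGraphLayer(d_model=16)
items = torch.randn(2, 5, 16)
center = items.mean(dim=1)  # one simple way to initialize the internal node
items, center = layer(items, center)
print(items.shape, center.shape)  # torch.Size([2, 5, 16]) torch.Size([2, 16])
```

Stacking such layers keeps the per-layer cost linear in the sequence length, in contrast to the quadratic pairwise interactions of standard self-attention blocks.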
