Abstract
Recently, deep neural networks have been widely applied in recommender systems because of their effectiveness in capturing and modeling users’ preferences. In particular, the attention mechanism in deep learning enables recommender systems to incorporate various features in an adaptive way. For the next-item recommendation task, we make the following three observations: 1) users’ sequential behavior records aggregate at time positions (“time-aggregation”), 2) users have personalized tastes that are related to the “time-aggregation” phenomenon (“personalized time-aggregation”), and 3) users’ short-term interests play an important role in next-item prediction and recommendation. In this paper, we propose a new Time-aware Long- and Short-term Attention Network (TLSAN) to address these observations. TLSAN consists of two main components. First, TLSAN models “personalized time-aggregation” and learns user-specific temporal tastes via trainable personalized time-position embeddings with category-aware correlations in long-term behaviors. Second, long- and short-term feature-wise attention layers are proposed to effectively capture users’ long- and short-term preferences for accurate recommendation. In particular, the attention mechanism enables TLSAN to utilize users’ preferences in an adaptive way, and its use in the long- and short-term layers enhances TLSAN’s ability to deal with sparse interaction data. Extensive experiments on Amazon datasets from different domains (and of different sizes) show that TLSAN outperforms state-of-the-art baselines in both capturing users’ preferences and performing time-sensitive next-item recommendation.
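To make the two components named above concrete, the sketch below illustrates, under assumed shapes and layer names (it is not the authors’ released code), how a user-specific time-position embedding could be added to long-term behavior embeddings and how a feature-wise (per-dimension) attention could fuse long- and short-term signals.

```python
# Minimal illustrative sketch of the two ideas in the abstract; all module and
# variable names are assumptions, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TimeAwareFeatureWiseAttention(nn.Module):
    def __init__(self, num_users, num_time_positions, dim):
        super().__init__()
        # Hypothetical personalized time-position table: one row per
        # (user, discretized time position) pair, folded into a single index.
        self.time_pos = nn.Embedding(num_users * num_time_positions, dim)
        self.num_time_positions = num_time_positions
        # Scores every feature dimension separately ("feature-wise" attention).
        self.score = nn.Linear(dim, dim)

    def forward(self, user_ids, time_positions, long_term, short_term):
        # long_term:  (batch, seq_len, dim) embeddings of historical behaviors
        # short_term: (batch, dim) embedding summarizing recent behaviors
        idx = user_ids.unsqueeze(1) * self.num_time_positions + time_positions
        behaviors = long_term + self.time_pos(idx)          # personalized time-aggregation
        weights = F.softmax(self.score(behaviors), dim=1)   # (batch, seq_len, dim), per-feature weights
        long_pref = (weights * behaviors).sum(dim=1)        # (batch, dim) long-term preference
        return long_pref + short_term                       # fuse long- and short-term preferences

# Toy usage with made-up sizes.
model = TimeAwareFeatureWiseAttention(num_users=100, num_time_positions=8, dim=16)
u = torch.tensor([3]); t = torch.randint(0, 8, (1, 5))
out = model(u, t, torch.randn(1, 5, 16), torch.randn(1, 16))
print(out.shape)  # torch.Size([1, 16])
```

The per-dimension softmax is what distinguishes feature-wise attention from the usual scalar attention: each embedding dimension gets its own weighting over the behavior sequence, which is one way the abstract’s adaptive incorporation of features could be realized.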