Hierarchical long and short-term preference modeling with denoising Mamba for sequential recommendation

Similar Papers
  • Book Chapter
  • Citations: 3
  • 10.1007/978-3-030-59413-8_20
Long- and Short-Term Preference Model Based on Graph Embedding for Sequential Recommendation
  • Jan 1, 2020
  • Yu Liu + 6 more

Sequential recommendation obtains the user preference mainly by analyzing transactional behavior patterns in order to recommend the next item, so mining a user's real preference from sequential behavior is crucial, and accurately identifying the user's long-term and short-term preferences is the key to this problem. Existing models mainly consider either the user's short-term preference or long-term preference, or the relationships between items within one session, ignoring the complex item relationships across different sessions; as a result, they may not adequately reflect the user's preference. To this end, this paper proposes a Long- and Short-Term Preference Network (LSPN) based on graph embedding for sequential recommendation. Specifically, item embeddings capturing the complex relationships of items across different sessions are obtained via graph embedding. The paper then constructs a network to learn the user's long- and short-term preferences separately, combining them through a fuzzy gate mechanism to produce the user's final preference. Furthermore, experiments on two datasets demonstrate the effectiveness of the model in Recall@N and MRR@N.
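The gated fusion described above can be sketched as a learned convex combination of the two preference vectors. This is a minimal illustration, not LSPN's actual formulation: the weight matrix `W`, bias `b`, and the 3-dimensional toy vectors are hypothetical stand-ins.

```python
import numpy as np

def fuzzy_gate(long_pref, short_pref, W, b):
    """Blend long- and short-term preference vectors with a learned gate.

    The gate g is a sigmoid over a linear map of the concatenated
    preferences, so the output is an elementwise convex combination.
    """
    x = np.concatenate([long_pref, short_pref])
    g = 1.0 / (1.0 + np.exp(-(W @ x + b)))        # gate values in (0, 1)
    return g * long_pref + (1.0 - g) * short_pref

# Toy example in a 3-dimensional preference space.
rng = np.random.default_rng(0)
long_p = rng.normal(size=3)
short_p = rng.normal(size=3)
W = rng.normal(size=(3, 6))   # hypothetical learned weights
b = np.zeros(3)
final_p = fuzzy_gate(long_p, short_p, W, b)
```

Because the gate is elementwise in (0, 1), the fused preference always lies between the long- and short-term vectors in every coordinate.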

  • Book Chapter
  • Citations: 1
  • 10.1007/978-3-031-22677-9_24
Heterogeneous Graph Based Long- And Short-Term Preference Learning Model for Next POI Recommendation
  • Jan 1, 2023
  • Shiyang Zhou + 3 more

Next POI recommendation, which aims to recommend venues that people are likely to be interested in, has become a popular service provided by location-based social networks such as Foursquare and Gowalla. Many existing methods attempt to improve recommendation accuracy by modeling people's long- and short-term preferences. However, these methods learn users' preferences only from their own historical check-in records, which leads to poor recommendation performance on sparse datasets. To this end, we propose a novel approach named long- and short-term preference learning model based on heterogeneous graph convolution network and attention mechanism (LSPHGA) for next POI recommendation. Specifically, we design a heterogeneous graph convolution network to learn the higher-order structural relations between users, POIs, and categories and obtain the long-term preferences of users. As for the short-term preference, we encode the recent check-in records of users through a self-attention mechanism and aggregate the short-term preference by spatio-temporal attention. Finally, the long- and short-term preferences are linearly combined into a unified preference with personalized weights for different users. Extensive experiments on two real-world datasets consistently validate the effectiveness of the proposed method for improving recommendation.

Keywords: POI recommendation; Long- and short-term preference; Graph neural network; Attention mechanism; Spatio-temporal context

  • Conference Article
  • Citations: 141
  • 10.24963/ijcai.2019/585
Adaptive User Modeling with Long and Short-Term Preferences for Personalized Recommendation
  • Jul 30, 2019
  • Zeping Yu + 4 more

User modeling is an essential task for online recommender systems. In the past few decades, collaborative filtering (CF) techniques have been well studied to model users' long-term preferences. Recently, recurrent neural networks (RNNs) have shown a great advantage in modeling users' short-term preferences. A natural way to improve the recommender is to combine both long-term and short-term modeling. Previous approaches neglect the importance of dynamically integrating these two user modeling paradigms. Moreover, users' behaviors are much more complex than sentences in language modeling or images in visual computing, so classical RNN structures such as Long Short-Term Memory (LSTM) need to be upgraded for better user modeling. In this paper, we improve the traditional RNN structure by proposing a time-aware controller and a content-aware controller, so that contextual information can be well considered to control the state transition. We further propose an attention-based framework to combine users' long-term and short-term preferences, so that users' representations can be generated adaptively according to the specific context. We conduct extensive experiments on both public and industrial datasets. The results demonstrate that our proposed method consistently outperforms several state-of-the-art methods.
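The attention-based combination of long- and short-term representations can be sketched roughly as follows. The projection `Wq`, the stacked preference matrix, and the context vector below are illustrative assumptions, not the paper's actual parameterization:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attentive_combine(prefs, context, Wq):
    """Score each preference vector against a context vector, then return
    the attention-weighted sum as the adaptive user representation.

    prefs:   (n, d) stacked preference vectors (e.g. long- and short-term)
    context: (d,)   context embedding
    Wq:      (d, d) hypothetical learned projection
    """
    scores = prefs @ (Wq @ context)   # one relevance score per preference
    alpha = softmax(scores)           # normalized attention weights
    return alpha @ prefs              # (d,) combined representation

# Toy demo: the context is aligned with the first preference vector,
# so the combined representation leans toward it.
prefs = np.array([[1.0, 0.0], [0.0, 1.0]])
rep = attentive_combine(prefs, np.array([1.0, 0.0]), np.eye(2))
```

The softmax makes the mixing weights depend on the current context, which is what lets the representation adapt per interaction rather than using a fixed blend.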

  • Book Chapter
  • Citations: 3
  • 10.1007/978-3-030-36808-1_26
LSPM: Joint Deep Modeling of Long-Term Preference and Short-Term Preference for Recommendation
  • Jan 1, 2019
  • Jie Chen + 5 more

In the era of information, recommender systems are playing an indispensable role in our lives. Many deep learning based recommender systems have been developed and have shown good progress. However, users' decisions are determined by both long-term and short-term preferences, and most existing efforts study these two requirements separately. In this paper, we seek to build a bridge between long-term and short-term preferences. We propose a Long & Short-term Preference Model (LSPM), which incorporates an LSTM and a self-attention mechanism to learn the short-term preference and jointly models the long-term preference with a neural latent factor model. We conduct experiments to demonstrate the effectiveness of LSPM on three public datasets. Compared with the state-of-the-art methods, LSPM achieves significant improvements in HR@10 and NDCG@10, with relative gains of 3.875% and 6.363%, respectively. We publish our code at https://github.com/chenjie04/LSPM/.
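For reference, HR@K and NDCG@K under the common single-held-out-item evaluation protocol can be computed as below. This is a generic sketch, not code from the LSPM repository; `rank` is the zero-based position of the held-out item in the ranked candidate list:

```python
import numpy as np

def hr_at_k(rank, k=10):
    """Hit Ratio@K: 1 if the held-out item appears in the top-k, else 0."""
    return 1.0 if rank < k else 0.0

def ndcg_at_k(rank, k=10):
    """NDCG@K with a single relevant item: discounted gain 1/log2(rank+2)
    when the item is inside the top-k, else 0. Equals 1.0 at rank 0."""
    return 1.0 / np.log2(rank + 2) if rank < k else 0.0
```

Averaging these per-user values over the test set gives the reported HR@10 and NDCG@10 numbers.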

  • Research Article
  • Citations: 266
  • 10.1609/aaai.v34i01.5353
Where to Go Next: Modeling Long- and Short-Term User Preferences for Point-of-Interest Recommendation
  • Apr 3, 2020
  • Proceedings of the AAAI Conference on Artificial Intelligence
  • Ke Sun + 5 more

Point-of-Interest (POI) recommendation has been a trending research topic as it generates personalized suggestions on facilities for users from a large number of candidate venues. Since users' check-in records can be viewed as a long sequence, methods based on recurrent neural networks (RNNs) have recently shown promising applicability for this task. However, existing RNN-based methods either neglect users' long-term preferences or overlook the geographical relations among recently visited POIs when modeling users' short-term preferences, thus making the recommendation results unreliable. To address the above limitations, we propose a novel method named Long- and Short-Term Preference Modeling (LSTPM) for next-POI recommendation. In particular, the proposed model consists of a nonlocal network for long-term preference modeling and a geo-dilated RNN for short-term preference learning. Extensive experiments on two real-world datasets demonstrate that our model yields significant improvements over the state-of-the-art methods.

  • Conference Article
  • Citations: 5
  • 10.1145/2348283.2348460
Collaborative filtering with short term preferences mining
  • Aug 12, 2012
  • Diyi Yang + 3 more

Recently, recommender systems have fascinated researchers and benefited a variety of people's online activities, helping users cope with the explosion of web information. Traditional collaborative filtering techniques handle general recommendation well; however, most such approaches focus on long-term preferences. To discover more of the short-term factors influencing people's decisions, we propose a short-term preference model built on implicit user feedback. We conduct experiments comparing the performance of different short-term models, which show that our model significantly outperforms the long-term models.

  • Research Article
  • Citations: 81
  • 10.1016/j.ins.2023.01.131
GNN-based long and short term preference modeling for next-location prediction
  • Feb 1, 2023
  • Information Sciences
  • Jinbo Liu + 4 more

  • Conference Article
  • Citations: 1
  • 10.1109/cisai54367.2021.00014
Long and short-term neural network news recommendation model based on self-attention mechanism
  • Sep 1, 2021
  • Xiujin Shi + 2 more

Information overload has become a huge barrier for people to effectively obtain information while enjoying the convenience of the big data era. A personalized news recommendation system can effectively help news platforms find articles that best suit users' preferences from a large amount of news and enhance user experience. Existing news recommendation systems usually process users' preferences in a unified manner, ignoring the difference between long-term and short-term preferences. In response to this problem, this paper studies a long and short-term memory model based on the GRU (Gated Recurrent Unit). The paper uses the LFM (Latent Factor Model) to extract users' long-term preferences and the GRU to obtain users' short-term preferences from browsing history. To address short-term shifts in user interest, the paper uses a self-attention mechanism based on time intervals to characterize the degree of this shift. Experiments on two real-world datasets show our approach can effectively improve the performance of news recommendation.

  • Research Article
  • Citations: 13
  • 10.3390/ijgi11060323
Long- and Short-Term Preference Modeling Based on Multi-Level Attention for Next POI Recommendation
  • May 26, 2022
  • ISPRS International Journal of Geo-Information
  • Xueying Wang + 4 more

The next point-of-interest (POI) recommendation is one of the most essential applications in location-based social networks (LBSNs). Its main goal is to study the sequential patterns of user check-in activities and then predict a user's next destination. However, most previous studies have failed to make full use of spatio-temporal information to analyze the periodic regularity of user check-ins, and some studies omit the user's transition preference for categories at the POI semantic level. Both are important for analyzing the user's check-in preferences. Long- and short-term preference modeling based on multi-level attention (LSMA) is put forward to solve the above problems and enhance the accuracy of next POI recommendation. It captures the user's long-term and short-term preferences separately and considers the multi-faceted use of spatio-temporal information; in particular, it can analyze the periodic habits contained in a user's check-ins. Moreover, a multi-level attention mechanism is designed to study the multi-factor dynamic representation of user check-in behavior and the non-linear dependence between user check-ins, which allows a user's check-in interest to be explored comprehensively and from multiple angles. We also study the user's category transition preference at a coarse-grained semantic level to help construct the user's long-term and short-term preferences. Finally, experiments were carried out on two real-world datasets; the findings showed that LSMA outperformed state-of-the-art recommendation systems.

  • Conference Article
  • 10.1117/12.2646912
Sequential recommendation method integrating item relationship and user preference
  • Sep 7, 2022
  • Zhaoju Zeng + 4 more

Sequential recommendation aims to recommend items that may interest users based on their behavior sequence information. However, most current sequential recommendation methods cannot adequately model users' long-term preferences and short-term intentions when mining user preferences. In order to model and integrate the user's long-term and short-term preferences effectively, a new sequential recommendation method that integrates item relationships and user preferences, named IRUP, is proposed. Firstly, a Graph Convolution Network (GCN) and an item-relation-level attention mechanism are used to model users' long-term and short-term preferences, respectively; secondly, a co-attention mechanism learns the cross-correlation information between long-term and short-term preferences, which enhances the user's preference representation; finally, the long-term and short-term preferences are fused, and the inner product is used to compute the recommendation list. Experimental results on two public datasets, Beauty and Home, show that the proposed IRUP method can effectively improve recommendation performance. Compared with the best baseline method, KDA, the two evaluation metrics HR@5 and NDCG@5 improve by an average of 8.28% and 7.03%, respectively.
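The final scoring step described above, inner products between the fused user preference and the item embeddings followed by ranking, can be sketched as follows; the embeddings here are toy values, not IRUP's learned parameters:

```python
import numpy as np

def recommend_top_k(user_pref, item_embeddings, k=5):
    """Score every candidate item by inner product with the fused user
    preference vector and return indices of the k highest-scoring items."""
    scores = item_embeddings @ user_pref      # (n_items,) relevance scores
    return np.argsort(-scores)[:k]            # item indices, best first

# Toy catalog of three 2-dimensional item embeddings.
items = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
top2 = recommend_top_k(np.array([1.0, 0.0]), items, k=2)
```

With the user preference aligned to the first axis, the first and third items score highest, so they form the length-2 recommendation list.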

  • Conference Article
  • Citations: 31
  • 10.1145/3308558.3313603
Hierarchical Neural Variational Model for Personalized Sequential Recommendation
  • May 13, 2019
  • Teng Xiao + 2 more

In this paper, we study the problem of recommending personalized items to users given their sequential behaviors. Most sequential recommendation models only capture a user's short-term preference in a short session and neglect their general (unchanged over time) and long-term preferences. Besides, these models are all based on deterministic neural networks and treat users' latent preferences as point vectors in a low-dimensional continuous space. However, in the real world, the evolution of users' preferences is full of uncertainty. We address this problem by proposing a hierarchical neural variational model (HNVM). HNVM models users' three preferences: general, long-term, and short-term, through a unified hierarchical deep generative process. HNVM is a hierarchical recurrent neural network, which enables it to capture both a user's long-term and short-term preferences. Experiments on two public datasets demonstrate that HNVM outperforms state-of-the-art sequential recommendation methods.

  • Research Article
  • 10.1371/journal.pone.0270182
COVID-19 infected cases in Canada: Short-term forecasting models.
  • Sep 22, 2022
  • PLOS ONE
  • Mo’Tamad H Bata + 4 more

Governments have implemented different interventions and response models to combat the spread of COVID-19. The necessary intensity and frequency of control measures require us to project the number of infected cases. Three short-term forecasting models were proposed to predict the total number of infected cases in Canada for a number of days ahead. The proposed models were evaluated on how their performance degrades with an increased forecast horizon and improves with more historical data by which to estimate them. For the data analyzed, our results show that 7 to 10 weeks of historical data points are enough to produce good fits for a two-week predictive model of infected case numbers with an NRMSE of 1% to 2%. The preferred model is an important quick-deployment tool to support data-informed short-term pandemic-related decision-making at all levels of governance.
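The NRMSE figure quoted above is commonly computed as the RMSE normalized by the range of the observed values; other normalizations (e.g. by the mean) also exist, and the paper's exact choice is not stated here. A minimal sketch:

```python
import numpy as np

def nrmse(actual, predicted):
    """Root-mean-square error normalized by the range of the actual
    values, returned as a fraction (multiply by 100 for a percentage)."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rmse = np.sqrt(np.mean((actual - predicted) ** 2))
    return rmse / (actual.max() - actual.min())
```

For example, predictions off by one unit on a series spanning 0 to 10 give an NRMSE of 0.1, i.e. 10%.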

  • Research Article
  • Citations: 10
  • 10.1016/j.engappai.2021.104348
Improving current interest with item and review sequential patterns for sequential recommendation
  • Jun 11, 2021
  • Engineering Applications of Artificial Intelligence
  • Jinjin Zhang + 4 more

  • Conference Article
  • Citations: 11
  • 10.1145/3357384.3357901
Dynamic Collaborative Recurrent Learning
  • Nov 3, 2019
  • Teng Xiao + 2 more

In this paper, we provide a unified learning algorithm, dynamic collaborative recurrent learning (DCRL), for two directions of recommendation: temporal recommendation, which focuses on tracking the evolution of users' long-term preferences, and sequential recommendation, which focuses on capturing short-term preferences within a short time window. DCRL builds on RNNs and the State Space Model (SSM), and thus it is not only able to collaboratively capture users' short-term and long-term preferences as in sequential recommendation, but can also dynamically track the evolution of users' long-term preferences as in temporal recommendation, within a unified framework. In addition, we introduce two scalable smoothing and filtering inference algorithms for DCRL's offline and online learning, respectively, based on amortized variational inference, allowing us to effectively train the model jointly over all time. Experiments demonstrate that DCRL outperforms temporal and sequential recommender models, and does capture users' short-term preferences and track the evolution of long-term preferences.

  • Conference Article
  • Citations: 41
  • 10.1145/3459637.3482136
Locker: Locally Constrained Self-Attentive Sequential Recommendation
  • Oct 26, 2021
  • Zhankui He + 5 more

Recently, self-attentive models have shown promise in sequential recommendation, given their potential to capture users' long-term preferences and short-term dynamics simultaneously. Despite their success, we argue that self-attention modules, as non-local operators, often fail to capture short-term user dynamics accurately due to a lack of inductive local bias. To examine our hypothesis, we conduct an analytical experiment on controlled 'short-term' scenarios. We observe a significant performance gap between self-attentive recommenders with and without local constraints, which implies that short-term user dynamics are not sufficiently learned by existing self-attentive recommenders. Motivated by this observation, we propose a simple framework, Locker, for self-attentive recommenders in a plug-and-play fashion. By combining the proposed local encoders with existing global attention heads, Locker enhances short-term user dynamics modeling while retaining the long-term semantics captured by standard self-attentive encoders. We investigate Locker with five different local methods, outperforming state-of-the-art self-attentive recommenders on three datasets by 17.19% (NDCG@20) on average.
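One minimal way to express the local-constraint idea is a band mask that limits each position's attention to a fixed window of neighbors. This is a generic sketch of the inductive local bias the paper discusses, not any of Locker's five concrete local methods:

```python
import numpy as np

def local_attention_mask(seq_len, window):
    """Boolean (seq_len, seq_len) mask: position i may attend to j only
    when |i - j| <= window, injecting a local inductive bias into
    otherwise global self-attention."""
    idx = np.arange(seq_len)
    return np.abs(idx[:, None] - idx[None, :]) <= window

# A length-5 sequence where each position sees only immediate neighbors.
mask = local_attention_mask(5, window=1)
```

Applying such a mask to a subset of attention heads keeps the remaining global heads free to model long-term semantics, which mirrors the hybrid local/global design described above.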
