Abstract

Next point-of-interest (POI) recommendation has become an important and challenging problem due to the complexity of check-in information and the variety of user behavior patterns. Most prior studies applied RNN-based methods to model users' preferences for various POIs. More recently, researchers have integrated long- and short-term interests with some success. However, these approaches fail to capture the influence of long-term preference on short-term preference, and the granularity of their preference modeling is too coarse. To address these limitations, we propose an end-to-end framework named Long- and Short-term Preference Learning with Transformer (LST), which considers a user's preference for various places at both the long-term and short-term levels. Specifically, the multi-head self-attention mechanism of the Transformer is used to extract long-term preference. To learn short-term preference, we exploit the spatial and temporal information of POIs to model two distinct behavior patterns. In addition, our model incorporates long-term preference as background information into short-term preference to strengthen preference modeling. Results from extensive experiments on two Foursquare check-in datasets show that our model outperforms state-of-the-art baselines.
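To make the long-term branch concrete, below is a minimal PyTorch sketch of how multi-head self-attention can summarize a user's full check-in history into a single long-term preference vector. This is an illustration of the general technique the abstract names, not the authors' implementation: the class name LongTermPreferenceEncoder, the embedding size, the head count, and the mean-pooling step are all assumptions chosen for brevity.

```python
import torch
import torch.nn as nn

class LongTermPreferenceEncoder(nn.Module):
    """Illustrative sketch (not the paper's code): multi-head
    self-attention over a user's check-in embedding sequence,
    pooled into one long-term preference vector."""

    def __init__(self, num_pois: int, d_model: int = 64, num_heads: int = 4):
        super().__init__()
        # Learnable embedding for each POI in the vocabulary
        self.poi_embedding = nn.Embedding(num_pois, d_model)
        # Multi-head self-attention, as in the Transformer encoder
        self.self_attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)

    def forward(self, checkin_ids: torch.Tensor) -> torch.Tensor:
        # checkin_ids: (batch, seq_len) indices of historically visited POIs
        x = self.poi_embedding(checkin_ids)      # (batch, seq_len, d_model)
        attn_out, _ = self.self_attn(x, x, x)    # each check-in attends to all others
        # Mean-pool the attended sequence into one preference vector per user
        return attn_out.mean(dim=1)              # (batch, d_model)

# Usage: encode two users, each with five historical check-ins
encoder = LongTermPreferenceEncoder(num_pois=1000)
history = torch.randint(0, 1000, (2, 5))
long_term = encoder(history)                     # shape: (2, 64)
```

In a full model along the lines the abstract describes, this pooled vector would serve as the background signal fused with a separately learned short-term representation; how the paper performs that fusion is not specified here.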
