Abstract

Next location prediction is useful for service recommendation, public safety, intelligent transportation, and other location-based applications. Existing location prediction methods usually rely on sparse check-in trajectories and require massive historical data to capture complex spatio-temporal correlations. Trajectories with high spatio-temporal resolution carry rich information, but obtaining personal trajectories that are both long and high-resolution is usually challenging. This paper therefore proposes the two-stage Context-Aware Spatial-Temporal Location Embedding (CASTLE) model, a multi-modal pre-training model for sequence-to-sequence prediction tasks. The method proceeds in two steps. First, large-scale location datasets that are sparse but easier to acquire (i.e., check-in and anonymous navigation data) are used to pre-train location embeddings that capture the multi-functional properties of locations under different contexts. The learned contextual embeddings are then used for downstream location prediction on small-scale but higher-resolution trajectory datasets. Specifically, CASTLE combines Bidirectional and Auto-Regressive Transformers to generate a contextual embedding vector, rather than a fixed vector, for each location. Furthermore, we introduce a location- and time-aware encoder that reflects the spatial distances between locations and their visit times. Experiments on two real trajectory datasets show that CASTLE pre-trains beneficial location embeddings and outperforms the same model without pre-training by 4.6–7.1%. The proposed method is expected to improve next location prediction accuracy without massive historical data, which should greatly broaden the use of trajectory data.
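The abstract gives only a high-level description of the location- and time-aware encoder. As a rough illustration, the following PyTorch sketch shows one plausible way to make a Transformer encoder reflect visit times and spatial distances: each visit is embedded as a location vector plus a time-slot vector, and pairwise distances between visited locations become an additive attention bias. All module names, dimensions, and the distance-decay form here are assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class LocationTimeAwareEncoder(nn.Module):
    """Hypothetical sketch of a location- and time-aware encoder.

    Not the CASTLE implementation: the abstract does not specify details,
    so names, dimensions, and the distance bias are illustrative choices.
    """

    def __init__(self, num_locations: int, d_model: int = 128,
                 num_time_slots: int = 48, num_heads: int = 4):
        super().__init__()
        self.loc_emb = nn.Embedding(num_locations, d_model)
        # Visit time discretized into slots, e.g. 48 half-hour-of-day bins.
        self.time_emb = nn.Embedding(num_time_slots, d_model)
        # Learnable decay: how strongly distance suppresses attention.
        self.dist_scale = nn.Parameter(torch.tensor(1.0))
        layer = nn.TransformerEncoderLayer(d_model, num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.num_heads = num_heads

    def forward(self, loc_ids, time_slots, coords):
        # loc_ids, time_slots: (batch, seq); coords: (batch, seq, 2), e.g. in km.
        x = self.loc_emb(loc_ids) + self.time_emb(time_slots)
        # Pairwise Euclidean distances between visited locations.
        dist = torch.cdist(coords, coords)                    # (batch, seq, seq)
        # Additive attention bias: farther pairs get more negative scores.
        bias = -self.dist_scale * dist
        bias = bias.repeat_interleave(self.num_heads, dim=0)  # (batch*heads, seq, seq)
        return self.encoder(x, mask=bias)

# Toy usage: a batch of 2 trajectories with 5 visits each.
if __name__ == "__main__":
    enc = LocationTimeAwareEncoder(num_locations=1000)
    loc = torch.randint(0, 1000, (2, 5))
    ts = torch.randint(0, 48, (2, 5))
    xy = torch.rand(2, 5, 2) * 10
    print(enc(loc, ts, xy).shape)  # torch.Size([2, 5, 128])
```

An additive distance bias of this kind keeps the encoder architecture standard while letting nearby locations attend to each other more strongly; other choices (e.g., bucketized distance embeddings) would serve the same purpose.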
