Abstract

Driving trajectory representation learning is of great significance for various location-based services such as driving pattern mining and route recommendation. However, previous representation generation approaches rarely address three challenges: (1) how to represent the intricate semantic intentions of mobility inexpensively, (2) how to model the complex yet weak spatial–temporal dependencies caused by the sparsity and heterogeneity of trajectory data, and (3) how to capture route selection preferences and their correlation with driving behaviour. In this study, we propose DouFu, a novel multimodal fusion model for joint trajectory representation learning, which applies a multimodal learning and attention fusion module to capture the internal characteristics of trajectories. We first design movement, route, and global features generated from the trajectory data and urban functional zones, and then encode them with an attention encoder or a fully connected network. The attention fusion module incorporates route features with movement features to create a more effective spatial–temporal embedding. Combined with the global semantic feature, DouFu produces a comprehensive embedding for each trajectory. We evaluate the representations generated by our method and baseline models on classification and clustering tasks. Empirical results show that DouFu outperforms the other models by more than 10% with most learning algorithms, such as linear regression and support vector machines.
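To make the fusion idea concrete, the sketch below shows one plausible way to combine per-point movement features with route features via cross-attention and then append a global semantic feature, as the abstract describes. This is a minimal illustration under our own assumptions (module name, feature dimensions, pooling choice), not the authors' implementation.

```python
# Hypothetical sketch of an attention-fusion step for trajectory embedding.
# All names and dimensions (movement_dim, route_dim, global_dim, fused_dim)
# are assumptions for illustration only, not the DouFu reference code.
import torch
import torch.nn as nn


class AttentionFusion(nn.Module):
    def __init__(self, movement_dim=64, route_dim=64, global_dim=32, fused_dim=64):
        super().__init__()
        # Project both modalities into a shared space before fusion.
        self.movement_proj = nn.Linear(movement_dim, fused_dim)
        self.route_proj = nn.Linear(route_dim, fused_dim)
        # Cross-attention: movement embeddings attend to route embeddings.
        self.cross_attn = nn.MultiheadAttention(
            embed_dim=fused_dim, num_heads=4, batch_first=True
        )
        # The final trajectory embedding combines the fused spatial-temporal
        # representation with the trajectory-level global semantic feature.
        self.out = nn.Linear(fused_dim + global_dim, fused_dim)

    def forward(self, movement_seq, route_seq, global_feat):
        # movement_seq: (batch, T_m, movement_dim) per-point movement features
        # route_seq:    (batch, T_r, route_dim) road-segment / route features
        # global_feat:  (batch, global_dim) global semantic feature
        q = self.movement_proj(movement_seq)
        kv = self.route_proj(route_seq)
        fused, _ = self.cross_attn(q, kv, kv)   # (batch, T_m, fused_dim)
        pooled = fused.mean(dim=1)              # pool over the sequence
        return self.out(torch.cat([pooled, global_feat], dim=-1))


# Usage example with random tensors standing in for real trajectory features.
model = AttentionFusion()
emb = model(torch.randn(8, 50, 64), torch.randn(8, 20, 64), torch.randn(8, 32))
print(emb.shape)  # torch.Size([8, 64]) — one embedding per trajectory
```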
