Abstract

Accurate transportation mode identification is essential for traffic management and travel planning. The rapid development of GPS-enabled devices has made it both popular and cost-effective to infer travel modes from massive GPS trajectory datasets. Because different transportation modes exhibit markedly different spatial characteristics, multi-scale representations can capture these differences efficiently, yet existing research has failed to fully exploit this. To address this issue, we propose a novel deep learning framework named the Multi-Attribute-Scale-Object-based Multi-Stage Fusion Network (MASO-MSF). A MASO structure is constructed to represent the local motion states and spatial characteristics of a GPS trajectory segment at different spatial scales. A multi-stage fusion model (MSF) is then designed to perform accurate transportation mode identification in an end-to-end manner. Within this model, an attribute channel fusion module fuses local motion states and captures spatial dependencies; a scale feature fusion module selectively aggregates features from different spatial scales; and, finally, an object decision fusion module generates the final identification result from the probabilities of the multiple objects in a trajectory segment. To evaluate the performance of the proposed MASO-MSF model, a series of experiments is conducted on a publicly available dataset. The experimental results verify the effectiveness of MASO in characterizing a trajectory segment from three perspectives (i.e., attribute, scale, and object) and demonstrate that MASO-MSF achieves state-of-the-art identification performance. The source code of MASO-MSF is available on GitHub (https://github.com/MYL23/MASO-MSF).
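To make the three fusion stages concrete, the following minimal PyTorch sketch shows one plausible arrangement of attribute channel fusion, scale feature fusion, and object decision fusion. All tensor shapes, layer choices, and names (MSF, num_attrs, hidden, etc.) are illustrative assumptions, not the released implementation; the authors' actual code is available at the GitHub link above.

import torch
import torch.nn as nn

class MSF(nn.Module):
    """Minimal sketch of the three fusion stages described above.

    All layer sizes and module internals are assumptions for
    illustration; the released implementation may differ.
    """

    def __init__(self, num_attrs=4, num_classes=5, hidden=64):
        super().__init__()
        # Stage 1: attribute channel fusion -- fuse the per-attribute
        # channels of a MASO and capture spatial dependencies.
        self.attr_fusion = nn.Sequential(
            nn.Conv2d(num_attrs, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Stage 2: scale feature fusion -- attention weights used to
        # selectively aggregate features from different spatial scales.
        self.scale_attn = nn.Linear(hidden, 1)
        # Shared classification head applied to every object.
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, x):
        # x: (batch, objects, scales, attrs, H, W), one MASO per object.
        b, o, s, a, h, w = x.shape
        feats = self.attr_fusion(x.reshape(b * o * s, a, h, w)).flatten(1)
        feats = feats.view(b, o, s, -1)                   # (b, o, s, hidden)
        # Stage 2: attention-weighted aggregation over scales.
        attn = torch.softmax(self.scale_attn(feats), dim=2)
        fused = (attn * feats).sum(dim=2)                 # (b, o, hidden)
        # Stage 3: object decision fusion -- average the per-object class
        # probabilities into one prediction for the whole segment.
        probs = torch.softmax(self.classifier(fused), dim=-1)
        return probs.mean(dim=1)                          # (b, num_classes)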
