Abstract
Inferring the transportation modes of travelers is an essential part of intelligent transportation systems. With the development of mobile services, large volumes of location readings can now be collected from travelers' GPS-enabled smart devices, such as smartphones, making it convenient to study human activities. How to automatically infer transportation modes from these massive readings has therefore come into the spotlight. Existing methods for transportation mode identification are usually based on supervised learning. However, raw GPS readings do not contain any labels, and annotating enough samples to train supervised models is expensive and time-consuming. In addition, little attention has been paid to the fact that GPS readings collected in urban areas are affected by the surrounding geographic context (e.g., the level of nearby roads or the distribution of stations). To address these problems, this paper proposes GeoSDVA, a geographic-information-fused semi-supervised method for transportation mode identification based on a Dirichlet variational autoencoder. GeoSDVA first fuses the motion features of the GPS trajectories with the nearby geographic information. Then, both labeled and unlabeled trajectories are used to train the semi-supervised model, built on the Dirichlet variational autoencoder architecture, for transportation mode identification. Experiments on three real GPS trajectory datasets demonstrate that GeoSDVA can train an excellent transportation mode identification model with only a few labeled trajectories.
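The feature-fusion step described above can be illustrated with a minimal sketch: derive point-wise motion features (speed, acceleration) from raw GPS readings and concatenate them with a vector of nearby geographic attributes. All function names and the geographic encoding here are hypothetical; the abstract does not specify the paper's actual feature construction.

```python
import numpy as np

EARTH_RADIUS_M = 6_371_000.0  # mean Earth radius in metres

def haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between points given in degrees."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dphi = p2 - p1
    dlmb = np.radians(lon2 - lon1)
    a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * np.arcsin(np.sqrt(a))

def fuse_features(traj, geo_vec):
    """Fuse motion features with a geographic-context vector (hypothetical).

    traj    : (n, 3) array of (timestamp_s, latitude, longitude)
    geo_vec : fixed-length vector encoding nearby geographic information
              (e.g., road level, station density) -- assumed precomputed.
    Returns an (n-2, 2 + len(geo_vec)) matrix: [speed, accel, geo...].
    """
    t, lat, lon = traj[:, 0], traj[:, 1], traj[:, 2]
    dt = np.diff(t)
    dist = haversine(lat[:-1], lon[:-1], lat[1:], lon[1:])
    speed = dist / dt                     # n-1 point-to-point speeds
    accel = np.diff(speed) / dt[1:]       # n-2 accelerations
    # Drop the first segment so speed and acceleration align per point.
    motion = np.stack([speed[1:], accel], axis=1)
    geo = np.tile(geo_vec, (motion.shape[0], 1))
    return np.concatenate([motion, geo], axis=1)
```

In a semi-supervised setup, the fused matrix would feed both the labeled and unlabeled branches of the model; concatenation is only one simple fusion choice among several (e.g., learned embeddings).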