Abstract

Real-time, accurate, and comprehensive traffic flow data is key to enabling intelligent transportation systems to provide efficient services for urban transportation. During data collection, many factors cause data loss, so missing values must be imputed to reduce instability and improve the precision of applications in the intelligent transportation system. This paper proposes Spatio-Temporal Learnable Bidirectional Attention Generative Adversarial Networks (ST-LBAGAN) for missing traffic data imputation. First, we take external factors, historical observations, incomplete data, and a mask image as the input of the generator, and the discriminator outputs a binary classification that guides the imputation of the missing data. Second, the encoder and decoder of the generator are built on the U-Net architecture. The forward attention map and the reverse attention map of the learnable bidirectional attention correspond to the encoder and the decoder, respectively, so that the spatio-temporal stochastic characteristics of traffic flow are captured effectively. Third, high-level and low-level features in the encoder and decoder are combined through multiple skip connections. Furthermore, a new objective function combining masked reconstruction loss, perceptual loss, discriminative loss, and adversarial loss is optimized to improve the data imputation ability. Finally, our model is evaluated on the Beijing taxi GPS dataset. The experimental results show that it improves on state-of-the-art performance across various standard benchmarks.
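The objective function described above combines four loss terms. As a minimal NumPy sketch, the snippet below illustrates the masked reconstruction term (error computed only over observed entries) and a weighted combination of the four components; the mask convention, function names, and weights here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def masked_reconstruction_loss(x_true, x_imputed, mask):
    """Mean absolute error restricted to observed entries.

    mask: 1 where the traffic reading was observed, 0 where missing.
    (Illustrative sketch; the paper's exact formulation may differ.)
    """
    diff = np.abs(x_true - x_imputed) * mask
    return diff.sum() / max(mask.sum(), 1)

def combined_loss(l_rec, l_perc, l_disc, l_adv,
                  weights=(1.0, 0.05, 0.05, 0.01)):
    """Weighted sum of the four loss terms; weights are hypothetical."""
    return sum(w * l for w, l in zip(weights, (l_rec, l_perc, l_disc, l_adv)))

# Toy 2x3 traffic-flow grid with one missing cell.
x_true = np.array([[1.0, 2.0, 3.0],
                   [4.0, 5.0, 6.0]])
x_imputed = np.array([[1.0, 2.5, 3.0],
                      [4.0, 5.0, 7.0]])
mask = np.array([[1, 1, 1],
                 [1, 1, 0]])  # last cell was missing, so it is ignored

l_rec = masked_reconstruction_loss(x_true, x_imputed, mask)  # -> 0.1
total = combined_loss(l_rec, 0.2, 0.3, 0.4)  # other terms are dummy values
```

In a full training loop the perceptual, discriminative, and adversarial terms would come from network outputs rather than constants; the point of the sketch is only the masking and the weighted aggregation.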
