Abstract

Accurate multi-step citywide urban flow prediction plays a critical role in traffic management and in building future smart cities. However, it is very challenging, since urban flow is affected by complex semantic factors and exhibits multi-scale dependencies in both the spatial and temporal dimensions. Moreover, most existing one-step urban flow prediction models struggle to predict several future time steps both quickly and accurately. Inspired by the success of Generative Adversarial Networks (GANs) in video prediction and image generation, in this paper we propose a Seq2Seq Spatial-Temporal Semantic Generative Adversarial Network, named STS-GAN, for multi-step urban flow prediction. We regard citywide urban flow data at successive time steps as the image frames of a video. Specifically, we first design a Spatial-Temporal Semantic Encoder (STSE), built from residual convolution units, to capture relevant semantic factors and spatial-temporal dependencies simultaneously at each time step. A Seq2Seq GAN model is then proposed to generate a sequence of future urban flow predictions from historical data. Furthermore, by integrating the GAN's adversarial loss with the prediction error, STS-GAN effectively alleviates the blurry-prediction issue. Extensive experiments conducted on two large-scale urban flow datasets from Beijing and Guangzhou demonstrate that STS-GAN achieves state-of-the-art performance compared with existing methods.

Keywords: Spatial-temporal data mining · Urban flow prediction · Generative Adversarial Networks · Neural network models
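The abstract does not give the exact loss formulation. As a hedged illustration only, a common way to combine an adversarial term with a prediction-error term for a sequence generator is sketched below in PyTorch; the names `lambda_pred`, `pred_seq` (the generator's output), and the criteria chosen are illustrative assumptions, not the paper's actual definitions.

```python
import torch
import torch.nn as nn

# Hedged sketch: combining an adversarial loss with a prediction error,
# as the abstract describes. The discriminator D, the weight lambda_pred,
# and the tensor shapes are assumptions, not the paper's definitions.

bce = nn.BCELoss()   # adversarial criterion (other GAN losses are also common)
mse = nn.MSELoss()   # prediction-error criterion

def generator_loss(D, pred_seq, true_seq, lambda_pred=1.0):
    """Generator objective: fool D while staying close to the ground truth.

    pred_seq, true_seq: (batch, steps, channels, height, width) flow maps.
    """
    d_out = D(pred_seq)                        # D's score for the generated sequence
    adv = bce(d_out, torch.ones_like(d_out))   # push D's output toward "real"
    pred_err = mse(pred_seq, true_seq)         # element-wise prediction error
    return adv + lambda_pred * pred_err

def discriminator_loss(D, pred_seq, true_seq):
    """Discriminator objective: separate real sequences from generated ones."""
    d_real = D(true_seq)
    d_fake = D(pred_seq.detach())              # stop gradients into the generator
    return (bce(d_real, torch.ones_like(d_real))
            + bce(d_fake, torch.zeros_like(d_fake)))
```

In this kind of formulation, the prediction-error term anchors the output to the ground truth while the adversarial term penalizes outputs the discriminator can distinguish from real sequences, which is the usual mechanism by which GAN-based predictors reduce blurring relative to a pure regression loss.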
