Microscopic traffic flow data are an important input to virtual test scenarios for autonomous driving, yet they are often difficult to obtain in the quantities needed for batch testing. This paper proposes a neural network for generating microscopic traffic flow scene fragments: a Deep Convolutional Generative Adversarial Network (DCGAN) whose discriminator is augmented with Gated Recurrent Units (GRU) so that it can better discriminate continuous sequential data. The generated data are then evaluated at two scales: at the microscopic scale, individual vehicle motion trajectories are compared using Grey Relational Analysis (GRA) and the Dynamic Time Warping (DTW) algorithm; at the macroscopic scale, the generated scenes are assessed as a whole using averaged statistics. The results show that, under these evaluation metrics, the proposed method generates realistic microscopic traffic flow data and outperforms the original DCGAN in producing near-realistic data.
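The abstract cites Dynamic Time Warping as one of the trajectory-similarity metrics. As a rough illustration of how such a comparison works, a minimal textbook DTW implementation is sketched below; this is not the authors' code, and the function name and 1-D trajectory representation are assumptions for the example.

```python
import numpy as np

def dtw_distance(a, b):
    """Textbook DTW between two 1-D trajectories (e.g. vehicle positions
    over time), using absolute difference as the local cost.
    Returns the minimal accumulated alignment cost."""
    n, m = len(a), len(b)
    # D[i, j] = minimal cost of aligning a[:i] with b[:j]
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of: insertion, deletion, or match step.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

A lower DTW distance between a generated trajectory and a real one indicates a closer match in shape, even when the two trajectories are locally shifted in time.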