Abstract

Visual recognition systems are now widely deployed in autonomous driving. A core factor limiting the visual understanding of complex urban traffic scenes is the lack of fully featured benchmarks that mimic the scenarios faced by autonomous-driving systems; however, establishing a dataset that adequately captures the complexity of real-world urban traffic consumes considerable time and effort. To address these difficulties, the authors use virtual reality to develop a large-scale dataset for training and testing approaches for autonomous vehicles. Using the labels of objects in the virtual scenes, they compute the coordinate transformation that projects each 3D object onto the 2D image plane, which makes the label of the pixel block corresponding to that object directly accessible. Their recording platform is equipped with video camera models, a LiDAR model, and a positioning system. Using a pilot-in-the-loop method with driving-simulator hardware and VR devices, the authors acquire a large, diverse dataset comprising stereo video sequences recorded on streets and mountain roads in several different environments. Their method of using VR technology significantly reduces the cost of acquiring training data. Crucially, their effort exceeds previous attempts in terms of dataset size, scene variability, and complexity.
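The abstract does not give the exact form of the 3D-to-2D transformation used to transfer labels. A minimal sketch, assuming a standard pinhole camera model with intrinsics `K` and world-to-camera extrinsics `(R, t)` exported from the virtual scene (all names here are illustrative, not from the paper), could look like this:

```python
import numpy as np

def project_to_image(points_world, R, t, K):
    """Project 3D world points onto the 2D image plane (pinhole model).

    points_world: (N, 3) array of 3D points in world coordinates
    R: (3, 3) rotation matrix, world -> camera
    t: (3,) translation vector, world -> camera
    K: (3, 3) camera intrinsic matrix
    Returns an (N, 2) array of pixel coordinates.
    """
    # Transform into the camera frame: X_cam = R @ X_world + t
    points_cam = points_world @ R.T + t
    # Apply intrinsics, then perform perspective division by depth (z)
    pixels_h = points_cam @ K.T
    return pixels_h[:, :2] / pixels_h[:, 2:3]

# Hypothetical usage: project the 8 corners of a virtual object's 3D box
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 5.0])  # object 5 m in front of camera
box = np.array([[x, y, z] for x in (-1, 1)
                          for y in (-1, 1)
                          for z in (-1, 1)], dtype=float)
corners_2d = project_to_image(box, R, t, K)
# The pixel block bounded by these projected corners can then inherit
# the object's semantic label from the virtual scene.
```

Because the simulator knows every object's pose and identity exactly, this projection yields per-pixel labels for free, which is the key cost advantage of synthetic data that the abstract describes.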
