Abstract

Image datasets play an essential role in training computer vision models. Compared with real-world datasets, which are expensive to label and inflexible in scale and scene variety, virtual datasets are increasingly attractive and have begun to serve as a supplement for training and validating computer vision models. In this paper, we propose a pipeline for constructing artificial traffic scenes and generating virtual datasets with autonomous driving simulation software, from the perspective of parallel vision. First, map data and simulation modeling elements such as buildings are used to build a 3D artificial scene based on a real environment. We then use these artificial traffic scenes to automatically generate a large-scale, diverse dataset with ground-truth labels, which can serve computer vision tasks such as object detection and semantic segmentation. The environmental conditions and sensor properties of the artificial scenes can be controlled flexibly and reproducibly. Finally, we report the details and performance of creating the artificial scenes and generating the virtual dataset, showing low modeling time and highly accurate labels; the pipeline does not depend on manual labeling and produces a diverse dataset automatically and conveniently.
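To make the data-generation step concrete, the sketch below shows one way such a pipeline could capture paired RGB images and pixel-accurate semantic labels while varying environmental conditions. It is a minimal sketch, assuming the open-source CARLA simulator and its Python API as a stand-in for the driving simulation software used in the paper (which is not named here); the host/port, camera resolution, weather values, and output paths are illustrative assumptions rather than details taken from the paper.

```python
# Hypothetical sketch: capture RGB frames plus semantic-segmentation ground truth
# in an artificial traffic scene. Assumes a running CARLA server (not the paper's
# exact tooling); all concrete values below are illustrative.
import time
import carla

client = carla.Client("localhost", 2000)   # assumed simulator endpoint
client.set_timeout(10.0)
world = client.get_world()

# Environmental conditions can be set programmatically and repeated exactly;
# one could loop over several WeatherParameters to diversify the dataset.
weather = carla.WeatherParameters(
    cloudiness=30.0, precipitation=0.0, sun_altitude_angle=70.0)
world.set_weather(weather)

# Spawn an ego vehicle at a predefined spawn point and let it drive itself.
blueprints = world.get_blueprint_library()
vehicle_bp = blueprints.filter("vehicle.*")[0]
spawn_point = world.get_map().get_spawn_points()[0]
vehicle = world.spawn_actor(vehicle_bp, spawn_point)
vehicle.set_autopilot(True)

# Sensor properties (resolution, field of view, mounting pose) are configurable.
cam_transform = carla.Transform(carla.Location(x=1.5, z=2.4))

rgb_bp = blueprints.find("sensor.camera.rgb")
rgb_bp.set_attribute("image_size_x", "1280")
rgb_bp.set_attribute("image_size_y", "720")
rgb_bp.set_attribute("fov", "90")
rgb_cam = world.spawn_actor(rgb_bp, cam_transform, attach_to=vehicle)

seg_bp = blueprints.find("sensor.camera.semantic_segmentation")
seg_bp.set_attribute("image_size_x", "1280")
seg_bp.set_attribute("image_size_y", "720")
seg_bp.set_attribute("fov", "90")
seg_cam = world.spawn_actor(seg_bp, cam_transform, attach_to=vehicle)

# Save each frame; the segmentation sensor already encodes per-pixel class
# labels, so no manual annotation is needed.
rgb_cam.listen(lambda img: img.save_to_disk("out/rgb/%06d.png" % img.frame))
seg_cam.listen(lambda img: img.save_to_disk(
    "out/seg/%06d.png" % img.frame, carla.ColorConverter.CityScapesPalette))

try:
    time.sleep(30.0)  # record for a fixed wall-clock interval
finally:
    for actor in (rgb_cam, seg_cam, vehicle):
        if hasattr(actor, "stop"):
            actor.stop()
        actor.destroy()
```

Because the labels come directly from the renderer, every captured frame arrives with exact per-pixel ground truth, which is the property the abstract relies on for replacing manual labeling.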
