Abstract

Autonomous driving has become a prevalent research topic in recent years, attracting attention from both academia and industry. As human drivers rely on visual information to discern road conditions and make driving decisions, autonomous driving calls for vision systems such as vehicle detection models. These vision models require a large amount of labeled data, whereas collecting and annotating real traffic data is time-consuming and costly. Therefore, we present a novel vehicle detection framework based on parallel vision to tackle this issue, using specially designed virtual data to help train the vehicle detection model. We also propose a method to construct large-scale artificial scenes and generate virtual data for vision-based autonomous driving schemes. Experimental results verify the effectiveness of the proposed framework, demonstrating that combining virtual and real data yields better vehicle detection performance than using real data alone.

Highlights

  • We propose a novel vehicle detection framework based on the parallel vision theory, using specially designed virtual datasets to help train the vehicle detection model

  • We conduct experiments to validate the effectiveness of our vehicle detection framework

Introduction

With the breakthrough progress of artificial intelligence technology, intelligent vehicles equipped with advanced driving assistance systems (ADAS) are being vigorously launched on the market [1, 2]. As important information sources for intelligent vehicles, vision systems play a crucial role in safe and efficient automated driving: they acquire information accurately and support prompt analysis [3]. With the rapid development of intelligent visual perception technology, deep learning methods for vision-based intelligent driving now far exceed traditional methods. However, these deep learning-based methods require a large amount of labeled data for training. Artificial scenes, which simulate actual traffic scenes, can be used to alleviate these problems with real data: they make it easier to collect data under various complex situations, and detailed, accurate annotation information is generated automatically during the collection of virtual data.
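The idea of supplementing scarce real annotations with automatically labeled virtual samples can be sketched as follows. This is a minimal illustration, not the paper's method: the function name `mix_datasets`, the `virtual_ratio` parameter, and the sample tuples are all our own assumptions.

```python
import random


def mix_datasets(real_samples, virtual_samples, virtual_ratio=0.5, seed=0):
    """Combine real and virtual labeled samples into one training set.

    virtual_ratio is the fraction of the mixed set drawn from virtual data;
    the rest is the full real set. (Hypothetical helper for illustration.)
    """
    rng = random.Random(seed)
    # Number of virtual samples needed so that they make up virtual_ratio
    # of the mixed set, capped by how many virtual samples exist.
    n_virtual = int(len(real_samples) * virtual_ratio / (1.0 - virtual_ratio))
    n_virtual = min(n_virtual, len(virtual_samples))
    mixed = list(real_samples) + rng.sample(list(virtual_samples), n_virtual)
    rng.shuffle(mixed)
    return mixed


# Hypothetical (image, annotation) pairs standing in for labeled frames.
real = [(f"real_{i}.jpg", f"real_boxes_{i}") for i in range(100)]
virtual = [(f"virt_{i}.png", f"virt_boxes_{i}") for i in range(300)]

mixed = mix_datasets(real, virtual, virtual_ratio=0.5)
print(len(mixed))  # 200: all 100 real samples plus 100 sampled virtual ones
```

In practice the mixing ratio is a tuning knob: too little virtual data gives no benefit, while too much can let the synthetic-to-real domain gap dominate training.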

