Abstract

Object detection in autonomous vehicles typically runs on an embedded system to reduce power consumption. Using an object detection algorithm with high accuracy and real-time detection speed on such embedded systems is essential for safe driving. This study proposes a parallel processing method for GPU and CPU operations to enhance the detection speed of the model. In addition, this study proposes data augmentation and image-resizing techniques that account for the camera input size used in autonomous driving, which significantly increase accuracy while also improving detection speed. Applying these proposed schemes to a baseline algorithm, tiny Gaussian YOLOv3, improves the mean average precision by 1.14 percentage points (pp) on the Berkeley Deep Drive (BDD) dataset and 1.34 pp on the KITTI dataset compared to the baseline. Furthermore, on the NVIDIA Jetson AGX Xavier, an embedded platform for autonomous driving, the proposed algorithm improves the detection speed by 22.54% on BDD and 24.67% on KITTI compared to the baseline, thereby enabling high-speed real-time detection on both datasets.
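The abstract does not give implementation details of the CPU/GPU parallel processing, but the idea of overlapping CPU-side pre-processing (decode and resize to a camera-friendly input shape) with GPU-side inference can be sketched with a simple producer/consumer pipeline. The snippet below is a minimal illustration, not the authors' code: the model, the 832x256 input size, and the image list are hypothetical placeholders assumed only for this example.

```python
# Minimal sketch (assumed, not from the paper): overlap CPU pre-processing with
# GPU inference via a producer/consumer queue so neither device sits idle.
import queue
import threading

import cv2    # CPU-side image decode/resize
import torch  # GPU-side inference

INPUT_W, INPUT_H = 832, 256  # hypothetical wide resize matching a driving camera's aspect ratio
frame_queue = queue.Queue(maxsize=4)

def cpu_preprocess(image_paths):
    """CPU thread: decode and resize frames while the GPU runs inference."""
    for path in image_paths:
        img = cv2.imread(path)
        img = cv2.resize(img, (INPUT_W, INPUT_H))
        tensor = torch.from_numpy(img).permute(2, 0, 1).float().unsqueeze(0) / 255.0
        frame_queue.put(tensor)
    frame_queue.put(None)  # sentinel: no more frames

def gpu_inference(model):
    """Main thread: pull pre-processed frames and run detection on the GPU."""
    model = model.cuda().eval()
    with torch.no_grad():
        while True:
            tensor = frame_queue.get()
            if tensor is None:
                break
            detections = model(tensor.cuda(non_blocking=True))
            # ... post-process detections (thresholding, NMS) ...

# Usage (hypothetical model and image list):
# producer = threading.Thread(target=cpu_preprocess, args=(image_paths,))
# producer.start()
# gpu_inference(model)
# producer.join()
```

Because pre-processing of frame N+1 proceeds on the CPU while frame N is on the GPU, the per-frame latency approaches the slower of the two stages rather than their sum, which is the kind of speedup the abstract attributes to the parallel processing scheme.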
