Abstract

Commercialization of self-driving applications requires a precise and reliable perception system due to the highly dynamic and complex road environment. Early perception systems relied on either the camera or LiDAR alone for moving-obstacle detection. With the development of vehicular sensors and deep learning technologies, multi-view and sensor-fusion-based convolutional neural network (CNN) models for detection tasks have become a popular research area. In this paper, we present SaccadeFork, a novel multi-sensor fusion-based CNN model that takes the image and upsampled LiDAR point clouds as input. SaccadeFork comprises two modules: (1) a lightweight backbone consisting of an hourglass convolutional feature-extraction module and a parallel dilated-convolution module that adapts the system to different target sizes, and (2) an anchor-based detection head. The model is also designed for deployment on resource-limited edge devices in the vehicle. Two refinement strategies, Mixup and the Swish activation function, are adopted to further improve the model. Comparisons with a series of recent models on the public KITTI dataset show that SaccadeFork achieves the best detection accuracy for vehicles and pedestrians across different scenarios. The final model is also deployed and tested on a local dataset collected with edge devices and a low-cost sensor solution, and the results show that it achieves real-time efficiency and high detection accuracy.
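Below is a minimal PyTorch sketch of the pipeline the abstract outlines: the RGB image is concatenated channel-wise with a dense depth map obtained by upsampling the LiDAR point cloud, passed through an hourglass (encoder-decoder) backbone followed by a parallel dilated-convolution module, and fed to an anchor-based detection head, with Swish (nn.SiLU) as the activation. All module names, channel counts, anchor counts, and dilation rates are illustrative assumptions rather than the paper's actual configuration, and Mixup (input-level blending of training pairs) is omitted for brevity.

```python
import torch
import torch.nn as nn


def conv_block(c_in, c_out, stride=1):
    # 3x3 conv + BN + Swish (nn.SiLU is PyTorch's Swish)
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, stride=stride, padding=1, bias=False),
        nn.BatchNorm2d(c_out),
        nn.SiLU(),
    )


class Hourglass(nn.Module):
    """Lightweight encoder-decoder ('hourglass') feature extractor."""
    def __init__(self, c_in=4, c=64):
        super().__init__()
        self.down1 = conv_block(c_in, c, stride=2)
        self.down2 = conv_block(c, 2 * c, stride=2)
        self.up = nn.ConvTranspose2d(2 * c, c, 2, stride=2)
        self.out = conv_block(c, c)

    def forward(self, x):
        d1 = self.down1(x)
        d2 = self.down2(d1)
        return self.out(self.up(d2) + d1)  # skip connection


class ParallelDilation(nn.Module):
    """Parallel 3x3 convs with different dilation rates, fused by a
    1x1 conv, so one feature map covers small and large targets."""
    def __init__(self, c, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(c, c, 3, padding=r, dilation=r) for r in rates
        )
        self.fuse = nn.Sequential(nn.Conv2d(len(rates) * c, c, 1), nn.SiLU())

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))


class AnchorHead(nn.Module):
    """Anchor-based head: per-anchor class scores and box offsets."""
    def __init__(self, c, num_anchors=9, num_classes=2):
        super().__init__()
        self.cls = nn.Conv2d(c, num_anchors * num_classes, 1)
        self.reg = nn.Conv2d(c, num_anchors * 4, 1)

    def forward(self, x):
        return self.cls(x), self.reg(x)


class SaccadeForkSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = Hourglass(c_in=4)  # 3 RGB + 1 upsampled-depth channel
        self.context = ParallelDilation(64)
        self.head = AnchorHead(64)

    def forward(self, rgb, depth):
        x = torch.cat([rgb, depth], dim=1)  # early fusion of camera and LiDAR
        return self.head(self.context(self.backbone(x)))


# Smoke test with dummy inputs
model = SaccadeForkSketch()
cls, reg = model(torch.randn(1, 3, 128, 416), torch.randn(1, 1, 128, 416))
print(cls.shape, reg.shape)
```

Using a few parallel dilated branches instead of a deeper feature pyramid keeps the backbone shallow and cheap, which is consistent with the stated goal of running on resource-limited in-vehicle edge devices.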
