Abstract

Object detection plays an important role in autonomous driving systems. LiDAR is widely used in autonomous vehicles and robots as a sensor for environmental perception. Recently, advances in computational power and deep learning have made it possible to classify and locate objects from a LiDAR point cloud with a single end-to-end learnable network. However, objects are sparsely distributed in a large point cloud and are often only partially scanned by the LiDAR, which poses a challenge for accurate and rapid object localization and classification from the raw point cloud. In this paper, we introduce a new single-shot refinement neural network for fast and accurate 3D object detection from raw LiDAR point clouds. First, we exploit a self-attention mechanism in the main object detection branch to enhance object feature representation. Second, we apply deformable convolution to learn adaptive receptive fields that fully capture the features of rotated and partially visible objects. Third, an object refinement branch is introduced to produce a finer regression of objects on top of the primary estimates from the main detection branch. All proposed modules are shown to effectively improve detection accuracy. Our method is evaluated on the KITTI 3D detection benchmark and achieves state-of-the-art results while maintaining real-time efficiency. Furthermore, real-time tests on an autonomous driving vehicle demonstrate that our method is robust to 16-channel LiDAR and can meet the demands of accuracy, efficiency, and visibility of object detection in various urban scenarios.
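
To make the two feature-enhancement ideas mentioned above concrete, the sketch below shows a generic self-attention block applied to a BEV-style feature map and a deformable convolution whose sampling offsets are predicted from the input, using PyTorch and torchvision. This is a minimal illustrative sketch, not the paper's implementation: the module names, channel sizes, and the way the blocks are composed are assumptions made here for demonstration.

```python
# Minimal sketch (assumed, not the authors' code): spatial self-attention and
# a deformable 3x3 convolution as generic feature-enhancement blocks.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class SpatialSelfAttention(nn.Module):
    """Single-head self-attention over the spatial positions of a feature map."""

    def __init__(self, channels, reduction=8):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // reduction, 1)
        self.key = nn.Conv2d(channels, channels // reduction, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)             # (B, HW, C')
        k = self.key(x).flatten(2)                                # (B, C', HW)
        v = self.value(x).flatten(2)                              # (B, C, HW)
        attn = torch.softmax(q @ k / (k.shape[1] ** 0.5), dim=-1)  # (B, HW, HW)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return x + self.gamma * out                               # residual connection


class DeformableBlock(nn.Module):
    """3x3 deformable convolution with offsets predicted from the input."""

    def __init__(self, channels):
        super().__init__()
        # 2 offsets (dx, dy) per kernel position -> 2 * 3 * 3 offset channels
        self.offset = nn.Conv2d(channels, 2 * 3 * 3, kernel_size=3, padding=1)
        self.deform = DeformConv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        return self.deform(x, self.offset(x))


if __name__ == "__main__":
    feat = torch.randn(2, 64, 32, 32)   # hypothetical BEV feature map
    feat = SpatialSelfAttention(64)(feat)
    feat = DeformableBlock(64)(feat)
    print(feat.shape)                   # torch.Size([2, 64, 32, 32])
```

In practice such blocks would sit inside the main detection branch before the classification and box-regression heads; the refinement branch described in the abstract would then re-regress the boxes from the enhanced features, but its exact design is specific to the paper and is not reproduced here.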
