Purpose
In autonomous driving, the inherent sparsity of point clouds often limits object detection performance, while existing multimodal architectures struggle to meet the real-time requirements of 3D object detection. The main purpose of this paper is therefore to significantly enhance detection performance, especially the recognition of small objects, and to address slow inference speed. This improves the safety of autonomous driving systems and makes autonomous driving feasible on devices with limited computing power.

Design/methodology/approach
BRTPillar first adopts an element-based method to fuse image and point cloud features. Second, a local-global feature interaction method based on an efficient additive attention mechanism is designed to extract multi-scale contextual information. Finally, an enhanced multi-scale feature fusion method is proposed by introducing adaptive spatial and channel interaction attention mechanisms, thereby improving the learning of fine-grained features.

Findings
Extensive experiments were conducted on the KITTI dataset. Compared with the benchmark model, 3D bounding-box accuracy for cars, pedestrians and cyclists improved by 3.05%, 9.01% and 22.65%, respectively, and bird's-eye-view accuracy improved by 2.98%, 10.77% and 21.14%, respectively. Meanwhile, BRTPillar runs at 40.27 Hz, meeting the real-time detection needs of autonomous driving.

Originality/value
This paper proposes a boosting multimodal real-time 3D object detection method called BRTPillar, which achieves accurate localization in many scenarios, especially complex scenes with many small objects, while also achieving real-time inference speed.
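To make the attention step in the methodology concrete, below is a minimal PyTorch sketch of an efficient additive attention block of the kind the abstract names. The abstract does not give BRTPillar's exact formulation, so the SwiftFormer-style design, module names, and tensor shapes here are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class EfficientAdditiveAttention(nn.Module):
    """Illustrative sketch of efficient additive attention (SwiftFormer-style).

    Runs in linear time in the number of tokens: a learned vector scores each
    query, the scores pool the queries into one global query, and that global
    query interacts element-wise with every key. Not BRTPillar's actual module.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.w_a = nn.Parameter(torch.randn(dim, 1))  # learned attention vector
        self.scale = dim ** -0.5
        self.proj = nn.Linear(dim, dim)
        self.out = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, N, D) token features, e.g. flattened pillar/BEV features.
        q = F.normalize(self.to_q(x), dim=-1)          # (B, N, D)
        k = self.to_k(x)                               # (B, N, D)
        # Scalar weight per token, softmax-normalized over the token axis.
        alpha = torch.softmax((q @ self.w_a) * self.scale, dim=1)  # (B, N, 1)
        g = (alpha * q).sum(dim=1, keepdim=True)       # global query, (B, 1, D)
        # Element-wise interaction between the global query and each key,
        # with a residual path back to the per-token queries.
        return self.out(self.proj(g * k) + q)


if __name__ == "__main__":
    x = torch.randn(2, 1024, 64)  # hypothetical: 1024 tokens of dim 64
    y = EfficientAdditiveAttention(64)(x)
    print(y.shape)  # torch.Size([2, 1024, 64])
```

Because the token interaction reduces to one global pooling plus element-wise products, the cost grows linearly with the number of tokens rather than quadratically, which is consistent with the real-time goal the abstract emphasizes.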