Abstract
Sensor fusion is essential for collaborative intelligent systems. A regional feature fusion network, ReFuNet, is proposed for 3D object detection. Because LiDAR point clouds are sparse, distant or small objects are difficult to detect accurately. ReFuNet combines LiDAR point cloud and camera image information to address this sparsity, integrating the rich semantic information of images to enhance point cloud features. The authors' ReFuNet first segments the likely object regions using the results of 2D image detection; a cross‐attention mechanism then adaptively fuses image and point cloud features within these regions, and the fused features are used to predict the 3D bounding boxes of objects. Experiments on the KITTI 3D object detection dataset show that the proposed fusion method effectively improves 3D object detection performance.
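To make the region-wise fusion step concrete, the following is a minimal sketch of cross-attention fusion between point cloud and image features inside a candidate region. It is not the authors' implementation; the module name, feature dimensions, and tensor shapes are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation) of region-wise
# cross-attention fusion between image and point cloud features.
# Module names, feature dimensions, and shapes are illustrative assumptions.
import torch
import torch.nn as nn


class RegionCrossAttentionFusion(nn.Module):
    """Fuse point features (queries) with image features (keys/values)
    inside a candidate object region via multi-head cross-attention."""

    def __init__(self, dim: int = 128, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, point_feats: torch.Tensor, image_feats: torch.Tensor) -> torch.Tensor:
        # point_feats: (B, N_points, dim) features of points inside a region
        # image_feats: (B, N_pixels, dim) image features from the same region
        fused, _ = self.attn(query=point_feats, key=image_feats, value=image_feats)
        # Residual connection preserves the original geometric features.
        return self.norm(point_feats + fused)


if __name__ == "__main__":
    fusion = RegionCrossAttentionFusion(dim=128, num_heads=4)
    pts = torch.randn(2, 256, 128)   # point features in one region
    img = torch.randn(2, 1024, 128)  # image features projected into that region
    out = fusion(pts, img)
    print(out.shape)  # torch.Size([2, 256, 128])
```

In this sketch the point features act as attention queries so that each point adaptively selects relevant image semantics, which matches the abstract's description of adaptive fusion within detected regions.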