Abstract

The high-precision detection of 3D objects from point cloud data has become a crucial research topic in intelligent transportation systems. A state-of-the-art 3D object detector can be obtained by effectively modeling both global and local features. However, previous feature-representation work based on volumetric generation or point-learning methods has difficulty building the relationship between local and global features. We therefore propose a multi-feature fusion network (MFFNet) that improves detection precision on 3D point cloud data by combining global features from 3D voxel convolutions with local features from a point-learning network. The algorithm is an end-to-end detection framework consisting of a voxel convolutional module, a local point-feature module, and a detection head. From raw point clouds, MFFNet constructs a local point-feature set via point learning and sampling, and a global feature map via 3D voxel convolution. The detection head uses the fused features to predict the position and category of each 3D object, allowing the proposed method to achieve higher precision than existing approaches. Experimental evaluations obtain 97% mAP (mean average precision) on the KITTI 3D object detection dataset and 80% mAP on the Waymo Open Dataset, demonstrating the effectiveness of the developed feature-fusion representation for 3D objects and its satisfactory localization accuracy.
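The fusion idea described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the voxel branch, point branch, and fusion step here are simplified stand-ins (average-pooled occupancy grid for the global feature, raw coordinates for the local feature, concatenation for fusion), with all function names and parameters chosen for illustration.

```python
import numpy as np

def voxel_global_feature(points, voxel_size=1.0, grid=4):
    """Hypothetical stand-in for the 3D voxel convolution branch:
    scatter points into a coarse grid, average the z-values per
    occupied voxel, and flatten into one global descriptor."""
    idx = np.clip((points[:, :3] / voxel_size).astype(int), 0, grid - 1)
    vol = np.zeros((grid, grid, grid))
    cnt = np.zeros((grid, grid, grid))
    for (i, j, k), z in zip(idx, points[:, 2]):
        vol[i, j, k] += z
        cnt[i, j, k] += 1
    occ = cnt > 0
    vol[occ] /= cnt[occ]          # mean z per occupied voxel
    return vol.reshape(-1)        # global feature vector, length grid**3

def fuse(points):
    """Concatenate a tiled global feature with per-point local features."""
    g = voxel_global_feature(points)                 # global branch
    local = points[:, :3]                            # per-point local features (placeholder)
    g_tiled = np.repeat(g[None, :], len(points), 0)  # broadcast global feature to every point
    return np.concatenate([local, g_tiled], axis=1)  # fused per-point representation

pts = np.random.rand(128, 3) * 4.0   # toy point cloud in a 4 m cube
fused = fuse(pts)
print(fused.shape)                   # (128, 3 + 4**3) = (128, 67)
```

In the actual detector, learned voxel convolutions and a point-learning network replace the hand-crafted branches, and the fused per-point features feed the detection head.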
