Abstract

LiDAR point clouds and camera images are distorted to different degrees under severe weather conditions. As a result, traditional single-modal object detection methods cannot exploit the complementary information between sensors and therefore cannot cope with the degradations that severe weather causes. Recently, multimodal data fusion methods have been applied to road object detection under severe weather conditions. However, existing multimodal algorithms suffer from low data alignment accuracy and are unable to suppress exposure changes in severe weather. In this work, we propose a new multimodal sensor fusion object detection network that effectively overcomes the shortcomings caused by camera and LiDAR distortions in severe weather and achieves robust environment perception. We propose: 1) a point-wise aligned data fusion method based on K-means++ clustering to improve the accuracy of data alignment; 2) an implicit feature pyramid network (i-FPN) to fuse image features and suppress the distortions caused by exposure changes; 3) a hybrid attention mechanism (HAM) to process the fused features and improve adaptability to different working conditions. We conduct experiments on the ONCE and KITTI datasets. The experimental results and analysis show that the proposed method effectively improves the performance of the multimodal deep fusion network under both clear and severe weather conditions.
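The abstract names K-means++ clustering as the basis of the point-wise aligned data fusion but does not detail the procedure. The sketch below is only a rough illustration of one way such an alignment step could look, not the authors' implementation: it groups LiDAR points with K-means++ and projects the cluster centroids into the image plane. The function names, cluster count, and calibration matrix are hypothetical placeholders.

```python
# Minimal sketch (assumed, not the paper's method): K-means++ grouping of LiDAR
# points followed by projection of cluster centroids onto the image plane as a
# coarse point-to-pixel alignment step.
import numpy as np
from sklearn.cluster import KMeans


def align_points_to_pixels(points_xyz, proj_matrix, n_clusters=64):
    """Cluster LiDAR points (N, 3) and project cluster centroids to pixel coordinates."""
    # K-means++ initialization reduces sensitivity to sparse or noisy returns.
    km = KMeans(n_clusters=n_clusters, init="k-means++", n_init=10, random_state=0)
    labels = km.fit_predict(points_xyz)
    centroids = km.cluster_centers_                            # (n_clusters, 3)

    # Homogeneous coordinates -> camera projection (3x4 matrix assumed).
    homog = np.hstack([centroids, np.ones((n_clusters, 1))])   # (n_clusters, 4)
    cam = homog @ proj_matrix.T                                 # (n_clusters, 3)
    pixels = cam[:, :2] / cam[:, 2:3]                           # perspective divide

    return labels, pixels  # per-point cluster ids, centroid pixel locations


# Example with synthetic data; real use would load LiDAR points and camera
# calibration from the dataset (e.g. ONCE or KITTI).
points = np.random.rand(2048, 3) * np.array([40.0, 20.0, 3.0])
P = np.array([[700.0, 0.0, 620.0, 0.0],
              [0.0, 700.0, 180.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
labels, pixels = align_points_to_pixels(points, P)
```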
