Abstract

Object detection is a vital component of autonomous navigation in dynamic environments. Camera and lidar sensors have been widely used for efficient object detection by mobile robots. However, they suffer from adverse conditions in outdoor operating environments, such as direct sunlight, fog, snow, and extreme illumination changes from day to night. Sensor fusion of camera and lidar data helps to enhance the overall performance of an object detection network. However, the diverse distribution of training data makes efficient learning of the network a challenging task. To address this challenge, we systematically study existing visual and lidar feature-based object detection methods and propose an adaptive feature attention module (AFAM) for robust multisensory data fusion-based object detection in outdoor dynamic environments. Given the camera and lidar features extracted from the intermediate layers of EfficientNet backbones, the AFAM computes the uncertainty between the two modalities and adaptively refines visual and lidar features via attention along the channel and the spatial axes. Integrated with EfficientDet, the AFAM performs adaptive recalibration and fusion of visual and lidar features by filtering noise and extracting discriminative features for an object detection network under specific environmental conditions. We evaluate the AFAM on a benchmark dataset exhibiting weather and light variations. The experimental results demonstrate that the AFAM significantly enhances the overall detection accuracy of an object detection network.
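The fusion scheme the abstract describes can be sketched in a few lines. The following is a minimal illustration, not the paper's actual AFAM: it assumes channel attention is a squeeze-and-excite-style global-average gate, spatial attention is a cross-channel-mean gate, and the modality uncertainty is approximated by feature variance (all three choices are hypothetical simplifications).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    # feat: (C, H, W); gate each channel by its global average
    # (squeeze-and-excite-style simplification, not the paper's exact design)
    w = sigmoid(feat.mean(axis=(1, 2)))          # (C,)
    return feat * w[:, None, None]

def spatial_attention(feat):
    # gate each spatial location by the mean across channels
    w = sigmoid(feat.mean(axis=0))               # (H, W)
    return feat * w[None, :, :]

def afam_fuse(cam, lidar):
    # refine each modality along the channel axis, then the spatial axis
    cam_r = spatial_attention(channel_attention(cam))
    lid_r = spatial_attention(channel_attention(lidar))
    # hypothetical uncertainty proxy: weight each modality by its
    # share of feature variance before summing
    var = np.array([cam_r.var(), lid_r.var()])
    w = var / var.sum()
    return w[0] * cam_r + w[1] * lid_r

cam = np.random.rand(8, 4, 4).astype(np.float32)    # stand-in camera features
lidar = np.random.rand(8, 4, 4).astype(np.float32)  # stand-in lidar features
fused = afam_fuse(cam, lidar)
print(fused.shape)  # (8, 4, 4)
```

In the paper's setting, `cam` and `lidar` would be intermediate EfficientNet feature maps and the fused output would feed the EfficientDet detection head; here they are random arrays purely to show the data flow.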
