Abstract
Multispectral image pairs can provide complementary visual information, making pedestrian detection systems more robust and reliable. To benefit from both the RGB and thermal IR modalities, we introduce a novel attentive multispectral feature fusion approach. Guided by inter- and intra-modality attention modules, our deep learning architecture learns to dynamically weight and fuse the multispectral features. Experiments on two public multispectral object detection datasets demonstrate that the proposed approach significantly improves detection accuracy at low computational cost.
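The abstract does not spell out the fusion mechanism, but the general idea of attention-guided multispectral fusion can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual architecture: it assumes each modality produces a feature map of shape (C, H, W), derives a per-channel gate from globally pooled statistics (intra-modality attention), and normalizes the two gates against each other so the modalities compete per channel (inter-modality weighting). The projection matrices `w_rgb` and `w_ir` stand in for learned parameters and are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attentive_fusion(rgb_feat, ir_feat, w_rgb, w_ir):
    """Fuse RGB and thermal feature maps (C, H, W) with channel-wise gates.

    Intra-modality step: each modality re-weights its own channels from a
    globally pooled descriptor. Inter-modality step: the two gate vectors
    are normalized against each other, giving a convex per-channel mix.
    """
    # Global average pooling -> (C,) descriptor per modality
    d_rgb = rgb_feat.mean(axis=(1, 2))
    d_ir = ir_feat.mean(axis=(1, 2))

    # Intra-modality gates via hypothetical learned projections
    g_rgb = sigmoid(w_rgb @ d_rgb)
    g_ir = sigmoid(w_ir @ d_ir)

    # Inter-modality normalization: per-channel weights sum to ~1
    total = g_rgb + g_ir + 1e-8
    a_rgb = (g_rgb / total)[:, None, None]
    a_ir = (g_ir / total)[:, None, None]

    # Weighted fusion of the two feature maps
    return a_rgb * rgb_feat + a_ir * ir_feat
```

In practice such gates would be produced by small learned sub-networks and applied at one or more stages of a detection backbone; the sketch only illustrates the weighting-and-fusion principle the abstract refers to.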