Abstract

Visible and thermal modalities are strongly complementary in representing object signals, and using the two modalities simultaneously helps reduce the impact of illumination variation on pedestrian detection. To exploit multimodal information effectively, this paper proposes an anchor-free multimodal pedestrian detection algorithm. First, a modal feature fusion module is proposed, which performs modal fusion through decaying dense connections and combines convolution with the self-attention mechanism to capture both local and global information between the modalities. Second, a new feature pyramid network enhanced by global context information is built from a multi-window global context module and a pyramid feature fusion module. On the visible-thermal pedestrian detection datasets KAIST, CVC-14, and LLVIP, the proposed method achieves average miss rates of 5.67%, 20.51%, and 2.21% respectively, outperforming mainstream algorithms.
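The abstract does not give implementation details, but the fusion idea it describes (convolution for local interaction, self-attention for global interaction across the two modalities) can be illustrated with a minimal sketch. The module name, channel sizes, and the additive combination of the two branches below are assumptions for illustration, not the authors' released code.

```python
# Minimal sketch (PyTorch) of a two-modality fusion block combining a
# convolutional (local) branch with a self-attention (global) branch.
# ConvAttnFusion and all hyperparameters here are hypothetical.
import torch
import torch.nn as nn


class ConvAttnFusion(nn.Module):
    """Fuse visible and thermal feature maps of shape (B, C, H, W)."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        # Local branch: 3x3 convolution over the concatenated modalities.
        self.local = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # Global branch: multi-head self-attention over spatial tokens.
        self.proj = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, vis: torch.Tensor, thermal: torch.Tensor) -> torch.Tensor:
        x = torch.cat([vis, thermal], dim=1)               # (B, 2C, H, W)
        local = self.local(x)                              # local fusion
        tokens = self.proj(x).flatten(2).transpose(1, 2)   # (B, H*W, C)
        tokens = self.norm(tokens)
        global_out, _ = self.attn(tokens, tokens, tokens)  # global fusion
        b, _, h, w = local.shape
        global_out = global_out.transpose(1, 2).reshape(b, -1, h, w)
        # Combine local and global information (additive merge assumed).
        return local + global_out


if __name__ == "__main__":
    fusion = ConvAttnFusion(channels=64)
    vis = torch.randn(2, 64, 32, 32)
    thermal = torch.randn(2, 64, 32, 32)
    print(fusion(vis, thermal).shape)  # torch.Size([2, 64, 32, 32])
```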
