Abstract

Multispectral pedestrian detection is an emerging solution with great promise in around-the-clock applications such as autonomous driving and security surveillance. To exploit the complementary nature of the two modalities and to remedy their contradictory appearances, in this paper we propose a novel cross-modality interactive attention network that takes full advantage of the interactive properties of multispectral input sources. Specifically, we first use the color (RGB) and thermal streams to build two detached feature hierarchies, one for each modality; then, taking the global features as input, the correlations between the two modalities are encoded in an attention module. Next, the channel responses of the halfway feature maps are recalibrated adaptively for the subsequent fusion operation. The architecture is constructed in a multi-scale format to better handle pedestrians of different scales, and the whole network is trained end-to-end. The proposed method is extensively evaluated on the challenging KAIST multispectral pedestrian dataset, where it achieves state-of-the-art performance with high efficiency.
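To make the described attention-and-recalibration step concrete, the sketch below illustrates one plausible reading of it in PyTorch: global features are pooled from each stream, a small shared network encodes cross-modality correlations and predicts per-channel weights, and the halfway feature maps are recalibrated before fusion. This is a minimal illustration under stated assumptions, not the paper's actual implementation; the module name `CrossModalityAttentionFusion`, the layer sizes, the sigmoid gating, and fusion by concatenation are all hypothetical choices.

```python
# Hypothetical sketch of the cross-modality attention idea described in the
# abstract. Layer sizes and the concatenation-based fusion are assumptions.
import torch
import torch.nn as nn


class CrossModalityAttentionFusion(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Encodes correlations between the two modalities from their pooled
        # global features, then predicts per-channel weights for each stream.
        self.fc = nn.Sequential(
            nn.Linear(2 * channels, 2 * channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(2 * channels // reduction, 2 * channels),
            nn.Sigmoid(),
        )

    def forward(self, rgb_feat: torch.Tensor, thermal_feat: torch.Tensor):
        b, c, _, _ = rgb_feat.shape
        # Global average pooling summarizes each modality's feature map.
        g_rgb = rgb_feat.mean(dim=(2, 3))      # (B, C)
        g_th = thermal_feat.mean(dim=(2, 3))   # (B, C)
        # Joint global descriptor -> channel attention for both streams.
        w = self.fc(torch.cat([g_rgb, g_th], dim=1))  # (B, 2C)
        w_rgb, w_th = w[:, :c], w[:, c:]
        # Adaptively recalibrate channel responses of the halfway features.
        rgb_recal = rgb_feat * w_rgb.view(b, c, 1, 1)
        th_recal = thermal_feat * w_th.view(b, c, 1, 1)
        # Fuse the recalibrated maps (concatenation assumed here).
        return torch.cat([rgb_recal, th_recal], dim=1)


# Usage with feature maps from one level of the two detached hierarchies:
fusion = CrossModalityAttentionFusion(channels=256)
fused = fusion(torch.randn(2, 256, 40, 32), torch.randn(2, 256, 40, 32))
print(fused.shape)  # torch.Size([2, 512, 40, 32])
```

Because the abstract describes a multi-scale design, one such module would presumably be applied at each feature level where the two streams are fused.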
