Abstract

Multispectral pedestrian detection is an important task in many applications: by exploiting the complementary visual information in color and thermal images, it can yield more accurate and reliable detection results. However, it faces two open challenges: 1) how to effectively and dynamically integrate multispectral information according to the confidence of each modality, and 2) how to produce a reliable prediction result. In this paper, we propose a novel confidence-aware multispectral pedestrian detection (CMPD) method, which flexibly learns a multispectral representation while simultaneously producing a reliable result with confidence estimation. Specifically, a dense fusion strategy is first proposed to extract a multilevel multispectral representation at the feature level. Then, an additional confidence subnetwork dynamically estimates the detection confidence of each modality. Finally, Dempster's combination rule is introduced to fuse the results of the different branches according to the rectified confidences. The proposed CMPD method not only effectively integrates multimodal information but also provides reliable predictions. Extensive experimental results demonstrate the effectiveness of our algorithm compared with state-of-the-art methods.
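
Since the abstract names Dempster's combination rule without spelling it out, the sketch below illustrates the standard rule for fusing evidence from two modality branches. It is not the authors' implementation: the function dempster_combine, the two-class frame {pedestrian, background}, and the example mass values are illustrative assumptions. The mass left on the full frame represents a branch's ignorance, so a low-confidence branch contributes less to the fused decision.

```python
# Minimal sketch (illustrative, not the paper's code) of Dempster's
# combination rule for fusing per-class evidence from two branches.

def dempster_combine(m1: dict, m2: dict) -> dict:
    """Combine two mass functions over the same frame of discernment.

    (m1 (+) m2)(A) = sum_{B & C = A} m1(B) * m2(C) / (1 - K),
    where K = sum_{B & C = empty} m1(B) * m2(C) is the conflict mass.
    Keys are frozensets of class labels.
    """
    combined = {}
    conflict = 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            inter = b & c
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mb * mc
            else:
                conflict += mb * mc  # fully conflicting evidence
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources cannot be combined.")
    norm = 1.0 - conflict
    return {a: v / norm for a, v in combined.items()}


if __name__ == "__main__":
    # Hypothetical masses over {pedestrian}, {background}, and the
    # ignorance set {pedestrian, background}; the mass on the full
    # frame plays the role of each branch's (lack of) confidence.
    P, B = frozenset({"ped"}), frozenset({"bg"})
    theta = P | B
    color = {P: 0.6, B: 0.1, theta: 0.3}    # confident color branch
    thermal = {P: 0.3, B: 0.2, theta: 0.5}  # less confident thermal branch
    print(dempster_combine(color, thermal))
```

Running the example fuses the two branches so that the more confident color branch dominates: the combined mass on {ped} rises to about 0.67 while the residual ignorance shrinks, matching the intuition that confidence-weighted fusion should favor the more reliable modality.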
