Abstract
Cross-modal crowd counting aims to exploit information from different modalities to generate crowd density maps and thereby estimate the number of pedestrians more accurately in unconstrained scenes. Because images from different modalities differ substantially, effectively fusing information across modalities remains a challenging problem. To address this problem, we propose a cross-modal crowd counting method that combines a CNN with a novel cross-modal transformer, effectively fusing information across modalities and boosting counting accuracy in unconstrained scenes. Concretely, we first design two CNN branches to capture the modality-specific features of the input images. We then design a novel cross-modal transformer that extracts cross-modal global features from these modality-specific features. Furthermore, we propose a cross-layer connection structure that links the front-end and back-end of the network by adding together features from different layers. At the end of the network, we develop a cross-modal attention module that strengthens the cross-modal feature representation by extracting the complementarities between the modal features. Experimental results show that the proposed method achieves state-of-the-art performance: it not only improves the accuracy and robustness of cross-modal crowd counting, but also generalizes well to multimodal crowd counting.
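The pipeline described above (two modality-specific CNN branches, a cross-modal transformer, cross-layer additive connections, and a final cross-modal attention module) could be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' released code: the module names, channel sizes, attention design, and the RGB/thermal input pairing are all assumptions made for the example.

```python
# Illustrative PyTorch sketch of the described architecture; all names,
# dimensions, and design details are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModalityBranch(nn.Module):
    """CNN branch extracting modality-specific features (e.g. RGB or thermal)."""
    def __init__(self, in_ch=3, dim=64):
        super().__init__()
        self.front = nn.Sequential(
            nn.Conv2d(in_ch, dim, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.back = nn.Sequential(
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        low = self.front(x)                      # front-end (shallow) features
        high = self.back(low)                    # back-end (deep) features
        # cross-layer connection: add front-end and upsampled back-end features
        high_up = F.interpolate(high, size=low.shape[-2:], mode="bilinear",
                                align_corners=False)
        return low + high_up


class CrossModalTransformer(nn.Module):
    """Cross-attention between the two modality-specific feature maps."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn_ab = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_ba = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, fa, fb):
        b, c, h, w = fa.shape
        ta = fa.flatten(2).transpose(1, 2)       # (B, HW, C) token sequence
        tb = fb.flatten(2).transpose(1, 2)
        # each modality queries the other to gather cross-modal global context
        fa2, _ = self.attn_ab(ta, tb, tb)
        fb2, _ = self.attn_ba(tb, ta, ta)
        fa2 = fa2.transpose(1, 2).reshape(b, c, h, w)
        fb2 = fb2.transpose(1, 2).reshape(b, c, h, w)
        return fa2, fb2


class CrossModalAttention(nn.Module):
    """Reweights each modality by a gate computed from the other modality,
    so complementary information is emphasized before fusion."""
    def __init__(self, dim=64):
        super().__init__()
        self.gate = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                  nn.Conv2d(dim, dim, 1), nn.Sigmoid())

    def forward(self, fa, fb):
        return fa * self.gate(fb) + fb * self.gate(fa)


class CrossModalCrowdCounter(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.branch_a = ModalityBranch(3, dim)   # e.g. RGB branch
        self.branch_b = ModalityBranch(3, dim)   # e.g. thermal branch
        self.fusion = CrossModalTransformer(dim)
        self.cma = CrossModalAttention(dim)
        self.head = nn.Conv2d(dim, 1, 1)         # predicts the density map

    def forward(self, rgb, thermal):
        fa, fb = self.branch_a(rgb), self.branch_b(thermal)
        fa, fb = self.fusion(fa, fb)
        fused = self.cma(fa, fb)
        return self.head(fused)


if __name__ == "__main__":
    model = CrossModalCrowdCounter()
    rgb = torch.randn(1, 3, 128, 128)
    thermal = torch.randn(1, 3, 128, 128)
    density = model(rgb, thermal)
    # the crowd count is the integral (sum) of the predicted density map
    print(density.shape, density.sum().item())
```

In a sketch like this, summing the predicted density map yields the estimated pedestrian count, which is the standard readout in density-map-based crowd counting.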