Abstract

This paper presents a novel cross-modal fusion approach for RGB-D crowd counting that simultaneously estimates the crowd count and the density map from crowd images. In RGB-D crowd counting, depth data is typically incorporated into the head-detection procedure to reduce the underestimation caused by small heads in the crowd and thereby improve counting performance. Unlike traditional methods that use the depth image in this way, the proposed approach is designed as a density estimation-based regression framework that learns richer deep representations from the original images through cross-modal interactions at multiple locations in the framework, which benefits crowd counting across diverse scenes, especially congested ones. In addition, global and local context modeling helps the framework learn a more adequate scale-aware representation for the counting task. Extensive experiments on the MICC and large-scale ShanghaiRGBD benchmarks demonstrate that the proposed approach outperforms state-of-the-art methods for RGB-D crowd counting and density estimation. Furthermore, the approach can be extended to the RGB crowd counting task, where experimental results show it achieves performance comparable to existing crowd counting methods.
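The abstract's density estimation-based regression formulation rests on a standard construction: ground-truth density maps are built by placing a normalized Gaussian kernel at each annotated head location, so the map integrates to the head count and a network can be trained to regress it. The sketch below illustrates that construction only; the paper's exact kernel choice and bandwidth are not given here, so the fixed `sigma` is an assumption.

```python
import numpy as np

def gaussian_density_map(points, shape, sigma=4.0):
    """Place a normalized 2-D Gaussian at each head location (x, y).

    Each kernel is normalized over the grid, so every head contributes
    exactly 1 to the map's integral; summing the map yields the count.
    NOTE: a fixed sigma is an illustrative assumption, not the paper's
    (many works instead use a geometry-adaptive bandwidth).
    """
    h, w = shape
    density = np.zeros((h, w), dtype=np.float64)
    ys, xs = np.mgrid[0:h, 0:w]
    for (x, y) in points:
        g = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2.0 * sigma ** 2))
        g /= g.sum()  # normalize so this head contributes exactly 1
        density += g
    return density

# Two annotated heads -> the density map sums to 2.
dm = gaussian_density_map([(20, 30), (50, 10)], shape=(64, 64))
```

At training time, the estimated count is simply the sum over the predicted density map, which is why count and density map are obtained simultaneously.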
