Abstract

Crowd counting is challenging due to unconstrained imaging factors, e.g., background clutter, non-uniform distribution of people, and large scale and perspective variations. Dealing with these problems using deep neural networks requires rich prior knowledge and multi-scale contextual representations. In this paper, we propose a Cross-stage Refinement Network (CRNet) that refines predicted density maps progressively based on hierarchical multi-level density priors. In particular, CRNet is composed of several fully convolutional networks stacked recursively, with each stage taking the previous stage's output as input; each stage uses the previous density output to gradually correct prediction errors in crowd areas and refine the predicted density map. Cross-stage multi-level density priors are further exploited in this recurrent framework through ConvLSTM-based cross-stage skip layers. To cope with the varied challenges of unconstrained crowd scenes, we explore several crowd-specific data augmentation methods that mimic real-world scenarios and enrich crowd feature representations from multiple aspects. Extensive experiments show that the proposed method outperforms state-of-the-art methods on four widely used, challenging benchmarks in terms of counting accuracy and density map quality. Code and models are available at https://github.com/lytgftyf/Crowd-Counting-via-Cross-stage-Refinement-Networks.
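To make the stage-wise refinement idea concrete, below is a minimal PyTorch sketch of the architecture as the abstract describes it: stacked fully convolutional stages that each consume the previous density map, with a ConvLSTM cell acting as the cross-stage skip layer that carries density priors between stages. This is not the authors' released code; all module names, channel widths, the backbone, and the number of stages are illustrative assumptions.

```python
# Sketch of cross-stage refinement with ConvLSTM skip layers (assumed design,
# not the official CRNet implementation).
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """A standard ConvLSTM cell, used here as a cross-stage skip layer."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c = f * c + i * torch.tanh(g)
        h = o * torch.tanh(c)
        return h, (h, c)

class RefinementStage(nn.Module):
    """One FCN stage: fuses image features, the previous density map,
    and the ConvLSTM skip features into a refined density map."""
    def __init__(self, feat_ch, hid_ch):
        super().__init__()
        self.fcn = nn.Sequential(
            nn.Conv2d(feat_ch + 1 + hid_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 1),  # single-channel density prediction
        )

    def forward(self, feats, prev_density, skip):
        return self.fcn(torch.cat([feats, prev_density, skip], dim=1))

class CRNetSketch(nn.Module):
    def __init__(self, num_stages=3, feat_ch=64, hid_ch=32):
        super().__init__()
        # Stand-in for a deeper shared feature extractor (e.g., a VGG-style backbone).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.skip = ConvLSTMCell(in_ch=1, hid_ch=hid_ch)
        self.stages = nn.ModuleList(
            RefinementStage(feat_ch, hid_ch) for _ in range(num_stages)
        )

    def forward(self, img):
        feats = self.backbone(img)
        b, _, hgt, wid = feats.shape
        density = feats.new_zeros(b, 1, hgt, wid)  # initial (empty) density map
        h = feats.new_zeros(b, self.skip.hid_ch, hgt, wid)
        state = (h, h.clone())                     # ConvLSTM (h, c) state
        outputs = []
        for stage in self.stages:
            skip, state = self.skip(density, state)   # cross-stage density prior
            density = stage(feats, density, skip)     # progressive refinement
            outputs.append(density)
        return outputs  # per-stage density maps; the last is the final prediction

# Usage: maps = CRNetSketch()(torch.randn(1, 3, 128, 128)); len(maps) == 3
```

In this reading, supervising every stage's output (not just the last) would give the progressive error-correction behavior the abstract describes, since each stage learns to improve on its predecessor's map.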
