Abstract

Crowd counting, which aims to estimate crowd density, has recently made significant progress but remains an unsolved problem due to several challenges. In this paper, we propose an Attentive Encoder-Decoder Network (AEDNet) to overcome the notorious scale-variation problem in crowd counting. Our major contributions can be summarized in three aspects. First, we design an Attentive Feature Refinement (AFR) block in the encoder to adaptively extract multi-scale features. AFR compares spatial information across different scales through the attention mechanism and then adaptively assigns an importance weight to each point, highlighting the distinct role of each scale in multi-scale feature extraction. Second, we develop a Separable Non-local Fusion (SNF) block in the decoder with a self-attention mechanism to aggregate multi-scale features from different layers, which not only achieves sufficient feature fusion by capturing long-range dependencies, but also vastly reduces the computational cost compared to the original non-local operation. Third, we propose a Regional MSE (R-MSE) loss to tackle the pixel-isolation problem of the regular MSE loss. To demonstrate the effectiveness of the proposed AEDNet, we conduct extensive experiments on four widely used crowd counting datasets, and our AEDNet consistently achieves state-of-the-art performance.
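To illustrate the idea behind the R-MSE loss, here is a minimal NumPy sketch, under the assumption that "regional" means summing densities over non-overlapping blocks before applying MSE; the block size and pooling choice are hypothetical, as the abstract does not give the exact formulation.

```python
import numpy as np

def regional_mse(pred, gt, region=4):
    """Hypothetical sketch of a Regional MSE loss.

    Instead of comparing density maps pixel by pixel (which treats each
    pixel in isolation), sum the density over non-overlapping
    region x region blocks and apply MSE to the block sums. The region
    size and sum pooling are assumptions made for illustration.
    """
    h, w = pred.shape
    # Crop to a multiple of the region size so the map tiles evenly.
    h2, w2 = h - h % region, w - w % region

    def pool(x):
        # Sum each region x region block into a single value.
        return x[:h2, :w2].reshape(
            h2 // region, region, w2 // region, region
        ).sum(axis=(1, 3))

    diff = pool(pred) - pool(gt)
    return float(np.mean(diff ** 2))
```

Because the loss compares block-level counts rather than isolated pixels, a prediction that places density slightly off-position within a region is penalized less than it would be under the per-pixel MSE.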
