Abstract

Recently, crowd counting has drawn widespread attention in computer vision, but it remains extremely challenging because of varying scales and densities. Many existing methods focus on improving the multi-scale representation by utilizing multi-column or multi-branch architectures with different kernel sizes. However, such networks cannot extract feature maps with large receptive fields due to the limitation of depth. In addition, the importance of utilizing multi-level feature information in a deep network is ignored. In this paper, we propose a multi-scale and multi-level features aggregation network (MFANet) for accurate and efficient crowd counting, and it can be trained end-to-end. A vital component of the network is the scale and level aggregation module (SLAM), which can extract multi-scale features and make full use of multi-level feature information for more accurate estimation. When six SLAMs are stacked together and applied to our network, our method achieves the best performance. Furthermore, we introduce a new loss function called normalized Euclidean loss (NEL) to balance the contribution of all samples to network training. To demonstrate the performance of the proposed method, extensive experiments are conducted on four benchmark crowd counting datasets, including ShanghaiTech Part A/B, UCF-CC-50, Mall, and UCF-QNRF. Experimental results show that our MFANet achieves state-of-the-art performance in crowd counting and crowd localization.
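The abstract does not give the exact form of the normalized Euclidean loss, so the following is only a minimal sketch of one plausible interpretation: a standard per-sample Euclidean (L2) loss between predicted and ground-truth density maps, normalized by each sample's ground-truth crowd count so that sparse and dense scenes contribute more evenly to training. The function name, the count-based normalizer, and the `eps` stabilizer are all assumptions, not the paper's definition.

```python
import numpy as np

def normalized_euclidean_loss(preds, gts, eps=1e-6):
    """Hypothetical NEL sketch: per-sample squared L2 error between
    predicted and ground-truth density maps, divided by the ground-truth
    count (sum of the density map) to balance sparse vs. dense samples.
    `eps` guards against division by zero on empty scenes (assumption)."""
    losses = []
    for pred, gt in zip(preds, gts):
        l2 = np.sum((pred - gt) ** 2)          # Euclidean loss for one sample
        count = np.sum(gt)                     # ground-truth crowd count
        losses.append(l2 / (count + eps))      # normalize by count
    return float(np.mean(losses))              # average over the batch
```

Without such normalization, dense scenes (with large absolute errors) would dominate the gradient, which is the imbalance the abstract says NEL is meant to correct.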
