Abstract

Crowd counting from a single image is a challenging task due to high appearance similarity, perspective changes, and severe congestion. Many methods focus only on local appearance features and cannot handle the aforementioned challenges. To tackle them, we propose a perspective crowd counting network (PCC Net), which consists of three parts: 1) density map estimation (DME), which focuses on learning very local features for density map estimation; 2) random high-level density classification (R-HDC), which extracts global features to predict the coarse density labels of random patches in images; and 3) fore-/background segmentation (FBS), which encodes mid-level features to segment the foreground and background. In addition, the Down, Up, Left, and Right (DULR) module is embedded in PCC Net to encode the perspective changes in four directions (DULR). The proposed PCC Net is verified on five mainstream datasets, achieving state-of-the-art performance on one of them and competitive results on the other four. The source code is available at https://github.com/gjy3035/PCC-Net .
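The three-branch structure described above can be sketched as a single backbone feeding three heads. The following is a minimal PyTorch sketch, not the paper's actual implementation: the layer sizes, the realization of the DULR module as directional 1-D context convolutions, and the number of density classes are all illustrative assumptions; consult the linked repository for the real architecture.

```python
import torch
import torch.nn as nn

class DULR(nn.Module):
    """Hypothetical sketch of the DULR idea: aggregate context along the
    Down/Up and Left/Right directions with 1-D context convolutions.
    The exact design in PCC Net differs; this only illustrates the concept."""
    def __init__(self, channels):
        super().__init__()
        # vertical (down/up) and horizontal (left/right) context convolutions
        self.vert = nn.Conv2d(channels, channels, kernel_size=(9, 1), padding=(4, 0))
        self.horz = nn.Conv2d(channels, channels, kernel_size=(1, 9), padding=(0, 4))

    def forward(self, x):
        x = torch.relu(self.vert(x))  # down/up context
        x = torch.relu(self.horz(x))  # left/right context
        return x

class PCCNetSketch(nn.Module):
    """Toy three-branch layout mirroring the abstract: DME (density map),
    FBS (fore-/background segmentation), and R-HDC (coarse density class)."""
    def __init__(self, channels=32, n_density_classes=5):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            DULR(channels),
        )
        self.dme = nn.Conv2d(channels, 1, 1)   # per-pixel density map head
        self.fbs = nn.Conv2d(channels, 2, 1)   # fore-/background segmentation head
        self.rhdc = nn.Sequential(             # global coarse density-class head
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, n_density_classes),
        )

    def forward(self, x):
        f = self.backbone(x)
        return self.dme(f), self.fbs(f), self.rhdc(f)

model = PCCNetSketch()
img = torch.randn(1, 3, 64, 64)
density, seg, cls = model(img)
```

The shared backbone lets the mid-level (FBS) and global (R-HDC) supervision regularize the very local DME features, which is the multi-task intuition the abstract describes.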
