Abstract

In real-world scenarios (i.e., in the wild), pedestrians are often far from the camera (i.e., small scale), and they often gather together and occlude one another (i.e., heavy occlusion). However, detecting these small-scale and heavily occluded pedestrians remains a challenging problem for existing pedestrian detection methods. We argue that these problems arise from two factors: 1) insufficient resolution of feature maps for handling small-scale pedestrians and 2) the lack of an effective strategy for extracting body part information that can directly deal with occlusion. To address these problems, in this article we propose a key-point-guided super-resolution network (termed KGSNet) for detecting small-scale and heavily occluded pedestrians in the wild. Specifically, to address factor 1), a super-resolution network is first trained to generate a clear super-resolved pedestrian image from a small-scale one. In this network, key points of the human body guide the recovery of fine details in the human body region, making pedestrians easier to detect. To address factor 2), a part estimation module is proposed to encode the semantic information of different human body parts, where four semantic body parts (i.e., head and upper/middle/bottom body) are extracted based on the key points. Finally, based on the super-resolved pedestrian patches, padded at the image level with the extracted semantic body part images, a classification network is trained to further distinguish pedestrians from backgrounds among the input proposal regions. The two networks (i.e., the super-resolution network and the classification network) are optimized in an alternating manner and trained in an end-to-end fashion. Extensive experiments on the challenging CityPersons data set demonstrate the effectiveness of the proposed method, which achieves superior performance over previous state-of-the-art methods, especially on small-scale and heavily occluded instances. Beyond this, we also achieve state-of-the-art performance (i.e., 3.89% MR⁻² on the reasonable subset) on the Caltech data set.
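
To make the two-stage pipeline concrete, the following is a minimal PyTorch sketch of the idea summarized above: a super-resolution network that upscales a small proposal patch under key-point guidance, a stand-in for the part estimation module, and a classification network that separates pedestrians from background. All module names, layer sizes, the heatmap encoding of the key points, and the fixed-band part cropping are illustrative assumptions; they do not reproduce the paper's actual KGSNet architecture or its alternating training scheme.

    import torch
    import torch.nn as nn


    class SuperResolutionNet(nn.Module):
        """Upscales a small-scale pedestrian patch. A key-point heatmap is
        concatenated to the RGB input so the network can focus its capacity
        on the human body region."""

        def __init__(self, scale=4):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(3 + 1, 64, 3, padding=1),   # RGB + key-point heatmap
                nn.ReLU(inplace=True),
                nn.Conv2d(64, 64, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(64, 3 * scale ** 2, 3, padding=1),
            )
            self.shuffle = nn.PixelShuffle(scale)     # sub-pixel upsampling

        def forward(self, lr_patch, kp_heatmap):
            x = torch.cat([lr_patch, kp_heatmap], dim=1)
            return self.shuffle(self.body(x))


    def crop_body_parts(sr_patch):
        """Toy stand-in for the part estimation module: splits the patch into
        four horizontal bands (head / upper / middle / bottom body). The paper
        derives these regions from predicted key points instead."""
        h = sr_patch.shape[2]
        bounds = [0, h // 4, h // 2, 3 * h // 4, h]
        return [sr_patch[:, :, bounds[i]:bounds[i + 1], :] for i in range(4)]


    class ClassificationNet(nn.Module):
        """Scores a super-resolved patch as pedestrian vs. background. In the
        paper the input is additionally padded with the part crops; here only
        the full patch is scored, to keep the sketch short."""

        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.AdaptiveAvgPool2d(1),
            )
            self.fc = nn.Linear(32, 2)

        def forward(self, x):
            return self.fc(self.features(x).flatten(1))


    if __name__ == "__main__":
        sr_net, cls_net = SuperResolutionNet(), ClassificationNet()
        lr = torch.randn(1, 3, 32, 16)   # small-scale proposal patch
        kp = torch.rand(1, 1, 32, 16)    # key-point guidance heatmap
        sr = sr_net(lr, kp)              # -> (1, 3, 128, 64)
        parts = crop_body_parts(sr)      # four semantic body-part crops
        logits = cls_net(sr)             # pedestrian / background logits
        print(sr.shape, [tuple(p.shape) for p in parts], logits.shape)

In the full method, the super-resolution and classification stages are trained jointly and alternately rather than as the independent toy modules shown here.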
