Abstract

Pedestrian attribute recognition (PAR) aims to generate a structured description of pedestrians and plays an important role in surveillance. Current work focused on 2D images achieves decent performance when the captured pedestrian orientation does not vary. However, such methods degrade in realistic scenarios where pedestrian orientation changes but is ignored. To mitigate this problem, this paper proposes an orientation-aware pedestrian attribute recognition method based on a graph convolutional network (GCN), composed of an orientation-aware spatial attention (OSA) module and an orientation-guided attribute-relation learning (OAL) module. Since some attributes are invisible from certain orientations, OSA performs orientation-aware feature extraction to enhance the learned representation of visual attributes. Moreover, since different orientations yield different relations among attributes, OAL learns distinguishable and impactful attribute relations by eliminating the confusion among attribute relations across orientations. Experiments on three challenging datasets (PETA, RAP, and PA100K) demonstrate that the proposed method outperforms state-of-the-art methods by considerable margins.
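The abstract does not specify the model's details, but the core idea of the OAL module, that the attribute-relation graph fed to the GCN should depend on the pedestrian's orientation, can be loosely illustrated. The sketch below is a hypothetical NumPy toy, not the authors' implementation: the per-orientation adjacency matrices, the attribute count, and all function names are assumptions made for illustration only.

```python
import numpy as np

def normalize_adjacency(A):
    """Symmetric normalization D^{-1/2}(A + I)D^{-1/2}, the standard
    GCN preprocessing of an adjacency matrix."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def orientation_gcn_layer(X, adj_per_orientation, orientation, W):
    """One GCN propagation step in which the attribute-relation graph
    is selected by the detected orientation (illustrative only)."""
    A_hat = normalize_adjacency(adj_per_orientation[orientation])
    return np.maximum(A_hat @ X @ W, 0.0)  # ReLU activation

# Toy setup: 4 attribute nodes with 8-dim features, two orientations.
# Different adjacencies encode that attribute co-occurrence relations
# differ between front and back views (values here are made up).
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8))   # per-attribute node features
W = rng.standard_normal((8, 8))   # shared GCN weight matrix
adj = {
    "front": np.array([[0, 1, 1, 0],
                       [1, 0, 0, 0],
                       [1, 0, 0, 1],
                       [0, 0, 1, 0]], dtype=float),
    "back":  np.array([[0, 0, 1, 1],
                       [0, 0, 0, 1],
                       [1, 0, 0, 0],
                       [1, 1, 0, 0]], dtype=float),
}
H_front = orientation_gcn_layer(X, adj, "front", W)
H_back = orientation_gcn_layer(X, adj, "back", W)
```

Because the two orientations use different graphs, the same input features propagate differently, which is the intuition behind keeping attribute relations separated per orientation rather than averaged into one graph.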
