Abstract

In complex traffic scenes, accurate identification of pedestrian orientation can help drivers anticipate pedestrian trajectories and thus reduce traffic accidents. However, pedestrian orientation recognition still faces many challenges. First, due to the irregular appearance of pedestrians, it is difficult for general Convolutional Neural Networks (CNNs) to extract discriminative features. Second, features of body parts such as the head, arms, and legs are helpful for judging pedestrian orientation, but these parts are usually small and therefore not conducive to feature extraction. Therefore, in this work, we define pedestrian orientation with several discrete values and propose a Gated Graph Neural Network (GGNN)-based Graph Recurrent Attention Network (GRAN) to classify pedestrian orientation. The contributions are as follows: (1) We construct a body-parts graph consisting of the head, arms, and legs on the feature maps output by the CNN backbone. (2) We mine the dependencies between body parts on the graph via the proposed GRAN and use an encoder–decoder to propagate features among the graph nodes. (3) In this process, we propose an adjacency matrix with attention edge weights to dynamically represent the relationships between graph nodes; the edge weights are learned during network training. To evaluate the proposed method, we conduct experiments on three benchmarks (PDC, PDRD, and Cityscapes) with 8, 3, and 4 orientations, respectively. Note that the orientation labels for PDRD and Cityscapes were annotated by hand by us. The proposed method achieves 97%, 91%, and 90% classification accuracy on the three datasets, respectively, all higher than current state-of-the-art methods, which demonstrates the effectiveness of the proposed method.
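To illustrate the attention-weighted adjacency idea described in contribution (3), the sketch below shows one GGNN-style propagation step in which the edge weights between body-part nodes are produced by a learned attention function and the node update is gated. This is a minimal sketch assuming PyTorch; the class, layer, and variable names (AttentionGGNNLayer, msg, att, etc.) are hypothetical, and the paper's exact GRAN formulation may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionGGNNLayer(nn.Module):
    """One GGNN-style propagation step whose adjacency matrix carries
    learned attention edge weights instead of fixed 0/1 entries.
    (Illustrative sketch, not the authors' implementation.)"""

    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(dim, dim)        # message transform applied to sender features
        self.att = nn.Linear(2 * dim, 1)      # scores an edge from sender u to receiver v
        self.gru = nn.GRUCell(dim, dim)       # gated recurrent node update

    def forward(self, h):
        # h: (N, dim) node features, one node per body part (head, arms, legs, ...)
        n = h.size(0)
        hi = h.unsqueeze(1).expand(n, n, -1)  # receiver features, broadcast over senders
        hj = h.unsqueeze(0).expand(n, n, -1)  # sender features, broadcast over receivers
        scores = self.att(torch.cat([hi, hj], dim=-1)).squeeze(-1)  # (N, N) edge scores
        adj = F.softmax(scores, dim=-1)       # attention edge weights, each row sums to 1
        messages = adj @ self.msg(h)          # aggregate attention-weighted neighbour messages
        return self.gru(messages, h)          # gated update of each node's state

# Example usage: five hypothetical body-part features pooled from CNN feature maps.
parts = torch.randn(5, 256)
layer = AttentionGGNNLayer(256)
updated = layer(parts)                        # (5, 256) updated body-part features
```

Because the attention weights are produced by a trainable layer, the adjacency matrix is learned jointly with the rest of the network rather than fixed in advance, matching the dynamic edge-weight behaviour the abstract describes.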
