Abstract

Gait recognition is becoming one of the most promising methods for biometric authentication owing to its self-effacing nature. Contemporary approaches to joint-position-based gait recognition generally model gait features using spatio-temporal graphs, which are often prone to overfitting. To incorporate long-range relationships among joints, these methods utilize multi-scale operators. However, they fail to give equal importance to all joint combinations, resulting in an incomplete realization of long-range relationships between joints and important body parts. Furthermore, considering only joint coordinates may fail to capture the discriminative information provided by bone structure and motion. In this work, a novel multi-scale graph convolution approach, namely ‘GaitGCN++’, is proposed, which utilizes joint and bone information from individual frames and joint-motion information from consecutive frames, providing a comprehensive understanding of gait. An efficient hop-extraction technique is utilized to capture relationships between nearby and distant joints while avoiding redundant dependencies. Additionally, traditional graph convolution is enhanced by leveraging the ‘DropGraph’ regularization technique to avoid overfitting and ‘Part-wise Attention’ to identify the most important body parts over the gait sequence. On the benchmark gait recognition datasets CASIA-B and GREW, we outperform the state of the art in diversified and challenging scenarios.
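
To make the three input streams named above concrete, the following is a minimal NumPy sketch. It assumes a COCO-style 17-joint skeleton and the common conventions for skeleton-based models (a bone as the vector from a joint's parent to the joint, motion as the frame-to-frame joint displacement); the edge list, joint count, and preprocessing here are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

# Hypothetical skeleton layout: 17 COCO-style joints, each edge given as
# (child, parent). GaitGCN++'s actual graph may differ; this is only an
# illustrative assumption.
EDGES = [(1, 0), (2, 0), (3, 1), (4, 2), (5, 0), (6, 0),
         (7, 5), (8, 6), (9, 7), (10, 8), (11, 5), (12, 6),
         (13, 11), (14, 12), (15, 13), (16, 14)]

def build_streams(joints):
    """joints: (T, V, C) array of T frames, V joints, C coordinates.
    Returns the three streams described in the abstract: joint
    positions, per-frame bone vectors, and inter-frame joint motion."""
    # Bone stream: vector from each joint's parent to the joint,
    # capturing static bone structure within a frame.
    bones = np.zeros_like(joints)
    for child, parent in EDGES:
        bones[:, child] = joints[:, child] - joints[:, parent]

    # Motion stream: displacement of each joint between consecutive
    # frames (zero-padded at the first frame).
    motion = np.zeros_like(joints)
    motion[1:] = joints[1:] - joints[:-1]

    return joints, bones, motion

# Usage: a random 60-frame 2D skeleton sequence.
seq = np.random.rand(60, 17, 2).astype(np.float32)
j, b, m = build_streams(seq)
print(j.shape, b.shape, m.shape)  # (60, 17, 2) each
```

Since all three streams share the same (T, V, C) shape, they can be fed to parallel graph-convolution branches or stacked along the channel axis, which is one plausible way such multi-stream inputs are combined.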
