Abstract

Gait recognition is a non-invasive biometric technology that can be used to identify humans in surveillance systems. It is based on the style or manner in which a person walks and can be realized with a minimal amount of cooperation from the individual during acquisition. However, it poses many challenges in the form of varying viewpoints, carrying conditions, and clothing variations. To tackle these limitations, we present a view-invariant gait recognition network that divides the gait cycle into five gait-cycle segments (GCS). The intra-GCS convolutional spatio-temporal relationships are obtained by employing a 3D-CNN via a transfer learning mechanism. A stacked LSTM is then trained over the spatio-temporal features to learn the long- and short-term relationships between the GCS. The first step in our work is data pre-processing, in which we create a silhouette stereo map (SSM) from the binary silhouettes of the gait video frames and sample each video to a fixed 80 frames. These 80 SSM frames are divided into 5 GCS of 16 frames each. From each GCS, we extract spatio-temporal features using a pre-trained 3D-CNN. These features are concatenated temporally, and an LSTM cell is used to learn the long-term dependencies between the GCS. Finally, the required class scores are computed by averaging the outputs generated by the LSTM, which helps handle noise. The network is trained in an end-to-end fashion using a triplet loss function so as to learn the gait metric well using only hard triplets. All experiments are carried out on the publicly available CASIA-B and OU-ISIR gait datasets. The experimental results indicate that the proposed network performs better than current state-of-the-art gait recognition systems.
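The segmentation step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `split_into_gcs` and the silhouette resolution (64x44) are assumptions for the example; the 3D-CNN feature extractor, LSTM, and triplet-loss training are not reproduced here.

```python
import numpy as np

def split_into_gcs(frames, num_segments=5, seg_len=16):
    """Split a stack of sampled SSM frames into gait-cycle segments (GCS).

    The abstract samples each gait video to a fixed 80 frames and divides
    them into 5 segments of 16 frames each; each segment would then be fed
    to a pre-trained 3D-CNN for spatio-temporal feature extraction.
    """
    assert frames.shape[0] == num_segments * seg_len, "expected 80 frames"
    return [frames[i * seg_len:(i + 1) * seg_len] for i in range(num_segments)]

# 80 sampled SSM frames at an assumed 64x44 silhouette resolution.
video = np.zeros((80, 64, 44))
segments = split_into_gcs(video)
print(len(segments), segments[0].shape)  # 5 segments, each of 16 frames
```

Per-segment class scores produced downstream (by the LSTM) would then be averaged across the 5 segments to obtain the final prediction, which is the noise-handling step the abstract mentions.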
