Abstract

Existing video-based gait recognition methods must first decode the video into raw image frames. Moreover, gait recognition performance is frequently degraded by factors such as clothing, carrying conditions, and viewing-angle variations. In this paper, we present a novel perspective that uses the motion information already present in the compressed video as gait features. The proposed method yields gait silhouettes without semantic segmentation of the video and preserves more temporal motion information. To obtain more discriminative and effective features, we use SRM filters to extract local noise features from the gait residual silhouette images, and then construct a convolutional neural network to learn the distribution characteristics of the silhouette noise. Finally, the method is evaluated on the publicly available CASIA-B and CASIA-C gait datasets. Experiments show that, compared with existing methods, the proposed method reduces the time needed to obtain gait silhouette features by at least 65%, lowers storage consumption by an order of magnitude, and achieves an average recognition accuracy of 96.6%.
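As a rough illustration of the noise-feature extraction step mentioned above, the sketch below applies a single SRM-style high-pass kernel (the widely used 5x5 "KV" filter from the Spatial Rich Model family) to a residual silhouette image. The specific kernel, the truncation threshold, and the function name are assumptions for illustration only; the abstract does not specify the paper's actual filter bank.

```python
import numpy as np
from scipy.signal import convolve2d

# Assumed SRM high-pass kernel (the 5x5 "KV" filter); the paper's exact
# filter bank is not given in the abstract.
SRM_KV = (1.0 / 12.0) * np.array([
    [-1,  2,  -2,  2, -1],
    [ 2, -6,   8, -6,  2],
    [-2,  8, -12,  8, -2],
    [ 2, -6,   8, -6,  2],
    [-1,  2,  -2,  2, -1],
], dtype=np.float32)

def srm_noise_features(residual_silhouette: np.ndarray) -> np.ndarray:
    """Extract a local noise-residual map from a gait residual silhouette.

    residual_silhouette: 2-D grayscale array (H, W), e.g. a silhouette
    built from the motion information of a compressed video.
    Returns a same-sized noise map suitable as CNN input.
    """
    x = residual_silhouette.astype(np.float32)
    noise = convolve2d(x, SRM_KV, mode="same", boundary="symm")
    # Truncating extreme responses is common practice with SRM features;
    # the threshold of 3 is an assumed value.
    return np.clip(noise, -3.0, 3.0)

if __name__ == "__main__":
    dummy = np.random.randint(0, 256, size=(128, 88)).astype(np.float32)
    feats = srm_noise_features(dummy)
    print(feats.shape)  # (128, 88)
```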
