Abstract

Industrial advancements and large financial stakes are driving efforts to realize the goals of smart cities, whose key aim is to increase efficiency and citizens’ quality of life. Human Gait Recognition (HGR) is an important smart-city application in which a person is identified by their walking style. The main advantage of HGR is that an individual’s gait data can be acquired unobtrusively and at a distance, and it is used to mitigate security threats at airports, banks, embassies, and various corporate premises. In this paper, a deep learning method is proposed for HGR. The proposed methodology utilizes the large multi-view gait database CASIA-B, which comprises video sequences. The videos are first processed into frames, after which the pre-trained deep model DenseNet-201 is applied. Reusing this model for gait recognition through Transfer Learning (TL) yields the target model, from which features are extracted at the global average pooling layer. Feature extraction is the dominant step responsible for accurate gait recognition. Poor lighting, varying view angles, diverse clothing conditions, carrying luggage, and similar constraints adversely affect a system’s efficiency. To tackle this, two parallel feature-selection techniques are applied: an improved BAT algorithm (IBAT) and entropy-based selection. The selected features are then fused into a single matrix using Canonical Correlation Analysis (CCA). Final gait recognition is achieved by passing the fused features through a Softmax classifier. The proposed technique reaches 96.13% and 95.2% accuracy using all angles of the CASIA-B and CASIA-C datasets, respectively. The validity of this work is verified through comparison with existing methods.
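The following is a minimal, hypothetical sketch of the pipeline described above, assuming PyTorch/torchvision for the pre-trained DenseNet-201 backbone and scikit-learn for CCA fusion and the softmax classifier. The function names (extract_features, select_features, fuse_and_classify) are illustrative, and the IBAT and entropy selection steps are replaced here by simple stand-in scoring rules; the paper's actual selection algorithms are not reproduced.

```python
# Hypothetical sketch: DenseNet-201 global-average-pool features, two parallel
# (simplified) feature selections, CCA fusion, and a softmax classifier.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.cross_decomposition import CCA
from sklearn.linear_model import LogisticRegression  # multinomial (softmax) classifier

# Pre-trained DenseNet-201 reused as a fixed feature extractor (transfer learning).
backbone = models.densenet201(weights=models.DenseNet201_Weights.DEFAULT)
backbone.classifier = torch.nn.Identity()  # expose the 1920-d global-average-pool output
backbone.eval()

preprocess = T.Compose([
    T.ToTensor(),
    T.Resize((224, 224)),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_features(frames):
    """Return a (num_frames, 1920) array of deep features for a list of RGB frames."""
    with torch.no_grad():
        batch = torch.stack([preprocess(f) for f in frames])
        return backbone(batch).numpy()

def select_features(feats, keep, scheme="entropy"):
    """Stand-in for the two parallel selection steps (entropy-based / IBAT):
    keep the `keep` feature columns with the highest score."""
    if scheme == "entropy":
        p = np.abs(feats) / (np.abs(feats).sum(axis=0, keepdims=True) + 1e-12)
        score = -(p * np.log(p + 1e-12)).sum(axis=0)
    else:  # crude placeholder for the IBAT-selected subset
        score = feats.var(axis=0)
    idx = np.argsort(score)[-keep:]
    return feats[:, idx]

def fuse_and_classify(feats, labels, keep=256, n_components=64):
    """Fuse the two selected subsets with CCA and train a softmax classifier."""
    a = select_features(feats, keep, scheme="entropy")
    b = select_features(feats, keep, scheme="ibat")
    a_c, b_c = CCA(n_components=n_components).fit_transform(a, b)
    fused = np.hstack([a_c, b_c])
    clf = LogisticRegression(max_iter=1000).fit(fused, labels)
    return clf
```

In use, gait videos would first be split into frames (e.g., with OpenCV), passed through extract_features, and the resulting per-subject feature matrices fed to fuse_and_classify; n_components must not exceed the number of selected features or samples.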
