Gait can be used to recognize people in an uncooperative and noninvasive manner, and it is hard to imitate or counterfeit, which makes it well suited for video surveillance. However, current gait-recognition solutions remain fragile when the view angles of the gallery and query differ. We improve cross-view gait recognition from the perspective of metric learning. Specifically, we propose to use an angular softmax loss that imposes an angular margin to extract separable features, while a triplet loss makes the extracted features more discriminative. Additionally, we add a batch-normalization layer after the gait feature extractor so that the two different losses can be optimized jointly and effectively. We evaluate our approach on two widely used gait datasets: CASIA-B and TUM GAID. The experimental results show that our approach outperforms prior state-of-the-art methods, demonstrating its effectiveness.
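The loss combination sketched in the abstract (angular softmax for separability, triplet loss for discriminability, batch normalization between the feature extractor and the classifier) can be illustrated with a minimal NumPy sketch. All function names, the multiplicative-margin simplification of the angular softmax, and the hyperparameter values here are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    """Normalize features using batch statistics (simplified, no learned
    scale/shift); stands in for the BN layer placed after the extractor."""
    mu = x.mean(axis=0, keepdims=True)
    var = x.var(axis=0, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def angular_softmax_loss(features, weights, labels, m=4):
    """Simplified angular (A-)softmax: class weight columns are L2-normalized
    and the target-class angle theta is replaced by cos(m * theta), which
    enlarges the angular margin between classes. (The published A-Softmax
    uses a monotonic piecewise extension of cos(m*theta); this sketch omits it.)"""
    w = weights / np.linalg.norm(weights, axis=0, keepdims=True)
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    cos = np.clip(features @ w / norms, -1.0, 1.0)      # cos(theta) per class
    theta = np.arccos(cos)
    logits = norms * cos
    rows = np.arange(len(labels))
    logits[rows, labels] = norms[:, 0] * np.cos(m * theta[rows, labels])
    logits -= logits.max(axis=1, keepdims=True)         # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[rows, labels].mean()

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss: pull same-identity features together, push
    different-identity features at least `margin` farther away."""
    d_ap = np.linalg.norm(anchor - positive, axis=1)
    d_an = np.linalg.norm(anchor - negative, axis=1)
    return np.maximum(d_ap - d_an + margin, 0.0).mean()

# Example: features from a hypothetical gait extractor, then BN, then both losses.
rng = np.random.default_rng(0)
feats = batch_norm(rng.normal(size=(6, 8)))             # 6 samples, 8-dim features
class_w = rng.normal(size=(8, 4))                       # 4 identities
labels = np.array([0, 1, 2, 3, 0, 1])
total = angular_softmax_loss(feats, class_w, labels) + \
        triplet_loss(feats[:2], feats[4:6], feats[2:4])
```

The BN layer is what allows the two losses, which prefer differently distributed feature spaces, to be optimized on the same backbone without interfering with each other.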