Each individual exhibits unique patterns during their gait cycle. This information can be extracted from a live video stream and used for subject identification. Appearance-based recognition methods do this by tracking silhouettes of persons across gait cycles. In recent years, there has been a profusion of sensors that, in addition to RGB video, also provide depth data in real time. When such sensors are used for gait recognition, existing RGB appearance-based methods can be extended to achieve a substantial gain in recognition accuracy. In this paper, this is accomplished using information fusion techniques that combine the silhouette features used in traditional appearance-based methods with a height feature that can now be estimated from the depth data. The height is estimated during the silhouette extraction step at minimal additional computational cost. Two fusion approaches are proposed that can be easily implemented as extensions to existing appearance-based methods. An extensive experimental evaluation was performed to provide insight into how much the recognition accuracy can be improved. The results are presented and discussed for different types of subjects and for populations with different height distributions.
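The abstract does not specify the fusion rules used. As a rough illustration only, the sketch below shows one plausible scheme: weighted score-level fusion, where an appearance-based match score is combined with a Gaussian similarity on the height difference between probe and gallery subject. All function names, parameter values, and the example data are hypothetical, not taken from the paper.

```python
import numpy as np

def fused_score(silhouette_score, probe_height, gallery_height,
                height_sigma=0.05, weight=0.8):
    """Combine an appearance-based match score with a height cue.

    silhouette_score : similarity in [0, 1] from the appearance-based
        method (e.g. silhouette/gait-energy-image matching).
    probe_height, gallery_height : heights in metres; the probe height
        is assumed to come from depth data during silhouette extraction.
    height_sigma, weight : illustrative parameters, not from the paper.
    """
    # Gaussian similarity on the height difference: 1.0 for identical
    # heights, decaying as the difference grows relative to height_sigma.
    z = (probe_height - gallery_height) / height_sigma
    height_score = np.exp(-0.5 * z * z)
    # Weighted score-level fusion (one of several possible fusion rules).
    return weight * silhouette_score + (1.0 - weight) * height_score

# Identify the probe as the gallery subject with the highest fused score.
gallery = {"A": (0.62, 1.78), "B": (0.58, 1.65)}  # (silhouette score, height in m)
probe_height = 1.66
best = max(gallery, key=lambda s: fused_score(gallery[s][0], probe_height, gallery[s][1]))
print(best)  # -> "B": the height cue breaks the near-tie in appearance scores
```

In this toy example, the two silhouette scores are nearly tied, so the depth-derived height acts as the discriminating cue, which mirrors the kind of accuracy gain the abstract claims for fusing the two modalities.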