Abstract

In this paper, we study one-shot learning gesture recognition on RGB-D data recorded from Microsoft's Kinect. To this end, we propose a novel bag of manifold words (BoMW)-based feature representation on symmetric positive definite (SPD) manifolds. In particular, we use covariance matrices to extract local features from RGB-D data, owing to their compact representation and the convenience with which they fuse RGB and depth information. Since covariance matrices are SPD matrices and the space they span is an SPD manifold, traditional learning methods in Euclidean space, such as sparse coding, cannot be applied to them directly. To overcome this problem, we propose a unified framework that transfers sparse coding on SPD manifolds to sparse coding in Euclidean space, which enables any existing learning method to be used. After building the BoMW representation of a video from each gesture class, a nearest-neighbor classifier is adopted to perform one-shot learning gesture recognition. Experimental results on the ChaLearn gesture data set demonstrate the outstanding performance of the proposed one-shot learning gesture recognition method compared with state-of-the-art methods. The effectiveness of the proposed feature extraction method is also validated on a new RGB-D action recognition data set.
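The first two stages of the pipeline can be illustrated with a minimal sketch: compute a covariance descriptor from local RGB-D feature vectors, then flatten the SPD matrix into Euclidean space so that standard sparse coding applies. The sketch below uses a log-Euclidean embedding, which is one common way to map SPD matrices to a Euclidean space; the paper's exact transfer framework, the choice of per-pixel cues, and all names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def covariance_descriptor(features, eps=1e-6):
    """Covariance descriptor of local feature vectors.

    features: (n_samples, d) array of per-pixel cues, e.g. intensity,
    depth, and spatial gradients from both RGB and depth channels
    (the specific cues are an assumption for illustration).
    """
    C = np.cov(features, rowvar=False)
    # Small ridge keeps the matrix strictly positive definite (SPD).
    return C + eps * np.eye(C.shape[0])

def log_euclidean_embedding(C):
    """Map an SPD matrix into Euclidean space via the matrix logarithm.

    The upper triangle is vectorized, with off-diagonal entries scaled
    by sqrt(2) so Euclidean distances between embeddings match the
    log-Euclidean metric between the original SPD matrices.
    """
    w, V = np.linalg.eigh(C)          # eigendecomposition of SPD matrix
    logC = (V * np.log(w)) @ V.T      # matrix logarithm
    d = logC.shape[0]
    iu = np.triu_indices(d, k=1)
    return np.concatenate([np.diag(logC), np.sqrt(2.0) * logC[iu]])

# Toy usage: 500 local samples of a 7-dimensional cue vector.
rng = np.random.default_rng(0)
F = rng.standard_normal((500, 7))
C = covariance_descriptor(F)          # 7x7 SPD descriptor
v = log_euclidean_embedding(C)        # vector in R^{d(d+1)/2} = R^28
```

Once each local region is embedded as a Euclidean vector `v`, any off-the-shelf dictionary learning and sparse coding routine can build the bag-of-manifold-words histogram that the nearest-neighbor classifier consumes.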
