Abstract

A Dempster–Shafer theory approach based on multiple SVMs is proposed to handle multimodal gesture images for intention understanding, in which Sparse Coding (SC) of Speeded-Up Robust Features (SURF) is used for feature extraction from depth and RGB images. To address the small sample size, high dimensionality, and feature redundancy of image data, the SURF algorithm first extracts features from the original images, and sparse coding is then applied to them, so the images undergo two stages of dimensionality reduction. The dimensionally reduced gesture features are classified by multiple SVMs. A fusion framework based on D–S evidence theory combines the recognition results from the depth and RGB images to realize gesture intention understanding. To verify the effectiveness of the proposal, experiments were conducted on two RGB-D datasets (CGD2011 and CAD-60). The results of a 10-fold cross-validation test show that the recognition rates were higher than those produced by other methods when each sensor was considered individually. Preliminary experiments were also carried out on a developing emotional social robot system; the results indicate that the proposal can be applied to human–robot interaction.
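The fusion framework rests on Dempster's rule of combination, which merges the mass functions produced by the depth-image and RGB-image classifiers while renormalizing away conflicting mass. A minimal sketch follows; the gesture class names and mass values are illustrative assumptions, not figures from the paper:

```python
from itertools import product

def combine_dempster(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass)
    with Dempster's rule: mass assigned to conflicting (disjoint)
    hypotheses is discarded and the remainder renormalized."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {s: m / (1.0 - conflict) for s, m in combined.items()}

# Hypothetical masses from a depth-image SVM and an RGB-image SVM
# over two gesture classes; the uncommitted mass goes to the full set.
G1, G2 = frozenset({"wave"}), frozenset({"point"})
BOTH = G1 | G2
m_depth = {G1: 0.6, G2: 0.1, BOTH: 0.3}
m_rgb = {G1: 0.5, G2: 0.2, BOTH: 0.3}
fused = combine_dempster(m_depth, m_rgb)
```

Because both sensors lean toward the same class, the fused belief in it exceeds either source's individual mass, which is the behavior the paper exploits when combining depth and RGB recognition results.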
