Abstract

With the rapid development of Internet of Things technology, interaction between people and devices has become increasingly frequent, and interacting with machines through simple gestures rather than complex operations, together with the fusion of multi-source feature information, has gradually become a research hotspot. Because the depth image from the Kinect sensor lacks color information and segmentation based on it is sensitive to the choice of depth threshold, this paper proposes a gesture segmentation method that fuses color information with depth information. To preserve the complete information of the segmented image, a gesture feature extraction method based on the fusion of Hu invariant moments and HOG features is proposed, and by determining the optimal weight parameter the global and local features are effectively fused. Finally, an SVM classifier is used to classify and recognize the gestures. Experimental results show that the proposed fused-feature method achieves a higher gesture recognition rate and better robustness than traditional methods.
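The following is a minimal sketch of the feature-fusion and classification stage described above, assuming OpenCV and scikit-learn. It takes an already-segmented gesture (a grayscale patch and its binary mask), computes Hu invariant moments as the global descriptor and HOG as the local descriptor, and concatenates them with a weight parameter before training an SVM. The HOG window size, SVM kernel, and fusion weight w are illustrative assumptions, not values taken from the paper.

```python
import cv2
import numpy as np
from sklearn.svm import SVC


def hu_moment_features(binary_mask):
    """Global shape descriptor: the 7 Hu invariant moments, log-scaled for stability."""
    moments = cv2.moments(binary_mask)
    hu = cv2.HuMoments(moments).flatten()
    # Log transform compresses the large dynamic range of the raw Hu moments.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)


def hog_features(gray_patch):
    """Local edge/texture descriptor: HOG on a fixed-size gesture patch (size assumed)."""
    win = cv2.resize(gray_patch, (64, 64))
    hog = cv2.HOGDescriptor(_winSize=(64, 64), _blockSize=(16, 16),
                            _blockStride=(8, 8), _cellSize=(8, 8), _nbins=9)
    return hog.compute(win).flatten()


def fused_feature(gray_patch, binary_mask, w=0.5):
    """Weighted concatenation of global (Hu) and local (HOG) features.

    w is a hypothetical fusion weight; the paper tunes an optimal value
    experimentally, which is not given in the abstract.
    """
    hu = hu_moment_features(binary_mask)
    hog = hog_features(gray_patch)
    # Normalize each descriptor before weighting so neither dominates by scale.
    hu = hu / (np.linalg.norm(hu) + 1e-12)
    hog = hog / (np.linalg.norm(hog) + 1e-12)
    return np.concatenate([w * hu, (1.0 - w) * hog])


def train_classifier(samples, labels, w=0.5):
    """samples: list of (gray_patch, binary_mask) pairs from segmented gestures."""
    X = np.array([fused_feature(g, m, w) for g, m in samples])
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")  # kernel/C are assumptions
    clf.fit(X, labels)
    return clf
```

In practice the fusion weight w would be selected by cross-validating recognition accuracy over a grid of candidate values, which mirrors the "optimal weight parameter" determination mentioned in the abstract.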
