Abstract

This paper presents a 3D gesture interaction method based on a single RGB image. A real-time image-processing and estimation network evaluates two-dimensional and three-dimensional hand joint information from a single RGB image, and a gesture classifier then recognizes the semantics of the gesture to complete the interaction. First, the hand is localized in the RGB image; the two-dimensional and three-dimensional key joints of the hand are then predicted and estimated. The classifier merges three base classifiers through a voting classifier and achieves considerable classification accuracy. Because the interaction incorporates three-dimensional information, it supports more flexible and more realistic interaction.
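The voting ensemble described above can be sketched as follows. This is a minimal illustration only, not the paper's implementation: the three base classifiers and the synthetic 63-dimensional features (standing in for 21 hand joints × 3D coordinates) are assumptions for demonstration, using scikit-learn's `VotingClassifier`.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for hand-joint features: 21 joints x 3 coordinates = 63 dims.
# Four classes represent hypothetical gesture categories.
X, y = make_classification(n_samples=300, n_features=63, n_informative=10,
                           n_classes=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Merge three base classifiers with soft voting (average of predicted probabilities).
voter = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(random_state=0)),
        ("svc", SVC(probability=True, random_state=0)),
    ],
    voting="soft",
)
voter.fit(X_train, y_train)
print("test accuracy:", voter.score(X_test, y_test))
```

Soft voting averages class probabilities across the base models, which typically smooths out individual classifiers' errors; hard voting (majority of predicted labels) is the simpler alternative.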
