Abstract

In this paper, we present a new approach to dynamic hand gesture recognition that uses the intensity, depth, and skeleton-joint data captured by a Kinect sensor. The method integrates global and local information about a dynamic gesture. First, we represent the 3D skeleton trajectory in spherical coordinates. We then select the most relevant points in the hand trajectory using our proposed keyframe-detection method, and describe the joint movements with spatial, temporal, and hand-position-change information. Next, we apply the definition of direction cosines to describe body positions, generating histograms of cumulative magnitudes from the depth data, which are converted into a point cloud. We evaluate our approach on several public gesture datasets and on a sign language dataset that we created. Our results outperform state-of-the-art methods and highlight the smooth and fast feature-extraction process, which can be implemented in real time.
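Two of the building blocks mentioned above, the spherical-coordinate representation of the skeleton trajectory and the direction-cosine description of body positions, can be sketched as follows. This is an illustrative sketch only; the paper's exact conventions (axis order, reference origin, normalisation) are not specified in the abstract, so the function names and choices below are assumptions.

```python
import numpy as np

def to_spherical(points):
    """Convert Nx3 Cartesian joint positions to (r, theta, phi).

    r is the radial distance, theta the polar angle from the z-axis,
    and phi the azimuth in the x-y plane. Conventions are assumed.
    """
    points = np.asarray(points, dtype=float)
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)
    # Guard against division by zero at the origin.
    theta = np.arccos(np.divide(z, r, out=np.zeros_like(r), where=r > 0))
    phi = np.arctan2(y, x)
    return np.stack([r, theta, phi], axis=1)

def direction_cosines(points):
    """Direction cosines (x/r, y/r, z/r) of each point w.r.t. the origin."""
    points = np.asarray(points, dtype=float)
    r = np.linalg.norm(points, axis=1, keepdims=True)
    return np.divide(points, r, out=np.zeros_like(points), where=r > 0)
```

In the paper's pipeline, histograms would then be accumulated over the direction-cosine values of the point cloud; the binning scheme is not given in the abstract.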
