Abstract

Sign language is a visual language used by deaf people. One difficulty of sign language recognition is that sign instances vary in both motion and shape in three-dimensional (3D) space. In this research, we use 3D depth information on hand motions, captured with Microsoft's Kinect sensor, and apply a hierarchical conditional random field (CRF) that recognizes hand signs from the hand motions. The proposed method uses the hierarchical CRF to detect candidate sign segments from the hand motions, and then a BoostMap embedding method to verify the hand shapes of the segmented signs. Experiments demonstrated that the proposed method could recognize signs from signed sentence data at a rate of 90.4%.
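The abstract describes a two-stage design: a motion-based CRF proposes candidate sign segments, and a shape-based verifier (a BoostMap-style embedding) accepts or rejects each candidate. The Python sketch below illustrates only that control flow; every class and function name is a hypothetical placeholder, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import List, Optional, Sequence


@dataclass
class Segment:
    start: int   # index of the first frame of the candidate sign
    end: int     # index of the last frame (inclusive)
    label: str   # sign label proposed by the motion-based CRF


class MotionCRF:
    """Placeholder for a hierarchical CRF over per-frame hand-motion features."""

    def predict(self, motion_features: Sequence[Sequence[float]]) -> List[str]:
        # A trained model would return one label per frame (a sign label or a
        # non-sign/transition label). This stub labels every frame "none".
        return ["none"] * len(motion_features)


class ShapeVerifier:
    """Placeholder for BoostMap-style hand-shape verification."""

    def verify(self, hand_shapes: Sequence[Sequence[float]], label: str) -> bool:
        # A real verifier would embed the hand shapes and compare them against
        # exemplars of `label`; this stub accepts every candidate.
        return True


def frames_to_segments(frame_labels: List[str]) -> List[Segment]:
    """Group consecutive frames that share the same non-"none" label."""
    segments: List[Segment] = []
    start: Optional[int] = None
    current = "none"
    for i, lab in enumerate(frame_labels + ["none"]):   # sentinel closes the last run
        if start is not None and lab != current:
            segments.append(Segment(start, i - 1, current))
            start = None
        if lab != "none" and start is None:
            start, current = i, lab
    return segments


def recognize(motion_features, shape_features, crf: MotionCRF, verifier: ShapeVerifier):
    """Stage 1: the CRF proposes candidate segments; stage 2: shape verification filters them."""
    candidates = frames_to_segments(crf.predict(motion_features))
    return [s for s in candidates
            if verifier.verify(shape_features[s.start:s.end + 1], s.label)]
```

In this sketch the verification stage can only remove candidates proposed by the motion stage, which mirrors the detect-then-verify structure stated in the abstract.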

Highlights

  • Sign language is a visual language used by deaf people, which consists of two types of action: signs and finger spellings

  • Signs are dynamic gestures characterized by continuous hand motions and hand configurations, while finger spellings are static postures discriminated by a combination of continuous hand configurations [1,2,3]

  • The term “gesture” means that the character is performed with hand motions, while “posture” refers to a character that can be described with a static hand configuration [4]


Summary

Introduction

Sign language is a visual language used by deaf people, which consists of two types of action: signs and finger spellings. Ren et al. researched a robust hand gesture recognition system using a Kinect sensor [5]. They proposed a modified Finger-Earth Mover's Distance (FEMD) metric in order to distinguish noisy hand shapes obtained from the Kinect sensor, and achieved a mean accuracy of 93.2% on a 10-gesture dataset. Chai et al. proposed a sign language recognition and translation system based on 3D trajectory matching algorithms in order to connect the hearing-impaired community with non-hearing-impaired people [13]. They extracted 3D trajectories of hand motions using the Kinect and collected a total of …. In the proposed method, the BoostMap embedding method is used to verify the hand shapes of the segmented signs.
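The introduction contrasts shape-based recognition (FEMD on noisy Kinect hand shapes) with trajectory-based recognition. As a purely illustrative sketch, the snippet below converts a 3D hand trajectory (such as one tracked from Kinect depth frames) into simple per-frame motion features, speed and direction; this feature choice is an assumption for illustration, not the feature set used by either cited system.

```python
import numpy as np


def motion_features(trajectory: np.ndarray, fps: float = 30.0) -> np.ndarray:
    """trajectory: (T, 3) hand positions in metres -> (T-1, 4) per-frame features."""
    deltas = np.diff(trajectory, axis=0)                 # frame-to-frame displacement
    norms = np.linalg.norm(deltas, axis=1, keepdims=True)
    speed = norms[:, 0] * fps                            # metres per second
    directions = deltas / np.maximum(norms, 1e-8)        # unit direction vectors
    return np.hstack([speed[:, None], directions])       # [speed, dx, dy, dz]


# Example: a short synthetic trajectory moving along the x-axis.
traj = np.array([[0.00, 0.0, 1.0],
                 [0.02, 0.0, 1.0],
                 [0.05, 0.0, 1.0]])
print(motion_features(traj))   # two feature rows for three frames
```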

Face and Hand Detection
Feature Extraction
CRF-Based Sign Language Recognition
Shape-Based Sign Language Verification
Experimental Environment
Sign Language Recognition with Continuous Data
Conclusions and Further Research
