Abstract

A sign language recognition system translates signs performed by deaf individuals into text or speech in real time. Inertial measurement units (IMUs) and surface electromyography (sEMG) are both useful modalities for detecting hand and arm gestures; because the two capture complementary information about a sign, fusing them can enhance system performance. In this paper, a wearable system for recognizing American Sign Language (ASL) in real time is proposed, fusing information from an inertial sensor and sEMG sensors. An information gain-based feature selection scheme is used to select the best subset from a broad range of well-established features. Four popular classification algorithms are evaluated on 80 commonly used ASL signs performed by four subjects. With the selected feature subset and a support vector machine classifier, the experimental results show average accuracies of 96.16% for intra-subject evaluation and 85.24% for intra-subject cross-session evaluation. The significance of adding sEMG for ASL recognition is explored and the best sEMG channel is identified.
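The information gain-based feature selection mentioned above can be sketched as follows. This is a minimal, self-contained illustration assuming discretized feature values; the paper's actual features, discretization, and sensor data are not reproduced here, and all names in the snippet are hypothetical.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H(Y) of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """IG(Y; X) = H(Y) - H(Y | X) for one discrete feature column."""
    n = len(labels)
    conditional = 0.0
    for v in set(feature_values):
        subset = [y for x, y in zip(feature_values, labels) if x == v]
        conditional += (len(subset) / n) * entropy(subset)
    return entropy(labels) - conditional

def select_top_k(X, y, k):
    """Rank feature columns of X (list of rows) by information gain
    against labels y, and return the indices of the top k."""
    n_features = len(X[0])
    scored = [
        (information_gain([row[j] for row in X], y), j)
        for j in range(n_features)
    ]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [j for _, j in scored[:k]]
```

For example, on a toy dataset where feature 0 perfectly separates the classes and feature 1 is constant, `select_top_k` keeps feature 0. The selected feature subset would then be fed to a classifier such as a support vector machine, as in the paper.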
