Abstract

Automatic sign language recognition is an important milestone in facilitating communication between the deaf community and hearing people. Existing approaches are either intrusive or susceptible to ambient environments and user diversity. Moreover, most of them perform only isolated word recognition, not sentence-level sequence translation. In this paper, we present MyoSign, a deep learning-based system that enables end-to-end American Sign Language (ASL) recognition at both the word and sentence levels. We leverage a lightweight wearable device that provides inertial and electromyography signals to capture signs non-intrusively. First, we propose a multimodal Convolutional Neural Network (CNN) to abstract representations from inputs of different sensory modalities. Then, a bidirectional Long Short Term Memory (LSTM) is exploited to model temporal dependencies. On top of these networks, we employ Connectionist Temporal Classification (CTC) to sidestep explicit temporal segmentation and achieve end-to-end continuous sign language recognition. We evaluate MyoSign on 70 commonly used ASL words and 100 ASL sentences from 15 volunteers. Our system achieves an average accuracy of 93.7% at the word level and 93.1% at the sentence level in user-independent settings. In addition, MyoSign can recognize sentences unseen in the training set with 92.4% accuracy. These encouraging results indicate that MyoSign can be a meaningful step forward in the advancement of sign language recognition.
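As a rough illustration of the pipeline described above (multimodal CNN, bidirectional LSTM, CTC), the following PyTorch sketch wires these pieces together. It is a minimal sketch under stated assumptions: the channel counts, kernel sizes, hidden sizes, and vocabulary size are placeholders chosen for illustration, not the configuration reported in the paper.

```python
import torch
import torch.nn as nn

class MyoSignSketch(nn.Module):
    """Illustrative multimodal CNN -> BiLSTM -> CTC pipeline.

    All layer sizes below are placeholder assumptions, not the
    architecture parameters used by MyoSign itself.
    """
    def __init__(self, imu_channels=6, emg_channels=8, num_classes=71):
        super().__init__()
        # Per-modality 1-D CNNs abstract features from the raw signal windows.
        self.imu_cnn = nn.Sequential(
            nn.Conv1d(imu_channels, 64, kernel_size=5, padding=2), nn.ReLU())
        self.emg_cnn = nn.Sequential(
            nn.Conv1d(emg_channels, 64, kernel_size=5, padding=2), nn.ReLU())
        # A bidirectional LSTM models temporal dependencies over fused features.
        self.bilstm = nn.LSTM(input_size=128, hidden_size=128,
                              bidirectional=True, batch_first=True)
        # Per-frame class scores, including the CTC blank label at index 0.
        self.classifier = nn.Linear(256, num_classes)

    def forward(self, imu, emg):
        # imu: (batch, imu_channels, time); emg: (batch, emg_channels, time)
        feats = torch.cat([self.imu_cnn(imu), self.emg_cnn(emg)], dim=1)
        feats = feats.transpose(1, 2)              # -> (batch, time, features)
        hidden, _ = self.bilstm(feats)
        return self.classifier(hidden).log_softmax(dim=-1)

# CTC training removes the need to segment the continuous sign stream by hand.
model = MyoSignSketch()
ctc_loss = nn.CTCLoss(blank=0)

imu = torch.randn(2, 6, 100)                       # dummy inertial windows
emg = torch.randn(2, 8, 100)                       # dummy EMG windows
log_probs = model(imu, emg).transpose(0, 1)        # (time, batch, classes) for CTCLoss
targets = torch.randint(1, 71, (2, 5))             # dummy word-label sequences
input_lengths = torch.full((2,), 100, dtype=torch.long)
target_lengths = torch.full((2,), 5, dtype=torch.long)
loss = ctc_loss(log_probs, targets, input_lengths, target_lengths)
```

At inference time, a greedy or beam-search decoder over the per-frame log-probabilities (collapsing repeats and removing blanks) would yield the recognized word sequence, which is how CTC enables sentence-level output without explicit segmentation.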
