Abstract

Sign language relies predominantly on hand postures and hand motions for visual communication. However, most hearing people are unfamiliar with signing, so signers often depend on sign language interpreters. In this chapter, a novel multi-input deep learning model is proposed for end-to-end recognition of 50 common signs from Indian Sign Language (ISL). The ISL dataset is developed using multiple wearable sensors on the dominant hand that record surface electromyogram (sEMG), tri-axial accelerometer, and tri-axial gyroscope signals. Multi-channel data from these three modalities is processed in a multi-input deep neural network with stacked convolutional neural network (CNN) and long short-term memory (LSTM) layers. The performance of the proposed multi-input CNN-LSTM model is compared with the traditional single-input approach in terms of quantitative performance measures, and the multi-input approach yields an approximately 5% improvement in classification accuracy over the single-input baseline.
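The abstract does not specify layer counts, channel configurations, or window lengths, so the following is only a minimal sketch of the described architecture: one CNN-LSTM branch per sensor modality, fused by concatenation before a 50-class output. The sEMG channel count (8), window length (200 samples), and all hyperparameters below are illustrative assumptions, not values from the paper.

```python
# Hypothetical multi-input CNN-LSTM sketch in PyTorch; all sizes are assumed.
import torch
import torch.nn as nn

class ModalityBranch(nn.Module):
    """Stacked 1-D CNN feature extractor followed by an LSTM for one modality."""
    def __init__(self, in_channels, conv_channels=32, lstm_hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, conv_channels, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(conv_channels, conv_channels, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(conv_channels, lstm_hidden, batch_first=True)

    def forward(self, x):          # x: (batch, channels, time)
        f = self.conv(x)           # (batch, conv_channels, time / 4)
        f = f.transpose(1, 2)      # LSTM expects (batch, time, features)
        _, (h, _) = self.lstm(f)   # keep the final hidden state
        return h[-1]               # (batch, lstm_hidden)

class MultiInputSignNet(nn.Module):
    """Three modality branches fused by concatenation for 50 sign classes."""
    def __init__(self, num_classes=50):
        super().__init__()
        self.emg = ModalityBranch(in_channels=8)  # assumed sEMG channel count
        self.acc = ModalityBranch(in_channels=3)  # tri-axial accelerometer
        self.gyr = ModalityBranch(in_channels=3)  # tri-axial gyroscope
        self.head = nn.Linear(64 * 3, num_classes)

    def forward(self, emg, acc, gyr):
        fused = torch.cat([self.emg(emg), self.acc(acc), self.gyr(gyr)], dim=1)
        return self.head(fused)    # raw logits; apply softmax for probabilities

model = MultiInputSignNet()
logits = model(torch.randn(4, 8, 200),   # batch of 4 sEMG windows
               torch.randn(4, 3, 200),   # accelerometer windows
               torch.randn(4, 3, 200))   # gyroscope windows
print(logits.shape)                      # torch.Size([4, 50])
```

In contrast, the traditional single-input approach the paper compares against would stack all fourteen channels into one tensor and pass it through a single CNN-LSTM branch; the reported gain comes from letting each modality learn its own feature extractor before fusion.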
