Abstract

Research on the automatic translation of sign language into spoken languages has been actively explored in recent years to help people with speech and hearing impairments communicate with non-signers. In this paper, a tiny machine learning (TinyML) solution is proposed for sign language recognition using a low-cost, wearable, internet-of-things (IoT) device. A lightweight deep neural network is deployed on the edge device to interpret isolated signs from Indian Sign Language using the time-series data collected from the device's motion sensors. The scarcity of labeled training data is addressed through deep transfer learning: knowledge gained from data collected with the motion sensors of a different device is used to initialize the model parameters. The performance of the model is assessed in terms of classification accuracy and prediction time for different sampling rates and transfer schemes. The model achieves an average accuracy of 87.18% when all the parameters are retrained with just 4 observations of each sign recorded from the motion sensors of the proposed IoT device. The recognized sign is transmitted to a cloud platform in real time. A mobile application, SignTalk, is also developed; it wirelessly receives the predicted signs from the cloud and displays them as text. Text-to-speech conversion is also provided in SignTalk to vocalize the predicted sign for better communication.
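To make the transfer-learning scheme concrete, the sketch below shows one plausible realization of the pipeline the abstract describes: a lightweight network pretrained on the source device's motion-sensor data, fully retrained on the few target-device recordings, and then converted for on-device inference. The architecture, tensor shapes, class count, and file names are illustrative assumptions; the abstract does not specify the paper's actual network.

```python
# Minimal sketch of the described deep-transfer-learning step (assumptions:
# a small 1D-CNN, 6 inertial channels, and placeholder file names).
import numpy as np
import tensorflow as tf
from tensorflow import keras

NUM_SIGNS = 10   # assumed number of isolated ISL signs
TIMESTEPS = 100  # assumed window length at the chosen sampling rate
CHANNELS = 6     # e.g. 3-axis accelerometer + 3-axis gyroscope

def build_model() -> keras.Model:
    """Lightweight 1D-CNN small enough for TinyML deployment (illustrative)."""
    return keras.Sequential([
        keras.layers.Input(shape=(TIMESTEPS, CHANNELS)),
        keras.layers.Conv1D(16, 5, activation="relu"),
        keras.layers.MaxPooling1D(2),
        keras.layers.Conv1D(32, 5, activation="relu"),
        keras.layers.GlobalAveragePooling1D(),
        keras.layers.Dense(NUM_SIGNS, activation="softmax"),
    ])

# 1. Initialize from weights learned on the *source* device's sensor data.
model = build_model()
model.load_weights("source_device_pretrained.weights.h5")  # assumed file

# 2. Retrain ALL parameters (the scheme that reached 87.18% in the paper)
#    on the few labeled target-device recordings (~4 observations per sign).
x_target = np.load("target_windows.npy")  # placeholder target-device data
y_target = np.load("target_labels.npy")
model.compile(optimizer=keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_target, y_target, epochs=50, batch_size=4)

# 3. Convert to TensorFlow Lite for edge (on-device) inference.
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
with open("sign_model.tflite", "wb") as f:
    f.write(tflite_model)
```

Retraining all layers, rather than only the classifier head, matches the best-performing transfer scheme reported in the abstract; freezing the convolutional layers would be the natural alternative when target data is even scarcer.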
