Abstract

This paper presents the design and construction of a cost-effective, portable sign language to speech translator. The system is designed to work with American Sign Language fingerspelling (which facilitates translation into any Latin alphabet-based language) and aims to translate a pangram into speech. The system consists of three subsystems, namely gesture detection, gesture classification, and text-to-speech, and relies on a power bank for its power supply. The gesture detection subsystem uses five flex sensors, one on each finger of a glove, together with a three-axis accelerometer. Gesture classification is achieved through a supervised machine learning approach; five different algorithms are compared to determine the best configuration for this system. Overall, a support vector machine with a radial basis function kernel and a penalty parameter of 10.0 performs best in the context of this investigation. Using the best-case sensor configuration and the best-performing machine learning classifier, the system achieves a practical repeatability of 85.51%. Text-to-speech translation of the classified gestures is performed using the eSpeak engine. Overall, the system costs under $35, substantially less than the similar systems investigated, which cost at least $100. The final system meets the initial goals of sign language to speech translation, portability, and cost-effectiveness.
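The sketch below illustrates, under stated assumptions, the kind of pipeline the abstract describes: eight sensor features (five flex sensors plus three accelerometer axes) classified by a support vector machine with a radial basis function kernel and penalty parameter C = 10.0, with the predicted letter spoken via the eSpeak engine. The data, feature layout, and the subprocess call to the `espeak` command-line tool are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed, not the authors' code): RBF-kernel SVM with C = 10.0
# over glove sensor readings, followed by eSpeak output of the predicted letter.
import subprocess

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Hypothetical dataset: each sample has 5 flex-sensor readings + 3 accelerometer axes.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(260, 8))                      # placeholder normalised readings
y = rng.choice(list("abcdefghijklmnopqrstuvwxyz"), size=260)  # placeholder fingerspelled labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Classifier configuration reported in the abstract: RBF kernel, penalty parameter 10.0.
clf = SVC(kernel="rbf", C=10.0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))

# Speak one classified letter with eSpeak (assumes the `espeak` CLI is installed).
letter = clf.predict(X_test[:1])[0]
subprocess.run(["espeak", letter], check=True)
```

On real glove data the feature vectors would be streamed from the sensors and the held-out accuracy would correspond to the repeatability figure reported in the paper; with the random placeholder data above the score is meaningless and serves only to show the pipeline shape.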
