Abstract

Around 70 million people worldwide are mute, and many children suffer from nonverbal autism. Communication between people with speech impairment and others is difficult: people with speech impairment typically use sign language to communicate, but not everyone can understand it. In this paper, a prototype is proposed that produces speech output for sign language gestures, bridging the communication gap between people with speech impairment and others. The prototype consists of a glove with flex sensors, gyroscopes, and accelerometers embedded on it; these sensors capture the gestures made by the user in real time. An Arduino Nano microcontroller collects the data from these sensors and sends it to a PC via Bluetooth. The PC processes the data sent by the Arduino and runs a machine learning algorithm to classify the sign language gestures, predicting the word associated with each gesture; a Support Vector Machine (SVM) is used for classification. The prototype is very compact and can recognize both American Sign Language (ASL) and Indian Sign Language (ISL), so it not only gives speech to mute users but also makes them multilingual.
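The classification step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes each gesture sample is flattened into a fixed-length feature vector (here, hypothetically, 5 flex-sensor values plus 3-axis accelerometer and 3-axis gyroscope readings), and it uses synthetic stand-in data since the paper's dataset is not available. The abstract names SVM but not the kernel; an RBF kernel is a common default.

```python
# Hedged sketch: SVM classification of glove sensor feature vectors.
# Feature layout (assumed, not from the paper): 5 flex + 3 accel + 3 gyro = 11.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in data: two well-separated gesture classes,
# 40 samples each, 11 features per sample.
n_per_class, n_features = 40, 11
X = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(n_per_class, n_features)),
    rng.normal(loc=3.0, scale=1.0, size=(n_per_class, n_features)),
])
y = np.array([0] * n_per_class + [1] * n_per_class)  # e.g. 0="hello", 1="thanks"

# Scale the features, then fit an RBF-kernel SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)

# Classify a new sensor reading; the predicted class index would then be
# mapped to a word and passed to a text-to-speech engine on the PC.
sample = np.full((1, n_features), 3.0)
predicted_word_index = clf.predict(sample)[0]
print(predicted_word_index)
```

In a real system the feature vector would come from the Bluetooth serial stream rather than synthetic arrays, and the label set would cover the full ASL/ISL gesture vocabulary.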
