Abstract

People who are deaf or have difficulty speaking use sign language, which consists of hand gestures with particular motions that represent the “language” being communicated. A gesture in a sign language is a particular movement of the hands with a specific shape formed by the fingers and the whole hand. In this paper, we present an Intelligent approach for Deaf/Dumb People in real time based on Deep Learning using Gloves (IDLG). The IDLG approach offers scientific contributions in the form of deep-learning-based multi-mode command techniques, real-time and effective operation, and high accuracy rates. For this purpose, smart gloves operating in real time were designed. The data obtained from the gloves was processed using deep-learning-based approaches and classified into multi-mode commands that allow dumb people to speak with other people via their smartphones. Internally, each glove has five flex sensors and an accelerometer, used to achieve a low-cost control system. Each flex sensor generates a change in resistance proportional to each individual movement. These hand gestures are processed on an ATmega32A microcontroller, an advanced version of the ATmega32, together with the LabVIEW software. IDLG compares the input signal to specified voltage values stored in memory. The performance of the IDLG approach was verified on a dataset created from different hand gestures performed by 20 different people. In tests of the IDLG approach on 10,000 data points, processing times on the order of milliseconds were achieved with 97% accuracy.
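As a concrete illustration of the sensing path described above, the sketch below shows how such a flex-sensor bank might be read and thresholded on an ATmega32A with avr-gcc: each flex sensor sits in a voltage divider, its bend-dependent resistance shifts the divider voltage, and the ADC reading is compared against values stored in memory. The channel assignments, threshold values, and the simple bit-mask gesture code are illustrative assumptions, not the paper's actual calibration or classifier.

    #include <avr/io.h>
    #include <stdint.h>

    #define NUM_FLEX 5

    /* Hypothetical per-finger bend thresholds in ADC counts; the
     * paper's actual stored voltage values are not given. */
    static const uint16_t bend_threshold[NUM_FLEX] = {512, 530, 500, 520, 540};

    static void adc_init(void)
    {
        ADMUX  = (1 << REFS0);                  /* AVcc as ADC reference */
        ADCSRA = (1 << ADEN)                    /* enable the ADC */
               | (1 << ADPS2) | (1 << ADPS1) | (1 << ADPS0); /* clock/128 */
    }

    static uint16_t adc_read(uint8_t channel)
    {
        ADMUX = (ADMUX & 0xE0) | (channel & 0x1F); /* select input channel */
        ADCSRA |= (1 << ADSC);                     /* start one conversion */
        while (ADCSRA & (1 << ADSC))               /* wait until it finishes */
            ;
        return ADC;                                /* 10-bit result */
    }

    int main(void)
    {
        adc_init();
        for (;;) {
            uint8_t gesture_bits = 0;
            /* Assume flex sensors are wired to ADC0..ADC4: bending a
             * finger raises the divider voltage past its threshold. */
            for (uint8_t i = 0; i < NUM_FLEX; i++) {
                if (adc_read(i) > bend_threshold[i])
                    gesture_bits |= (1 << i);      /* mark finger i as bent */
            }
            /* gesture_bits would then be matched against stored gesture
             * codes and forwarded (e.g., over UART) to the deep-learning
             * classifier on the paired device. */
        }
    }

In this sketch the microcontroller only performs the threshold comparison; combining the five finger bits with accelerometer readings and classifying the result would happen in the deep-learning stage described in the paper.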
